
Journal of Siberian Federal University. Engineering & Technologies, 2016, 9(4), 462-469
UDC 623.396
Modernizing the Global Approximation Method
of Posterior Probability Density
for Oscillator Synchronization Purposes
Alexander V. Ryabov*,
Pavel A. Fedyunin and Maxim Y. Presnyakov
Military Training and Research Center of the Air Force
«Air Force Academy named after Professor N.E. Zhukovsky and Y.A. Gagarin»
54a Starykh Bol’shevikov Str., Voronezh, 394064, Russia
Received 16.02.2016, received in revised form 09.03.2016, accepted 06.03.2016
The article introduces a modernization of the global approximation method of the posterior
probability density function in the implementation of phase-locked loop algorithms, which
improves the accuracy of clock synchronization in telecommunication networks and reduces
the time of entering into synchronism.
Keywords: clock synchronization; phase-locked loop; global approximation; filtering;
synchronism.
Citation: Ryabov A.V., Fedyunin P.A., Presnyakov M.Y. Modernizing the global approximation method of posterior
probability density for oscillator synchronization purposes, J. Sib. Fed. Univ. Eng. technol., 2016, 9(4), 462-469.
DOI: 10.17516/1999-494X-2016-9-4-462-469.
© Siberian Federal University. All rights reserved
* Corresponding author E-mail address: [email protected]
Introduction
In modern telecommunication networks the quality of data transmission largely depends on
the accuracy of clock synchronization. Furthermore, in high-speed networks the synchronization
requirements are becoming more severe, since both the synchronization time at start-up and the
recovery time after a loss of synchronism must be minimal. In addition, if the sync mode is
continuous and automated, high stability of the network element synchronism is required.
Solving oscillator clock synchronization problems in telecommunication networks is normally
based upon designing a phase-locked loop system for the oscillator frequency at the receiver site.
Moreover, a phase-locked loop implementation requires an estimate of the received signal phase [1].
Let the signal produced by a reference oscillator at the transmitter site of a telecommunication
channel be a harmonic oscillation

S(t, φ) = A·sin(ω₀t + φ(t)), (1)
where – π ≤ φ(t) ≤ π denotes a random Wiener phase, defined by the equation:
dφ(t)/dt = n_φ(t), (2)
where nφ(t) is the additive white Gaussian noise (AWGN) determined by the internal reference oscillator
noise, with zero mean and the correlation function M[nφ(t1)·nφ(t2)] = (Nφ /2)·δ(t2 – t1).
At the receiver site of the communication channel there is a mixture of desired signal and noise:
ξ(t) = S(t, φ) + n₀(t), (3)

where n₀(t) is the AWGN determined by external noise, with zero mean and the correlation
function M[n₀(t₁)·n₀(t₂)] = (N₀/2)·δ(t₂ − t₁) [1].
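The signal model (1)-(3) can be discretized for simulation. Below is a minimal sketch, not from the paper; the parameter values A, ω₀, N_φ, N₀ and the step dt are arbitrary assumptions for illustration:

```python
# Discretization of the signal model (1)-(3); all parameter values are
# illustrative assumptions, not taken from the paper.
import math
import random

A, omega0 = 1.0, 2 * math.pi * 10.0   # assumed amplitude and carrier frequency
N_phi, N0 = 0.01, 0.1                 # assumed noise spectral densities
dt, n_steps = 1e-3, 1000

random.seed(1)
phi = 0.0
observations = []
for k in range(n_steps):
    t = k * dt
    # Eq. (2): Wiener phase, Euler-Maruyama step; Var[dphi] = (N_phi/2) dt
    phi += math.sqrt(N_phi / 2 * dt) * random.gauss(0.0, 1.0)
    # Eq. (1): reference-oscillator signal
    s = A * math.sin(omega0 * t + phi)
    # Eq. (3): received mixture of the desired signal and AWGN n0(t)
    n0 = math.sqrt(N0 / (2 * dt)) * random.gauss(0.0, 1.0)
    observations.append(s + n0)

print(len(observations), round(phi, 4))
```

The sampled noise variances follow from the correlation functions: a continuous white noise of spectral density N/2 becomes a discrete Gaussian sequence of variance N/(2·dt) per sample.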
In order to create a self-oscillator phase-locked loop system at the receiver site it is necessary to
obtain an optimal filtering algorithm for the received signal phase φ(t). Let the highest posterior
probability density of the random signal phase and the minimum mean square filtering error be the
optimality criteria. Then it is necessary to solve the problem of received signal phase filtering for a
small signal-to-noise ratio.
Since Equation (3) is nonlinear in relation to the phase φ(t), it is essential to solve the task of
nonlinear Wiener phase filtering. In theory, finding the solution is based upon solving Stratonovich
stochastic differential equations (SDEs):

∂p(t, φ)/∂t = 0.25·N_φ·∂²p(t, φ)/∂φ² + [F(t, φ) − ∫_{−π}^{π} F(t, φ)·p(t, φ) dφ]·p(t, φ); (4)

F(t, φ) = −(ξ(t) − S(t, φ))²·N₀⁻¹ ≈ 2A·ξ(t)·N₀⁻¹·sin(ω₀t + φ(t)). (5)

Nevertheless, an accurate solution to the nonlinear filtering equations can be obtained only in a few
cases [1, 2]. Therefore, it is essential to focus upon developing approximate solutions of the nonlinear
filtering equation.

The aim of this paper is to compare and analyze the existing solution methods of nonlinear
Stratonovich SDEs and to suggest a modernization of the global approximation method of the posterior
probability density function, which increases the accuracy of signal phase filtering and reduces the
time of entering into synchronism.

1. Analysis of the nonlinear filtering equation solution techniques
for a random received signal phase

The most common technique is synthesizing algorithms of nonlinear filtering by approximating
the posterior probability density p(t, φ) with some function p(t, φ, α) belonging to the parameterized
class α ∈ Ψ [1].

The techniques of approximating the posterior probability density can be subdivided into two
categories: the local density approximation and the global approximation [1].
In local approximation, approximate filtering algorithms are obtained due to an accurate solution
approximation for a small set of assessment values φ̂(t) of the filtering parametric variable. This
approach allows one to obtain functional algorithms in special cases.

Normally the local density approximation is based upon replacing the probability density function
p(t, φ) by the probability density function of a normal distribution p(φ̂(t), R(t)), where −π < φ̂(t) < π,
0 < R(t) < ∞.

Let us look at some quasi-optimal algorithms based upon this approach.
1. The extended Kalman filter (EKF)

This quasi-optimal algorithm is based on reducing the initial nonlinear filtering task to a linear
one by expanding the nonlinear functions entering the observation and signal message equations in a
Taylor series in a close vicinity of the estimation value φ̂ [1]. Consequently, the algorithm of phase
filtering is as follows:

dφ̂/dt = 2A·ξ(t)·R(t)·N₀⁻¹·cos(ω₀t + φ̂(t)); (6)

dR(t)/dt = N_φ/2 − A²·R²(t)·N₀⁻¹. (7)
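Equations (6)-(7) can be integrated numerically, e.g. by the Euler method. The sketch below is illustrative only; the parameter values and initial conditions are assumptions, not taken from the paper:

```python
# Euler integration of the EKF phase tracker, Eqs. (6)-(7); parameter values
# and initial conditions are illustrative assumptions.
import math
import random

A, omega0 = 1.0, 2 * math.pi * 10.0
N_phi, N0 = 0.1, 0.1
dt, n_steps = 1e-4, 100000          # 10 s of simulated time

random.seed(2)
phi, phi_hat, R = 0.3, 0.0, 0.1     # true phase, estimate, error variance
for k in range(n_steps):
    t = k * dt
    phi += math.sqrt(N_phi / 2 * dt) * random.gauss(0.0, 1.0)          # Eq. (2)
    xi = (A * math.sin(omega0 * t + phi)
          + math.sqrt(N0 / (2 * dt)) * random.gauss(0.0, 1.0))         # Eq. (3)
    # Eq. (6): estimate update driven by the received mixture xi(t)
    phi_hat += dt * 2 * A * xi * R / N0 * math.cos(omega0 * t + phi_hat)
    # Eq. (7): deterministic Riccati-type equation for the error variance
    R += dt * (N_phi / 2 - A ** 2 * R ** 2 / N0)

err = math.atan2(math.sin(phi - phi_hat), math.cos(phi - phi_hat))     # wrapped
print(round(R, 4), round(err, 3))
```

Note that Eq. (7) is observation-independent: R(t) settles to the steady-state value √(N_φ·N₀/2)/A regardless of ξ(t), which is characteristic of the linearized (EKF) treatment.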
2. Local approximation techniques based on finding approximate solutions to Stratonovich SDEs

To obtain a solution, the logarithm of the posterior probability density p(t, φ) is expanded in a
Taylor series near the tentative estimate φ̂ corresponding to the highest posterior probability
density [1]. The equations of the phase filtering estimate and of the mean square filtering error can
be represented in the following way:

dφ̂/dt = 2A·ξ(t)·R(t)·N₀⁻¹·cos(ω₀t + φ̂(t)); (8)

dR(t)/dt = N_φ/2 − A²·R²(t)·ξ(t)·N₀⁻¹·sin(ω₀t + φ̂(t)). (9)
The main drawback of local approximation techniques for the posterior probability density is the
fact that they can only be applied when filtering errors are minimal, i.e. the SNR is high, and a good
approximation of the main filtering equation is required only in a close vicinity of the filtering
parametric variable φ̂(t). This drawback makes it impossible to apply this method of approximation
to solving many important practical problems [2].
In global approximation, approximate filtering algorithms are obtained due to approximating
an accurate solution p(t, φ) by some probability density function p(t, φ, α) chosen in accordance with
physical representations. The solution is sought within the whole range of possible values of the
filtering parametric variable φ. A certain integral criterion is required, which is especially important
for a small signal-to-noise ratio [2]. The minimum Kullback-Leibler divergence can be regarded as
such a criterion, as it ensures minimal information loss due to the posterior probability density
approximation [3]. The advantage of this criterion consists in emphasizing the tails of the
distribution and in increasing their significance. According to this criterion, for a fitting probability
density function the parametric variable α is chosen in such a way that it minimizes the integral [3]:

α(t) = min_α { −∫_{−∞}^{∞} p(t, φ)·ln[ p(t, φ, α) / p(t, φ) ] dφ }. (10)

Here is the necessary condition for the minimum Kullback-Leibler divergence [3]:

∫_{−∞}^{∞} (∂[p(t, φ, α)]/∂α)·ln p(t, φ) dφ = 0. (11)
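Criterion (10) can be illustrated numerically by fitting a normal density p(t, φ, α), α = (m, R), to a fixed "posterior" through direct minimization of the discretized Kullback-Leibler integral. The bimodal test density and all grids below are arbitrary illustrative choices, not from the paper:

```python
# Grid-search minimization of the discretized Kullback-Leibler integral (10)
# over a normal fitting family p(t, phi, alpha), alpha = (m, R).  The bimodal
# stand-in for the true posterior and the grids are illustrative assumptions.
import math

def normal_pdf(x, m, R):
    # Normal density with mean m and variance R
    return math.exp(-(x - m) ** 2 / (2 * R)) / math.sqrt(2 * math.pi * R)

def posterior(x):
    # Assumed bimodal "true" density playing the role of p(t, phi)
    return 0.5 * normal_pdf(x, -0.8, 0.05) + 0.5 * normal_pdf(x, 0.6, 0.1)

xs = [-4.0 + 8.0 * i / 400 for i in range(401)]
dx = xs[1] - xs[0]

def kl(m, R):
    # Discretized criterion (10): -Integral of p * ln[p(., alpha)/p] d(phi)
    return sum(posterior(x) * math.log(posterior(x) / normal_pdf(x, m, R)) * dx
               for x in xs)

best_kl, m_best, R_best = float("inf"), 0.0, 1.0
for i in range(41):                      # m grid: -1.0 .. 1.0, step 0.05
    for j in range(37):                  # R grid:  0.1 .. 1.0, step 0.025
        m, R = -1.0 + 0.05 * i, 0.1 + 0.025 * j
        d = kl(m, R)
        if d < best_kl:
            best_kl, m_best, R_best = d, m, R

# For the normal family, KL minimization reduces to moment matching:
mean = sum(x * posterior(x) * dx for x in xs)
var = sum((x - mean) ** 2 * posterior(x) * dx for x in xs)
print(round(m_best, 3), round(R_best, 3), round(mean, 3), round(var, 3))
```

The grid minimizer lands (up to grid resolution) on the mean and variance of the target density, illustrating that over the normal family the KL-optimal α matches the first two moments of p(t, φ).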
Then the general equation for the parametric variables of the fitting probability density function
can be represented as follows:

∂α/∂t = −M[∂² ln{p(t, φ, α)}/∂α ∂αᵀ]⁻¹·( M[ L⁺{∂ ln p(t, φ, α)/∂α} ] + M[ {∂ ln p(t, φ, α)/∂α}·F(t, φ) ] ), (12)

where M[·] is a mathematical expectation and L⁺[·] is the operator that is the backward dual of the
Fokker-Planck-Kolmogorov operator [3].
Let us look at the two known types of global approximation [2, 4].

1. The approximating probability density is sought in the class of normal probability densities
p(φ̂(t), R(t)) [2, 4]:

p(φ(t)) = (1/√(2π·R(t)))·exp{ −(φ(t) − φ̂(t))² / (2R(t)) }. (13)
In this case the random Wiener phase filtering algorithm will be the following [2]:
dφ̂/dt = 2A·ξ(t)·R(t)·N₀⁻¹·exp{−R(t)/2}·cos(ω₀t + φ̂(t)); (14)

dR(t)/dt = N_φ/2 − 2A²·R²(t)·ξ(t)·N₀⁻¹·exp{−R(t)/2}·sin(ω₀t + φ̂(t)). (15)
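Equations (14)-(15) can be integrated with the same Euler scheme as the EKF. A minimal sketch under assumed parameter values (not from the paper); the distinguishing feature is the exp{−R(t)/2} weighting, and the fact that Eq. (15), unlike the EKF variance equation (7), is itself driven by the observation ξ(t):

```python
# Euler integration of the global normal-approximation filter, Eqs. (14)-(15);
# parameters and initial conditions are illustrative assumptions.
import math
import random

A, omega0 = 1.0, 2 * math.pi * 10.0
N_phi, N0 = 0.1, 0.1
dt, n_steps = 1e-4, 100000          # 10 s of simulated time

random.seed(3)
phi, phi_hat, R = 0.3, 0.0, 0.1
for k in range(n_steps):
    t = k * dt
    phi += math.sqrt(N_phi / 2 * dt) * random.gauss(0.0, 1.0)
    xi = (A * math.sin(omega0 * t + phi)
          + math.sqrt(N0 / (2 * dt)) * random.gauss(0.0, 1.0))
    w = math.exp(-R / 2)             # the exp{-R/2} weighting of Eqs. (14)-(15)
    # Eq. (14): estimate update, de-weighted while the error variance is large
    phi_hat += dt * 2 * A * xi * R / N0 * w * math.cos(omega0 * t + phi_hat)
    # Eq. (15): observation-driven variance equation; clamped at a small
    # positive value for numerical robustness of the discrete scheme
    R += dt * (N_phi / 2
               - 2 * A ** 2 * R ** 2 * xi / N0 * w * math.sin(omega0 * t + phi_hat))
    R = max(R, 1e-4)

err = math.atan2(math.sin(phi - phi_hat), math.cos(phi - phi_hat))
print(round(R, 4), round(err, 3))
```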
2. The T-distribution approximation is [2]:

p(t, φ, α) = (1/(2π·I₀(Λ)))·exp{Λ·cos(φ − m_φ(t))}, (16)

where I₀(Λ) is the modified zero-order Bessel function of imaginary argument; α = {m_φ(t), Λ(t)}
are the distribution parameters, Λ ≥ 0, φ ∈ [m_φ − π, m_φ + π].
This probability density function approaches the normal law of distribution with period-2π
recurrence if the Λ value is high; if Λ → 0, it converges to the uniform probability density.
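These two limiting cases can be checked numerically. The sketch below (with an assumed power-series truncation for I₀ and arbitrary sample values of Λ) evaluates density (16) for small and large Λ:

```python
# Limiting behavior of the T-distribution (16); the series truncation for I0
# and the sample values of Lambda are illustrative assumptions.
import math

def bessel_i0(x, terms=60):
    # Modified zero-order Bessel function of imaginary argument (power series)
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def t_density(phi, m, lam):
    # Eq. (16): p(t, phi, alpha) = exp{Lambda cos(phi - m)} / (2 pi I0(Lambda))
    return math.exp(lam * math.cos(phi - m)) / (2 * math.pi * bessel_i0(lam))

# Lambda -> 0: the density flattens into the uniform one, 1/(2 pi)
small = [t_density(phi, 0.0, 1e-6) for phi in (-3.0, 0.0, 3.0)]

# Large Lambda: near phi = m the density approaches a normal one with
# variance R = 1/Lambda, whose peak value is sqrt(Lambda / (2 pi))
lam = 50.0
normal_peak = math.sqrt(lam / (2 * math.pi))
print([round(v, 4) for v in small],
      round(t_density(0.0, 0.0, lam), 3), round(normal_peak, 3))
```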
Then the random Wiener phase filtering algorithm may be written in the following way [2]:

dm_φ(t)/dt = 2A·ξ(t)·N₀⁻¹·Λ⁻¹(t)·cos(ω₀t + m_φ(t)); (17)

dΛ(t)/dt = −(N_φ/4)·f₁(Λ) + 2A·ξ(t)·N₀⁻¹·sin(ω₀t + m_φ(t)), (18)

where
1 Λ(t ) = ∫ (ϕ ∞− mϕ (t )) p (t2, ϕ∞, α ) dϕ 2; 2 2
(19)
∞
∞
2
∞ Λ (t ) =
−∞
(
(
)
)
(
)
1
ϕ
−
m
t
p
t
,
ϕ
,
α
d
ϕ
;
1 Λ(t ) = ∫1(ϕΛ−(1t )m
t , ϕϕ
, α ) ϕdϕ ; , ϕ) d, ϕ
(19)
(19)(19)
(t(∫t)−)(=)ϕmp∫ϕ−−((∫(∞ϕm
1 Λ−∞(t ) = ∫Λ=(ϕϕ
t ))−(tpm)()ϕt (,ptϕ)(),t ,αpϕ)(d,tαϕ
; −α1);dϕ ;
(19)(19)(19)
2
;
(20)
f1 (Λ ) = I1 (Λ ) I 0 (Λ ) 1 − I1 (Λ ) (Λ−∞(t )−I∞0 (Λ−∞)) − [I1 (Λ ) I 0 (Λ )]
−1
−1 )]2 −1 ;
2 [−I1 (Λ ) I2 (Λ
(
)
(
)
(
)
(
)
(
(
)
(
)
)
(20)
f
Λ
=
I
Λ
I
Λ
1
−
I
Λ
Λ
t
I
Λ
−
) f)I1 0(=1Λ
(ΛI)1)(=Λ1I−)1 (1IIΛ10()(of
)Λ))−(t([)ΛII10((t(ΛΛ
) I0 (Λ )] 1 ; −10 2
(20)
f (Λ ) = I1 (fΛ
Λ ) (1(0Λ
Λ−)(It11)(−
I (Λ1Λ
Λ
(20)
(20)
1)(=
I1 (Λ ) is a modified zero1 order
0 I (Λ
Bessel
) Λ(0ΛI)1((t(argument.
) I 0 (Λ )))−I)[0)I(1−Λ0(Λ[)I)1)−(ΛI[0I)1((ΛIΛ0)])(2ΛI)0](;Λ )]; ; (20)(20)
f1 (Λ
I1 (function
Λ ) I 0 (Λ ) 1ΛI−)imaginary
1
I (isΛaa)modified
is a order
modified
zero
order
Bessel
function
of imaginary
argument.
I1 (ΛIf) we
is a 1modified
Bessel
function
of
imaginary
argument.
modified
zero
order
Bessel
function
of
imaginary
argument.
order
Bessel
)theiszero
adependence
modified
zero
order
Bessel
function
of
imaginary
argument.
between
thefunction
filtering
error
variance
and
the signal-to-noise
)I(1Λis(ΛI)1a)(at1isΛmodified
I1 (ΛIlook
zerozero
order
Bessel
function
of imaginary
argument.
IfIfweat
look
at
the
dependence
between
the
filtering
error
variance
and
the
signal-to-noise
ratio for
If
we
look
at
the
dependence
between
the
filtering
error
variance
andsignal-to-noise
the signal-to-noise
If
we
look
the
dependence
between
the
filtering
error
variance
and
the
signal-to-noise
look
at above,
the
dependence
between
filtering
error
variance
the
ratio for the algorithms
mentioned
If we
at the
dependence
between
the
filtering
error
variance
the signal-to-noise
If we we
look
atlook
the
dependence
between
the the
filtering
error
variance
andand
theand
signal-to-noise
the algorithms
mentioned
above,
ratiothe
for the
algorithms
mentioned
above,
ratio for the
algorithms
mentioned
above,
for
mentioned
for algorithms
the
algorithms
(Nϕabove,
qmentioned
= 4above,
A2 above,
⋅ N0 )
(21)
ratioratio
forratio
the algorithms
mentioned
2
2
(
)
q
=
4
A
N
⋅
N
2
)
(21)(21)
q = 4 A 2 q(N
⋅
N
2
= ϕ4 A 0 (qN=ϕ 2⋅q4NA=0 4) (ANϕ (⋅NN 0ϕ⋅)N )0
we will see that for a large signal-to-noise ratio the accuracy of all the algorithms will be close to the potential accuracy. For a small signal-to-noise ratio the accuracy of filtering algorithms based upon the abovementioned approximation techniques will be considerably lower than the potential accuracy.
The most accurate filtering algorithm from the ones mentioned above can be implemented with the help of the general T-approximation (17), (18) [2]. Nevertheless, this algorithm is lengthy and hard to put into practice, whereas the quasi-optimal algorithm of local approximation is the least accurate for the small value domain of the signal-to-noise ratio [2].

With the signal-to-noise ratio q → 0, in accordance with the accepted criterion, the value of the variances under consideration approaches the limit

lim_{q→0} M[φ − φ̂(t)]² = π²/3.   (22)
When there is no desired signal, the filtering error of all the algorithms is uniformly distributed within the (−π, π) interval.

Furthermore, the abovementioned algorithms do not meet the requirement of minimal synchronization acquisition time, which is essential for developing phase-locked loop systems.

Consequently, the analysis of the algorithms mentioned above has shown that they are insufficient for solving the problem.
2. Modernizing the global approximation method of posterior probability density
In order to solve the problem of filtering a received oscillation phase with a small signal-to-noise ratio it is appropriate to use the modernized general approximation technique based upon the AWGN model.
Let us represent the filtering error variance R(t) of the true posterior probability density p(t, φ) of a random Wiener signal phase as:
1/R(t) = 1/R₁(t) + 3/π²,   (23)
where R₁(t) is the filtering error variance component of the approximating posterior probability density without considering the constant component.
Now we search for the approximating probability density in the class of normal probability distributions of type (13) with regard to Expression (23):
p(φ, m(t), G₁(t)) = √(G₁(t)/2π)·exp{−(G₁(t)/2)·(φ − m(t))²};   (24)

G₁(t) = 1/R₁(t) = G(t) − 3/π²;   (25)

where 0 ≤ G₁(t) ≤ ∞.
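As a sanity check of the reparameterization in (24), (25): for any curvature G(t) > 3/π², the density (24) with G₁ = G − 3/π² is a proper normal density with variance 1/G₁. A short numerical sketch, with G = 2.0 and m = 0.3 chosen arbitrarily for illustration:

```python
import math

# Check that density (24) with G1 = G - 3/pi^2 (relation (25)) integrates
# to 1 and has variance 1/G1. The values of G and m are illustrative.
G, m = 2.0, 0.3
G1 = G - 3.0 / math.pi ** 2

def p(phi):
    return math.sqrt(G1 / (2.0 * math.pi)) * math.exp(-0.5 * G1 * (phi - m) ** 2)

# simple Riemann-sum integration on a wide grid around the mean
n, half_width = 200_000, 40.0
h = 2.0 * half_width / n
xs = [m - half_width + i * h for i in range(n + 1)]
total = h * sum(p(x) for x in xs)
variance = h * sum((x - m) ** 2 * p(x) for x in xs)
print(total, variance, 1.0 / G1)  # total ~ 1, variance ~ 1/G1
```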
Then using Functions (24) and (25) we seek the approximate received signal phase φ(t) filtering algorithm:
dm_φ(t)/dt = 2Aξ(t)N₀⁻¹·(G(t) − 3/π²)⁻¹·cos(ω₀t + φ̂(t));   (26)

dG(t)/dt = −(N_φ/2)·(G(t) − 3/π²)² + 2Aξ(t)N₀⁻¹·sin(ω₀t + φ̂(t)).   (27)
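A minimal Euler-discretization sketch of Algorithm (26), (27), assuming the received-signal model ξ(t) = A·sin(ω₀t + φ(t)) + n(t), with φ(t) a Wiener phase of intensity N_φ and n(t) white noise of two-sided spectral density N₀/2; all numeric values (A, N₀, N_φ, ω₀, step size) are illustrative, not taken from the article:

```python
import math
import random

# Euler discretization of the modernized filtering algorithm (26), (27).
# Assumed observation model: xi(t) = A*sin(w0*t + phi(t)) + n(t), where
# phi(t) is a Wiener process of intensity N_phi and n(t) is white noise
# of two-sided spectral density N0/2. All parameter values are illustrative.
random.seed(2)
A, N0, Nphi = 1.0, 0.05, 0.1
w0 = 2.0 * math.pi * 10.0
dt, steps = 1e-4, 200_000
C = 3.0 / math.pi ** 2            # the 3/pi^2 constant from (23)

phi = 0.0                         # true phase (Wiener process)
m = 0.5                           # phase estimate m_phi(t), offset on purpose
G = C + 5.0                       # curvature G(t); keep G1 = G - C positive

for k in range(steps):
    t = k * dt
    phi += math.sqrt(Nphi * dt) * random.gauss(0.0, 1.0)
    noise = math.sqrt(N0 / (2.0 * dt)) * random.gauss(0.0, 1.0)
    xi = A * math.sin(w0 * t + phi) + noise
    G1 = G - C
    m += dt * (2.0 * A * xi / N0) / G1 * math.cos(w0 * t + m)          # (26)
    G += dt * (-0.5 * Nphi * G1 ** 2
               + (2.0 * A * xi / N0) * math.sin(w0 * t + m))           # (27)
    G = max(G, C + 1e-6)          # numerical guard: keep G1 > 0

err = (m - phi + math.pi) % (2.0 * math.pi) - math.pi
print("steady-state phase error:", err)
```

With these parameters the signal-to-noise ratio (21) is large, so the discretized loop should lock and track the Wiener phase with a small residual error.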
Figures 1 and 2 show the dependency of the R(t) = d² variance and the inverse function of the 1/d² variance of a stationary filtering error on the signal-to-noise ratio q obtained due to simulation modeling of algorithms (4), (7), (18) and (27).
In addition, the d² function corresponds to an exact solution (4), the d₁² function corresponds to the extended Kalman filter (7); the d₂² function is a T-approximation (18); the d₃² function is the modernized general approximation (27). In Fig. 1 and 2 the d² and the d₃² functions coincide.
As we can see from the analysis of the simulation results, the modernized technique of the global approximation of the posterior probability density suggested by the authors helps to obtain the random phase φ(t) filtering algorithm of the received signal that coincides with an accurate solution. Therefore, in order to solve the problem under consideration, this algorithm (26), (27) can be regarded as an optimal one.
Fig. 1. Dependency of the R(t) = d² variance of a stationary filtering error on the signal-to-noise ratio q
Fig. 2. Dependency of the inverse function of the 1/d² variance of a stationary filtering error on the signal-to-noise ratio q
Fig. 2 shows that there is a line coming out from the point 3/π² corresponding to an accurate solution. For the small value domain of the signal-to-noise ratio the dependency graphs of the inverse function of the 1/d² variance of a stationary filtering error on the signal-to-noise ratio q obtained due to algorithms (7) and (18) have nonlinear parts.

Consequently, in order to estimate the filtering algorithm optimality of a random signal phase it is worthwhile using the inverse function, but not the filtering error variance itself. Then a linear function should be the optimality criterion.

Algorithms (18) and (27) differ due to their multipliers with N_φ/2 as part of the first summand.
We can modernize the extended Kalman filter algorithm represented by Expressions (6) and (7) in the same way. After being modernized it will look as follows:
dφ̂(t)/dt = 2Aξ(t)N₀⁻¹·(G(t) − 3/π²)⁻¹·cos(ω₀t + φ̂(t));   (28)

dG(t)/dt = −(N_φ/2)·(G(t) − 3/π²)² + A²/N₀,   (29)

where G(t) = 1/R(t).
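The deterministic variance equation (29) makes the behaviour seen in Fig. 2 explicit: setting dG(t)/dt = 0 gives a stationary curvature G_ss = 3/π² + √(2A²/(N_φ·N₀)), and with q² = 4A²/(N_φ·N₀) from (21) this is the straight line G_ss = 3/π² + q/√2 starting at the point 3/π². A small sketch with illustrative parameter values:

```python
import math

# Stationary point of (29): 0 = -(N_phi/2)*(G - 3/pi^2)^2 + A^2/N0,
# hence G_ss = 3/pi^2 + sqrt(2*A^2/(N_phi*N0)) = 3/pi^2 + q/sqrt(2),
# using q^2 = 4*A^2/(N_phi*N0) from (21). Parameter values are illustrative.
A, N0, Nphi = 1.0, 0.05, 0.1
G_ss = 3.0 / math.pi ** 2 + math.sqrt(2.0 * A ** 2 / (Nphi * N0))
q = math.sqrt(4.0 * A ** 2 / (Nphi * N0))
print(G_ss, 3.0 / math.pi ** 2 + q / math.sqrt(2.0))  # the two agree
```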
Conclusions

We have suggested the way of modernizing the general approximation method of posterior probability density for a random received signal phase; due to this modernization the distribution limits and the limit of the function are taken into consideration.

The filtering algorithms for a random received signal phase that we have developed due to the obtained approximation meet the optimality criterion for both high and small signal-to-noise ratios. These algorithms make it possible to implement the phase-locked loop system in High Precision Oscillators as well as to gain the time of entering into synchronism, which is vitally important for developing high-speed telecommunication networks.
References

[1] Тихонов В.И., Харисов В.Н. Статистический анализ и синтез радиотехнических устройств и систем. М.: Радио и связь, 1991, 608 с. [Tikhonov V.I., Harisov V.N. Statistic Analysis and Synthesis of Radio Equipment and Systems. Moscow, Radio and Svyaz, 1991, 608 p. (in Russian)]

[2] Харисов В.Н., Федоров А.И. Синтез алгоритмов нелинейной фильтрации на основе интегральной аппроксимации апостериорной плотности вероятности. Научно-методические материалы по статистической радиотехнике. М.: ВВИА им. Жуковского, 1985, 161–166. [Harisov V.N., Fedorov A.I. Synthesizing Nonlinear Filtering Algorithms Based on General Approximation to Posterior Probability Density. Research and Methodology Study Materials in Statistical Radio Technology. Moscow: Zhukovsky Air Force Engineering Academy, 1985, 161–166. (in Russian)]

[3] Abramowitz M., Stegun I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Washington, D.C., 1970, 1048 p.

[4] Рубан А.И. Дисперсионные характеристики статистических моделей стохастических объектов. Журнал СФУ. Техника и технологии, 2009, 2(1), 88–99. [Ruban A.I. Dispersion characteristics of the statistical models of stochastic objects, J. Sib. Fed. Univ. Eng. technol., 2009, 2(1), 88–99 (in Russian)]