Theoretical Analysis of Accuracy of
Gaussian Belief Propagation
Yu Nishiyama and Sumio Watanabe
Tokyo Institute of Technology, Japan
Background
Belief propagation (BP):
An algorithm that computes marginal distributions efficiently.
Marginal distribution:
$$p_i(x_i) = \sum_{\boldsymbol{x} \setminus x_i} p(x_1, x_2, \dots, x_d), \qquad d \gg 1$$
Computed naively, the sum runs over all configurations of the remaining $d-1$ variables and requires a huge computational cost.
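To see the cost concretely, here is a minimal brute-force sketch (the sizes $d = 10$, $K = 2$ and the random joint table are our own example, not from the slides): one marginal of a $d$-variable, $K$-state distribution sums over $K^{d-1}$ configurations.

```python
import numpy as np

# Brute-force marginalization: for d variables with K states each,
# one marginal p_i(x_i) sums over K^(d-1) joint configurations,
# so the cost grows exponentially with d.
d, K = 10, 2                      # toy sizes: 10 binary variables
rng = np.random.default_rng(0)
p = rng.random((K,) * d)
p /= p.sum()                      # a normalized joint table p(x_1, ..., x_d)

# marginal of x_1: sum out the other d - 1 axes (2^9 = 512 terms per value)
p1 = p.sum(axis=tuple(range(1, d)))
print(p1)                         # p_1(x_1) for x_1 = 0, 1
```

BP obtains the same marginals on a tree while touching each edge only a constant number of times.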
Variety of Research Areas
ex.
(i) Probabilistic inference for AI
(ii) Error-correcting codes (LDPC codes, turbo codes): noise corrupts a transmitted codeword (000101 → 000111), and decoding corrects it back
(iii) Code-division multiple access (CDMA)
(iv) Probabilistic image processing: a degraded image is restored
Properties of BP & Loopy BP (LBP)
Tree-structured target distribution:
→ exact marginal probabilities
Loop-structured target distribution:
→ convergence? approximate marginal probabilities
Ex.
T. Heskes, "On the Uniqueness of Loopy Belief Propagation Fixed Points", Neural Computation 16(11), 2379-2414, 2004.
Y. Weiss and W. T. Freeman, "Correctness of Belief Propagation in Gaussian Graphical Models of Arbitrary Topology", Neural Computation 13(10), 2173-2200, 2001.
Purpose
We analytically clarify the accuracy of LBP
when the target distribution is a Gaussian
distribution.
What are the conditions for LBP convergence?
How close is the LBP solution to the true marginal
distributions?
In probabilistic image processing:
K. Tanaka, H. Shouno, M. Okada, "Accuracy of the Bethe approximation for hyperparameter estimation in probabilistic image processing", J. Phys. A: Math. Gen., vol. 37, no. 36, pp. 8675-8696, 2004.
Table of Contents
・BP & LBP
・Gaussian Distribution
・Main Results
(i) Single Loop
(ii) Graphs with Multi-loops
・Conclusion
Graphical Models
Target distribution:
$$p(\boldsymbol{x}) = \frac{1}{Z} \prod_{\{ij\} \in B} W_{ij}(x_i, x_j),$$
where $B$ is the set of edges.
Marginal distributions:
$$p_3(x_3) = \sum_{\boldsymbol{x} \setminus x_3} \frac{1}{Z} \prod_{\{ij\} \in B} W_{ij}(x_i, x_j), \qquad
p_{35}(x_3, x_5) = \sum_{\boldsymbol{x} \setminus \{x_3, x_5\}} \frac{1}{Z} \prod_{\{ij\} \in B} W_{ij}(x_i, x_j).$$
[Figure: example graph with nodes $x_1, \dots, x_6$ and edge functions $W_{12}, W_{23}, W_{34}, W_{35}, W_{56}$.]
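For the Gaussian case studied below, these marginals can be computed exactly by matrix inversion; a small sketch for the example graph above (the coupling value 0.2 is our own choice):

```python
import numpy as np

# Exact marginals of p(x) = N(0, S^{-1}): the marginal of x_i is the
# zero-mean Gaussian with variance (S^{-1})_{ii}, and the pair marginal
# of (x_i, x_j) is the corresponding 2x2 block of S^{-1}.
S = np.eye(6)
edges = [(0, 1), (1, 2), (2, 3), (2, 4), (4, 5)]  # W12, W23, W34, W35, W56 (0-indexed)
for i, j in edges:
    S[i, j] = S[j, i] = 0.2                       # example coupling strengths
C = np.linalg.inv(S)                              # covariance matrix
print("var(x_3) =", C[2, 2])                      # marginal p_3(x_3)
print("covariance block of (x_3, x_5):\n", C[np.ix_([2, 4], [2, 4])])  # p_35
```

The point of BP is to avoid this global inversion (and, in the discrete case, the exponential sum) by passing local messages instead.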
BP & LBP
The marginal distributions are approximated by products of messages:
$$p_i(x_i) \propto \prod_{k \in N(i)} M_{k \to i}(x_i),$$
$$p_{ij}(x_i, x_j) \propto \prod_{k \in N(i) \setminus j} M_{k \to i}(x_i)\; W_{ij}(x_i, x_j) \prod_{k \in N(j) \setminus i} M_{k \to j}(x_j),$$
where $N(i)$ denotes the set of neighbors of node $i$.
[Figure: messages from the neighbors $x_{k_1}, x_{k_2}$ flow into node $x_i$ and across the edge $W_{ij}$ to $x_j$.]
How are the messages $M_{i \to j}(x_j)$ decided?
Messages $M_{i \to j}(x_j)$ are decided by the fixed points of a message update rule:
$$M^{(t+1)}_{i \to j}(x_j) = \sum_{x_i} W_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus \{j\}} M^{(t)}_{k \to i}(x_i).$$
[Figure: a four-node example graph; each direction of every edge carries its own message, e.g. $M^{(t+1)}_{1 \to 4}$, $M^{(t)}_{4 \to 1}$, $M^{(t+1)}_{1 \to 2}$, $M^{(t)}_{2 \to 1}$, ….]
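A minimal sketch of this update on a three-node chain (a tree, so the resulting beliefs are exact; the random edge tables and all names are our own example):

```python
import numpy as np

# Sum-product updates on the chain x1 - x2 - x3.  Edge functions are
# example tables indexed as W[x_i, x_j]; on a tree one inward/outward
# sweep already yields the fixed point.
K = 3
rng = np.random.default_rng(1)
W12, W23 = rng.random((K, K)), rng.random((K, K))

M1_2 = W12.sum(axis=0)            # leaf x1 has no other incoming messages
M3_2 = W23.sum(axis=1)            # leaf x3 likewise
M2_1 = W12 @ M3_2                 # x2 forwards what arrived from x3
M2_3 = W23.T @ M1_2               # ... and what arrived from x1

p2 = M1_2 * M3_2                  # belief at x2 = product of incoming messages
p2 /= p2.sum()

joint = np.einsum('ab,bc->abc', W12, W23)          # brute-force reference
print(np.allclose(p2, joint.sum(axis=(0, 2)) / joint.sum()))                  # True
print(np.allclose(M2_1 / M2_1.sum(), joint.sum(axis=(1, 2)) / joint.sum()))  # True
```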
If the updates converge, the messages reach a fixed point:
[Figure: the same four-node graph with the converged messages $M^*_{1 \to 2}, M^*_{2 \to 1}, M^*_{2 \to 3}, M^*_{3 \to 2}, M^*_{3 \to 4}, M^*_{4 \to 3}, M^*_{2 \to 4}, M^*_{4 \to 2}, M^*_{1 \to 4}, M^*_{4 \to 1}$.]
Gaussian Distribution
Target distribution: $N(0, S)$ ($S$: inverse covariance matrix; throughout, the second argument of $N(0, \cdot)$ denotes an inverse covariance).
Messages: $M^{(t)}_{i \to j}(x_j) = N(0, \lambda^{(t)}_{i \to j})$.
Update rule: with the pairwise factorization
$$W_{ij}(x_i, x_j) = \exp\Bigl\{-\tfrac{1}{2}\bigl(\tilde{s}_{ii} x_i^2 + 2 s_{ij} x_i x_j + \tilde{s}_{jj} x_j^2\bigr)\Bigr\}, \qquad \tilde{s}_{ii} := \frac{s_{ii}}{|N(i)|},$$
the update
$$M^{(t+1)}_{i \to j}(x_j) \propto \int W_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus \{j\}} M^{(t)}_{k \to i}(x_i)\, dx_i$$
reduces to a scalar recursion for the inverse variances:
$$\lambda^{(t+1)}_{i \to j} = \tilde{s}_{jj} - \frac{s_{ij}^2}{\tilde{s}_{ii} + \sum_{k \in N(i) \setminus \{j\}} \lambda^{(t)}_{k \to i}}.$$
[Figure: node $x_i$ collects $M^{(t)}_{l \to i}$, $M^{(t)}_{m \to i}$ from its neighbors $x_l$, $x_m$ and sends $M^{(t)}_{i \to j}$ to $x_j$.]
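A minimal numerical sketch of this recursion as reconstructed above (the function name and the zero initialization are ours; convergence is not guaranteed on loopy graphs):

```python
import numpy as np

def gabp_precisions(S, iters=500):
    """Iterate lam_{i->j} <- s~_jj - s_ij^2 / (s~_ii + sum of lam_{k->i}, k in N(i), k != j),
    with s~_ii = s_ii / |N(i)|; return the belief inverse variances."""
    d = S.shape[0]
    N = [[j for j in range(d) if j != i and S[i, j] != 0] for i in range(d)]
    lam = {(i, j): 0.0 for i in range(d) for j in N[i]}
    for _ in range(iters):                     # parallel update of all messages
        lam = {(i, j): S[j, j] / len(N[j])
                       - S[i, j] ** 2 / (S[i, i] / len(N[i])
                                         + sum(lam[k, i] for k in N[i] if k != j))
               for (i, j) in lam}
    # belief at x_i: product of incoming Gaussians, so precisions add up
    return np.array([sum(lam[k, i] for k in N[i]) for i in range(d)])

# demo on a single loop (example couplings): LBP overestimates the precisions
S = np.eye(4)
for i in range(4):
    S[i, (i + 1) % 4] = S[(i + 1) % 4, i] = 0.3
print(gabp_precisions(S))                     # ~0.8 each
print(1 / np.diag(np.linalg.inv(S)))          # exact: ~0.7805 each
```

On a tree the returned precisions are exact; the single-loop discrepancy seen here is exactly what the next theorems quantify.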
Fixed-Points of Messages
Theorem 1
When a Gaussian distribution forms a single loop, the fixed points of the messages are given by
$$M^*_{i \to i+1}(x_{i+1}) = N(0, \lambda^*_{i \to i+1}), \qquad
\lambda^*_{i \to i+1} = \frac{s_{i,i+1} \Delta_{i,i+1} - s_{i+1,i+2} \Delta_{i+1,i+2} + D}{2\, \Delta_{i+1,i+1}},$$
$$M^*_{i \to i-1}(x_{i-1}) = N(0, \lambda^*_{i \to i-1}), \qquad
\lambda^*_{i \to i-1} = \frac{s_{i,i-1} \Delta_{i,i-1} - s_{i-1,i-2} \Delta_{i-1,i-2} + D}{2\, \Delta_{i-1,i-1}},$$
$$D = \sqrt{(\det S)^2 + (-1)^d\, 4\, s_{12} s_{23} \cdots s_{d1} \det S},$$
where the $\{\Delta_{i,j}\}$ are the cofactors of $S$ and node indices are taken modulo $d$.
[Figure: a single loop $x_1 - x_2 - \cdots - x_d - x_1$.]
LBP Solution
Theorem 2
The solution of LBP is given by
$$p^*_i = N(0, \lambda^*_i), \qquad p^*_{i,i+1} = N(0, S^*_{i,i+1}),$$
where
$$\lambda^*_i = \frac{\sqrt{(\det S)^2 + (-1)^d\, 4\, s_{12} s_{23} \cdots s_{d1} \det S}}{\Delta_{i,i}} = \frac{D}{\Delta_{i,i}},$$
$$S^*_{i,i+1} = \begin{pmatrix} E_{i,i+1}/\Delta_{i,i} & s_{i,i+1} \\ s_{i,i+1} & E_{i,i+1}/\Delta_{i+1,i+1} \end{pmatrix}, \qquad
E_{i,i+1} = \frac{\det S + D}{2} - s_{i,i+1} \Delta_{i,i+1}.$$
Intuitive Understanding
LBP solution:
$$\lambda^*_i = \frac{\sqrt{(\det S)^2 + (-1)^d\, 4\, s_{12} s_{23} \cdots s_{d1} \det S}}{\Delta_{i,i}}$$
True:
$$\lambda_i = \frac{\det S}{\Delta_{i,i}}$$
The two differ only through the product $s_{12} s_{23} \cdots s_{d1}$ of the couplings around the loop. If any single coupling is cut (e.g. $s_{23} = 0$), the loop opens into a tree, the correction term vanishes, and LBP reproduces the exact marginals.
[Figure: left, the loop $x_1 - x_2 - x_3 - x_4 - x_1$ with couplings $s_{12}, s_{23}, s_{34}, s_{41}$ (Loop: Loopy Belief Propagation); right, the same graph with $s_{23} = 0$, which is a tree (Tree: Belief Propagation).]
Accuracy of LBP
Theorem 3
The Kullback-Leibler (KL) distances between the true marginal densities $p_i$ and the LBP solutions $p^*_i$ are calculated as
$$\mathrm{KL}(p_i\, \|\, p^*_i) = \frac{1}{2}\left(\sqrt{1 + \mu} - 1\right) - \frac{1}{4} \log(1 + \mu), \qquad i \in \{1, \dots, d\},$$
where $\mu$ is given by
$$\mu = \frac{(-1)^d\, 4\, s_{12} s_{23} \cdots s_{d1}}{\det S}.$$
The convergence condition is $\mu > -1$, since $\lambda^*_i = (\det S / \Delta_{i,i}) \sqrt{1 + \mu}$.
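A numerical check of the single-loop formulas as reconstructed here (the loop length $d = 4$ and coupling $s = 0.3$ are our own example values):

```python
import numpy as np

d, s = 4, 0.3
S = np.eye(d)
for i in range(d):                              # couplings s_{i,i+1} around the loop
    S[i, (i + 1) % d] = S[(i + 1) % d, i] = s

detS = np.linalg.det(S)
loop = 4 * (-1) ** d * s ** d                   # (-1)^d 4 s_12 s_23 ... s_d1
mu = loop / detS
D = np.sqrt(detS ** 2 + loop * detS)
Delta11 = np.linalg.det(np.delete(np.delete(S, 0, 0), 0, 1))  # cofactor Delta_{1,1}

lam_true, lam_lbp = detS / Delta11, D / Delta11  # exact vs. Theorem 2
r = lam_lbp / lam_true                           # equals sqrt(1 + mu)
print(lam_true, lam_lbp)                         # 0.7805..., 0.8
print(0.5 * (r - 1 - np.log(r)))                 # KL(p_1 || p_1*) computed directly
print(0.5 * (np.sqrt(1 + mu) - 1) - 0.25 * np.log1p(mu))  # Theorem 3: same value
```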
Graphs with Multi-Loops
How about graphs having arbitrary structures? We clarify the LBP solution at small covariances.
We derive expansions with respect to $s$, where the inverse covariance matrix is split into its diagonal and off-diagonal parts, $S = S_{\mathrm d} + s S_{\mathrm o}$.
[Figure: an example graph on $x_1, \dots, x_6$ containing multiple loops.]
A Fixed-Point of Inverse Variances
Theorem 4
A fixed point of the inverse variances satisfies the following system of equations:
$$\lambda_i = \frac{2 s_{ii}}{\,2 - |N_i| + \sum_{j \in N_i} \sqrt{1 + \dfrac{4 s_{ji}^2}{\lambda_j \lambda_i}}\,}, \qquad i \in \{1, \dots, d\}.$$
The solution of the system is expanded as
$$\lambda_i(s) = s_{ii} - \sum_{j=1\, (\neq i)}^{d} \frac{s_{ji}^2}{s_{jj}}\, s^2 + O(s^4), \qquad i \in \{1, \dots, d\}.$$
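The system can be solved by fixed-point iteration; a small sketch checking it against the second-order expansion (the fully connected four-node example and $s = 0.1$ are our own choices):

```python
import numpy as np

def theorem4_fixed_point(S, iters=300):
    """Iterate the system of Theorem 4 (as reconstructed here):
    lam_i <- 2 s_ii / (2 - |N_i| + sum_{j in N_i} sqrt(1 + 4 s_ji^2 / (lam_j lam_i)))."""
    d = S.shape[0]
    N = [[j for j in range(d) if j != i and S[i, j] != 0] for i in range(d)]
    lam = np.diag(S).astype(float)            # initialize at lam_i = s_ii
    for _ in range(iters):
        lam = np.array([2 * S[i, i] / (2 - len(N[i])
                        + sum(np.sqrt(1 + 4 * S[j, i] ** 2 / (lam[j] * lam[i]))
                              for j in N[i]))
                        for i in range(d)])
    return lam

s = 0.1
S = np.eye(4) + s * (np.ones((4, 4)) - np.eye(4))  # S_d + s S_o, a multi-loop graph
print(theorem4_fixed_point(S))                     # ~0.9694 each
print(1 - 3 * s ** 2)                              # expansion s_ii - sum_j (s_ji^2/s_jj) s^2 = 0.97
```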
Comparison with True Inverse Variances
Expansions of the LBP solution are
$$\lambda_i(s) = s_{ii} - \sum_{j=1\, (\neq i)}^{d} \frac{s_{ji}^2}{s_{jj}}\, s^2 + O(s^4), \qquad i \in \{1, \dots, d\}.$$
True inverse variances are
$$\frac{\det S}{\Delta_{ii}} = s_{ii} - \sum_{j=1\, (\neq i)}^{d} \frac{s_{ji}^2}{s_{jj}}\, s^2 + \mu_i s^3 + O(s^4),$$
$$\mu_i = \frac{s_{ii}}{3} \Bigl[ \operatorname{tr} (S_{\mathrm d}^{-1} S_{\mathrm o})^3 - \operatorname{tr} \{ (S_{\mathrm d})_{ii}^{-1} (S_{\mathrm o})_{ii} \}^3 \Bigr], \qquad i \in \{1, \dots, d\},$$
where $(\cdot)_{ii}$ denotes the matrix with the $i$-th row and column removed. Since the fixed-point system depends on the couplings only through $s_{ji}^2$, the LBP solution is even in $s$: the two expansions agree up to second order, and LBP misses the third-order term $\mu_i s^3$, which is produced by closed walks of length three (loops) through node $i$.
Accuracy of LBP
Theorem 5
The Kullback-Leibler (KL) distances between the true marginal densities $p_i$ and the LBP solutions $p^*_i$ are expanded as
$$\mathrm{KL}(p_i\, \|\, p^*_i) = \frac{\mu_i^2}{4 s_{ii}^2}\, s^6 + O(s^7), \qquad i \in \{1, \dots, d\},$$
where the $\{\mu_i\}$ are
$$\mu_i = \frac{s_{ii}}{3} \Bigl[ \operatorname{tr} (S_{\mathrm d}^{-1} S_{\mathrm o})^3 - \operatorname{tr} \{ (S_{\mathrm d})_{ii}^{-1} (S_{\mathrm o})_{ii} \}^3 \Bigr], \qquad i \in \{1, \dots, d\}.$$
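A numerical check of the $s^6$ rate as reconstructed here, on a fully connected four-node example of our own (with $S_{\mathrm d} = I$, so $s_{ii} = 1$ and the deleted-row/column traces reduce to traces of minors of $S_{\mathrm o}$):

```python
import numpy as np

d, i = 4, 0
So = np.ones((d, d)) - np.eye(d)                 # complete graph: S = I + s * So
A = So                                           # S_d^{-1} S_o with S_d = I
Ai = np.delete(np.delete(So, i, 0), i, 1)        # row/column i removed
mu_i = (1 / 3) * (np.trace(A @ A @ A) - np.trace(Ai @ Ai @ Ai))   # = 6 here

def lbp_lam(S, iters=300):                       # same iteration as in Theorem 4
    n = S.shape[0]
    N = [[j for j in range(n) if j != k and S[k, j] != 0] for k in range(n)]
    lam = np.diag(S).astype(float)
    for _ in range(iters):
        lam = np.array([2 * S[k, k] / (2 - len(N[k])
                        + sum(np.sqrt(1 + 4 * S[j, k] ** 2 / (lam[j] * lam[k]))
                              for j in N[k])) for k in range(n)])
    return lam

for s in [0.05, 0.025]:
    S = np.eye(d) + s * So
    lam_true = 1 / np.linalg.inv(S)[i, i]        # exact inverse variance of x_1
    r = lbp_lam(S)[i] / lam_true
    print(s, 0.5 * (r - 1 - np.log(r)),          # exact KL(p_1 || p_1*)
          mu_i ** 2 / 4 * s ** 6)                # Theorem 5 leading term (ratio -> 1)
```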
Conclusion
We analytically clarified the accuracy of LBP for Gaussian target distributions.
(i) For a single loop, we identified the parameter $\mu$ that determines the accuracy of LBP and the condition under which LBP converges.
(ii) For graphs with arbitrary structures, we derived expansions of the LBP solution at small covariances and quantified its accuracy.
These fundamental results contribute to understanding the theoretical properties underlying LBP.