THE INDICATION OF THE MOMENT-INDEPENDENT MEASURE δ AND ITS
NEW CALCULATIONAL METHOD
Q. Liu, T. Homma
Nuclear Safety Research Center, Japan Atomic Energy Agency, Japan
{liu.qiao, homma.toshimitsu}@jaea.go.jp
1. Introduction
In risk assessment problems, the uncertainties of the input parameters are propagated through the model to the
output, giving rise to output uncertainty. The goal of sensitivity analysis (SA) is to quantify the
relative importance of each input parameter [1]. Iman and Hora [2] proposed that an ideal SA measure should be
easy to interpret, easy to compute, and robust. Saltelli [3] pointed out that an SA technique should be global,
quantitative and model free. Borgonovo [4] further extended Saltelli's requirements for an SA measure by adding
a fourth feature, moment independence.
In [4] Borgonovo proposed an SA indicator δi, which estimates the influence of the entire input distribution
on the entire output distribution; it does not refer to any particular moment of the output. For the reader's
convenience, the derivation of δi is briefly described here. Let fY(y) be the unconditional probability density
function (PDF) of the model output Y, and fY|Xi(y) be the conditional PDF of Y, given a value (e.g., xi(1)) of an
input parameter Xi. The shift between fY(y) and fY|Xi(y) is measured by the total area s(Xi) enclosed by
these two curves (Fig. 1). It is given by
s(X_i) = \int \left| f_Y(y) - f_{Y|X_i}(y) \right| dy        (1)

As the value of Xi varies over its distribution, the expected shift is

E_{X_i}[s(X_i)] = \int f_{X_i}(x_i)\, s(X_i)\, dx_i        (2)

The SA indicator δi is defined as

\delta_i = \frac{1}{2} E_{X_i}[s(X_i)]        (3)

δi has the property 0 ≤ δi ≤ 1 [4].

[Fig. 1: the shift of fY|Xi(y) from fY(y) is measured by the enclosed area s(Xi).]

From Fig. 1 it is clear that the essence of δi (more precisely, of s(Xi)) is to measure the difference between
two PDFs by estimating the area enclosed by them. Obviously this is neither the difference of the means (or
medians) nor the difference of the variances of the two distributions. What, then, is it? Does it relate to any
other statistic? Let us probe this issue.
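As an illustration of Eqs. (1)-(3), the following sketch (our own example, not part of the original text) estimates δ1 by brute-force numerical integration for a hypothetical toy model Y = X1 + X2 with independent standard normal inputs, for which both fY (normal with standard deviation √2) and fY|X1=x1 (normal with mean x1) are known in closed form.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = np.linspace(-12.0, 12.0, 4001)   # integration grid for y
dy = y[1] - y[0]
# Toy model Y = X1 + X2, X1, X2 ~ N(0, 1): Y ~ N(0, sqrt(2)), Y|X1=x1 ~ N(x1, 1)
f_Y = norm.pdf(y, loc=0.0, scale=np.sqrt(2.0))

def s(x1):
    """Eq. (1): area enclosed by f_Y and f_{Y|X1=x1}, via a Riemann sum."""
    f_cond = norm.pdf(y, loc=x1, scale=1.0)
    return np.sum(np.abs(f_Y - f_cond)) * dy

# Eqs. (2)-(3): average the shift over samples of X1 and halve it
x1_samples = rng.standard_normal(2000)
delta_1 = 0.5 * np.mean([s(x) for x in x1_samples])
print(delta_1)   # Monte Carlo estimate; roughly 0.30 for this model
```

The estimate lies in [0, 1] by construction, in line with the property of δi stated above.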
2. Indication of δi
Let F(y) and f(y) be the CDF (cumulative distribution function) and PDF of an output of interest Y,
respectively. It is known from statistics textbooks that

F(y) = \int_{-\infty}^{y} f(u)\, du        (4)

F(y) is non-negative and monotonically increasing (0 ≤ F(y) ≤ 1).
Let f1(y) and f2(y) be two density distributions and suppose they intersect, as shown in Fig. 2(a). The area
enclosed by the two curves is given by the integral

s = \int \left| f_2(y) - f_1(y) \right| dy = s_1 + s_2 + s_3 + s_4        (5)

where s1, s2, s3 and s4 are the enclosed areas for y ∈ (−∞, a], y ∈ (a, b], y ∈ (b, c] and y ∈ (c, +∞),
respectively.

For y ∈ (−∞, a], because f1(y) − f2(y) ≥ 0, we have

s_1 = \int_{-\infty}^{a} [f_1(y) - f_2(y)]\, dy = F_1(a) - F_2(a)        (6)

[Fig. 2(a): f1(y) and f2(y), intersecting at y = a, b and c; the enclosed areas are s1, s2, s3 and s4.]
Correspondingly, we have

s_2 = [F_1(a) - F_2(a)] + [F_2(b) - F_1(b)]        (7)

s_3 = [F_1(c) - F_2(c)] + [F_2(b) - F_1(b)]        (8)

s_4 = F_1(c) - F_2(c)        (9)

[Fig. 2(b): F1(y) and F2(y); their vertical distances at y = a, b and c are D1, D2 and D3.]

Finally, we obtain

s = 2\{[F_1(a) - F_2(a)] + [F_2(b) - F_1(b)] + [F_1(c) - F_2(c)]\}        (10)
From Fig. 2(b) we know D1 = F1(a) − F2(a), D2 = F2(b) − F1(b) and D3 = F1(c) − F2(c). Therefore, the area s
enclosed between two PDFs can be converted to distances between their CDFs: it is equal to 2(D1 + D2 + D3).
Since 0 ≤ F(y) ≤ 1, we have 0 ≤ D1 + D2 + D3 ≤ 1. This relationship holds for any kind of intersection
between two PDFs. Hence we derive the property of the area s: s ∈ [0, 2].
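This identity can be checked numerically. The sketch below (our own illustration, using two arbitrarily chosen normals N(0, 1) and N(0.5, 1.5), which intersect twice) compares the directly integrated area with twice the sum of the CDF gaps at the crossing points.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Two normal PDFs with different variances intersect twice; the enclosed
# area should equal twice the sum of the CDF gaps at the crossing points.
f1 = lambda t: norm.pdf(t, 0.0, 1.0)
f2 = lambda t: norm.pdf(t, 0.5, 1.5)

y = np.linspace(-12.0, 12.0, 200001)
area = np.sum(np.abs(f1(y) - f2(y))) * (y[1] - y[0])     # Eq. (5), directly

d = f1(y) - f2(y)
idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]     # bracket the crossings
roots = [brentq(lambda t: f1(t) - f2(t), y[i], y[i + 1]) for i in idx]

twice_dist = 2.0 * sum(abs(norm.cdf(a, 0.0, 1.0) - norm.cdf(a, 0.5, 1.5))
                       for a in roots)
print(area, twice_dist)   # the two numbers agree to integration accuracy
```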
From the above discussion, we know that the area enclosed by two PDFs can be converted to distances
between their CDFs. This reminds us of the Kolmogorov-Smirnov test, in which the difference between two CDFs is
measured by a statistic, the greatest vertical distance between them. It has been pointed out that the two-sided
Kolmogorov-Smirnov test is consistent against all types of differences (e.g., differences in means, medians or
variances) that may exist between two distributions [5]. When there is only one intersection between the two
PDFs (i.e., no intersection between their CDFs), the enclosed area s is exactly twice the greatest
vertical distance between the two distributions; in this case, the area s is equivalent to twice the statistic used
in the Kolmogorov-Smirnov test. In other cases, the enclosed area s can be regarded as a
refinement of the Kolmogorov-Smirnov statistic. This is the indication of δi.
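The single-intersection case can also be verified numerically. In the sketch below (our own illustration), two equal-variance normals N(0, 1) and N(1, 1) cross exactly once, so the enclosed area should equal twice the Kolmogorov-Smirnov distance sup|F1(y) − F2(y)|.

```python
import numpy as np
from scipy.stats import norm

# Equal-variance normals cross once, at the midpoint of the two means,
# so area = 2 * sup|F1 - F2| (the two-sample K-S statistic in the limit).
y = np.linspace(-10.0, 11.0, 200001)
dy = y[1] - y[0]
area = np.sum(np.abs(norm.pdf(y, 0, 1) - norm.pdf(y, 1, 1))) * dy
ks_dist = np.max(np.abs(norm.cdf(y, 0, 1) - norm.cdf(y, 1, 1)))
print(area, 2 * ks_dist)   # both ≈ 0.766 for means 0 and 1
```

Here sup|F1 − F2| is attained at the crossing y = 0.5 and equals 2Φ(0.5) − 1 ≈ 0.383.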
Now let us prove the property of δi. Since 0 ≤ s(Xi) ≤ 2 (Eq. (1)) holds for any
given value of Xi, we get 0 ≤ EXi[s(Xi)] ≤ 2. Because δi is equal to one half of EXi[s(Xi)], we have 0 ≤ δi ≤ 1.
Thus the property of δi is proved in a way different from that in [4].
3. A new calculational method for δi
Based on the discussion in Section 2 (refer to Fig. 2), it is known that the area enclosed by two PDFs
(assuming n points of intersection between them) is equal to twice the sum of the vertical distances
between their CDFs evaluated at the values of Y of all n intersection points. Therefore,
different from Borgonovo's approach [4], a new calculational method for δi is proposed.
Suppose that the unconditional PDF fY(y) and the conditional PDF fY|Xi=xi(1)(y) (given a random value xi(1) of
the input parameter Xi) can be derived analytically. Based on Eq. (4), we obtain their CDFs FY(y) and
FY|Xi=xi(1)(y), respectively. Since fY(y) − fY|Xi=xi(1)(y) = 0 at the points of intersection, we can solve for the
values of Y at these points; denote them a1, a2, ..., am. The area enclosed by the two PDFs is then

s(x_i^{(1)}) = 2 \left[ \left| F_Y(a_1) - F_{Y|X_i=x_i^{(1)}}(a_1) \right| + \left| F_Y(a_2) - F_{Y|X_i=x_i^{(1)}}(a_2) \right| + \cdots + \left| F_Y(a_m) - F_{Y|X_i=x_i^{(1)}}(a_m) \right| \right]        (11)
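A sketch of this procedure for a hypothetical analytic case (our own example: Y = X1 + X2 with X1, X2 ~ N(0, 1), so that fY and fY|X1=x1 are known normal densities) is given below; the PDF crossings are bracketed on a coarse grid and refined by bisection, and Eq. (12) is applied to the resulting areas.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Toy model Y = X1 + X2 with X1, X2 ~ N(0, 1), so that f_Y is N(0, sqrt(2))
# and f_{Y|X1=x1} is N(x1, 1) in closed form.
rng = np.random.default_rng(1)
grid = np.linspace(-16.0, 16.0, 4001)   # coarse grid to bracket the crossings

def s_of(x1):
    """Eq. (11): twice the sum of |F_Y - F_{Y|X1}| at the PDF crossings."""
    diff = lambda t: norm.pdf(t, 0.0, np.sqrt(2.0)) - norm.pdf(t, x1, 1.0)
    d = diff(grid)
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
    roots = [brentq(diff, grid[i], grid[i + 1]) for i in idx]
    return 2.0 * sum(abs(norm.cdf(a, 0.0, np.sqrt(2.0)) - norm.cdf(a, x1, 1.0))
                     for a in roots)

x1_samples = rng.standard_normal(500)                     # x_i^(k), k = 1..n
delta_1 = 0.5 * np.mean([s_of(x) for x in x1_samples])    # Eq. (12)
print(delta_1)   # ≈ 0.30 for this model
```

Unlike direct integration of |fY − fY|Xi|, only m CDF evaluations per sample are needed once the crossings are located.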
We then generate a second value xi(2) for Xi, derive fY|Xi=xi(2)(y) and FY|Xi=xi(2)(y), and get s(xi(2)). Repeating
the above steps over the total sample, we can finally estimate

\delta_i = \frac{1}{2} E_{X_i}[s(X_i)] \approx \frac{1}{2} \sum_{k=1}^{n} \frac{s(x_i^{(k)})}{n}        (12)

If fY(y) and fY|Xi(y) can only be obtained empirically, the Monte Carlo method is suitable for obtaining them.
First, the empirical CDF FY(y) is obtained [6]:

F_Y(y) = \sum_{k=1}^{n} \frac{T(y - y_k)}{n}        (13)

T(y - y_k) = \begin{cases} 1, & \text{if } y \ge y_k \\ 0, & \text{if } y < y_k \end{cases}        (14)

where n is the sample size and k is the sample index.
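As a minimal sketch (in Python, our own illustration), the empirical CDF of Eq. (13) can be implemented with a sorted sample and a right-sided binary search, which counts exactly the terms with T(y − yk) = 1:

```python
import numpy as np

def ecdf(sample):
    """Empirical CDF of Eq. (13): F(y) = #{y_k <= y} / n."""
    data = np.sort(np.asarray(sample, dtype=float))
    n = len(data)
    # searchsorted with side="right" counts the sample points y_k <= y
    return lambda y: np.searchsorted(data, y, side="right") / n

F = ecdf([2.0, 1.0, 3.0])
print(F(0.5), F(1.0), F(2.5), F(3.0))   # → 0.0 0.333... 0.666... 1.0
```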
According to statistics textbooks, we get the PDF

f_Y(y) = \frac{dF(y)}{dy} = \lim_{\Delta y \to 0} \frac{F(y + \Delta y) - F(y)}{\Delta y}        (15)
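A small illustration (our own, with an assumed step Δy = 0.1): applying Eq. (15) with a finite Δy to the empirical CDF amounts to a histogram density estimate, since each bin count divided by nΔy approximates [F(y + Δy) − F(y)]/Δy.

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.standard_normal(100000)

# Eq. (15) with a small but finite step: f(y) ≈ [F(y + Δy) - F(y)] / Δy,
# which for the empirical CDF is a histogram density estimate.
dy = 0.1
edges = np.arange(-4.0, 4.0 + dy, dy)
counts, _ = np.histogram(sample, bins=edges)
f_hat = counts / (len(sample) * dy)          # finite-difference PDF estimate
print(f_hat[np.searchsorted(edges, 0.0)])    # ≈ 0.4, the standard normal peak
```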
In the same way, we can get the empirical FY|Xi(y) and fY|Xi(y). Then we repeat the above procedure to
find the points of intersection of fY(y) and fY|Xi(y) and calculate s(xi(1)) (then s(xi(2)), …). Finally, we can
estimate δi.
4. Concluding remarks
In this work, the moment-independent SA measure δi has been analyzed. It is demonstrated that the area
enclosed by two PDFs is equal to twice the sum of the vertical distances between their corresponding CDFs
evaluated at the intersection points of the two PDFs, and can therefore be regarded as a refinement of the
Kolmogorov-Smirnov test statistic. Furthermore, a new calculational method for δi is proposed. Improvement of
this method is under way.
References
[1] Homma T, Saltelli A: Importance measures in global sensitivity analysis of nonlinear models. Reliab Eng
Syst Saf 52, 1-17 (1996)
[2] Iman RL, Hora SC: A robust measure of uncertainty importance for use in fault tree system analysis. Risk
Anal 10, 401-406 (1990)
[3] Saltelli A: Sensitivity analysis for importance assessment. Risk Anal 22, 579-590 (2002)
[4] Borgonovo E: A new uncertainty importance measure. Reliab Eng Syst Saf, online (2006)
[5] Conover WJ: Practical nonparametric statistics. 2nd edn, John Wiley & Sons, New York (1980)
[6] Chun MH, Han SJ, Tak NI: An uncertainty importance measure using a distance metric for the change in a
cumulative distribution function. Reliab Eng Syst Saf 70, 313-321 (2000)