
Reliability Engineering and System Safety 52 (1996) 65-75
0951-8320(95)00092-5
© 1996 Elsevier Science Limited
Printed in Northern Ireland. All rights reserved
0951-8320/96/$15.00
A Monte Carlo estimation of the marginal distributions in a problem of probabilistic dynamics
P. E. Labeau*
Service de Métrologie Nucléaire, Université Libre de Bruxelles, Avenue F.D. Roosevelt 50, 1050 Bruxelles, Belgium
*Research Assistant (National Fund for Scientific Research, Belgium).
(Received 2 May 1995; revised 3 July 1995; accepted 30 August 1995)
Modelling the effect of the dynamic behaviour of a system on its PSA study leads, in a Markovian framework, to a development at first order of the Chapman-Kolmogorov equation, whose solutions are the probability densities of the problem. Because of its size, there is no hope of solving these equations directly in realistic circumstances. We present in this paper a biased simulation giving the marginals and compare different ways of speeding up the integration of the equations of the dynamics. © 1996 Elsevier Science Limited.
1 INTRODUCTION
Recently, researchers in reliability have stressed the
need to incorporate in a usual safety study the
dynamic behaviour of a system and its influence on
the transitions likely to occur in an accidental
transient. I-4 Indeed, each reachable c o m p o n e n t state
corresponds to a given dynamics, and, in turn, the
transitions between states depend on the transition
rates which are influenced by the evolution of the
physical variables describing the system.
This feedback is quantified by the probability
densities of being in a given state at a given time, with
given values of the physical variables. Estimating
these distributions is far from being an easy
computational task, since we are faced with
high-dimensional problems in realistic circumstances.
We shall first remind readers how this dynamic
concept has been modelled in the Markovian
assumption, as well as mentioning some previous
attempts to deal with it. We shall then present a
general way of simulating the behaviour of the system.
We will then show how to bias this algorithm and apply it to a benchmark. Different techniques for accelerating the integration of the equations of the dynamics will be given and compared on the same benchmark. A more complicated application on a
nuclear reactor transient will then be solved. Finally,
we will give some concluding remarks.
2 PROBABILISTIC DYNAMICS

Let us consider how a system starting from state i
behaves. The physical variables will evolve deterministically, according to the dynamics in state i
$$\dot{\vec{x}} = \vec{f}_i(\vec{x}) \qquad (1)$$

from time t = 0 to t = t_1, where a stochastic transition
occurs. The system then changes its state to j, in which
a similar deterministic evolution will take place until
the next transition.
To quantify this deterministic-stochastic process, we introduce π(i, x̄, t), the probability density that the system is in state i at time t with physical variables x̄, for given initial conditions. It has been shown5 that π(i, x̄, t) obeys a development at first order of the Chapman-Kolmogorov equation, if we assume a Markovian framework:

$$\frac{\partial \pi(i,\vec{x},t)}{\partial t} + \mathrm{div}_{\vec{x}}\!\left(\vec{f}_i(\vec{x})\,\pi(i,\vec{x},t)\right) + \lambda_i(\vec{x})\,\pi(i,\vec{x},t) = \sum_{j\neq i} p(j\to i\,|\,\vec{x})\,\pi(j,\vec{x},t) \qquad (2)$$

where λ_i(x̄) and p(j→i|x̄) are the rate of transition out of state i and the transition rate from state j to state i, respectively.
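Equation (2) is what the trajectory-based simulation of the following sections exploits: along the deterministic flow of state i it reduces to an ordinary differential equation. This characteristics form is a standard consequence of eq. (2), written here only as a reading aid and not taken from the original text:

$$\frac{d}{dt}\,\pi\bigl(i,\vec{x}(t),t\bigr) = -\Bigl[\mathrm{div}_{\vec{x}}\vec{f}_i(\vec{x}(t)) + \lambda_i(\vec{x}(t))\Bigr]\,\pi\bigl(i,\vec{x}(t),t\bigr) + \sum_{j\neq i} p(j\to i\,|\,\vec{x}(t))\,\pi\bigl(j,\vec{x}(t),t\bigr), \qquad \dot{\vec{x}}(t) = \vec{f}_i(\vec{x}(t)).$$

The loss term λ_i π is exactly what the exponential sampling of transition times reproduces in a Monte Carlo game.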
To solve system (2), the usual numerical schemes fail to work for realistic problems, since a great number of states have to be considered and enough variables have to be taken in order to model the
system as neatly as possible. Therefore, alternative routes have been tested: simulation of functionals of the distributions by a Monte Carlo algorithm,6,7 synthesis of the distributions from their moments,8,9 ... We will now describe an algorithm to simulate the marginal distributions of such a problem.
3 ANALOGUE SIMULATION OF THE MARGINALS
Looking back to the description of the behaviour of the system given in Section 2, one can easily find how to simulate it. Since its evolution is made of a succession of deterministic walks in one state and of stochastic transitions, a history consists, after having sampled the initial state, of repeating the two following steps until the mission time is reached:

1. sample the time of the next transition from an exponential c.d.f. and integrate the equations of the dynamics up to this instant;
2. sample the new state from the probabilities p(i→j|x̄)/λ_i(x̄) that the next state will be j if the system leaves state i in x̄.
To obtain the marginal distributions, one simply has to divide the range of each variable into n_l intervals, and to associate a counter c(l, m) to the mth interval of the lth variable, for each state and each time t_k one is interested in. When the system reaches one t_k in state i at point x̄, the corresponding counters are increased by one. The distributions are then obtained by averaging the results over all the histories.
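A minimal sketch of this analogue game, assuming constant transition rates, a placeholder RK4 integrator and a dictionary of counters; none of the names below come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x); k2 = f(x + 0.5*h*k1); k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
    return x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def analogue_history(x0, i0, rates, dyn, t_mission, t_scores, n_bins, ranges, counters, h=1e-2):
    """Play one analogue history and increment the marginal counters.

    rates[i][j] : constant transition rate i -> j (placeholder model),
    dyn[i]      : right-hand side f_i(x) of the dynamics in state i,
    t_scores    : times t_k at which the marginals are recorded,
    counters    : dict {(state, t_k, variable l, bin m): count}.
    """
    t, x, i = 0.0, np.array(x0, float), i0
    score_idx = 0
    while t < t_mission:
        lam_i = sum(rates[i].values())
        # step 1: sample the next transition time from the exponential c.d.f.
        dt = rng.exponential(1.0/lam_i) if lam_i > 0 else t_mission - t
        t_next = min(t + dt, t_mission)
        # integrate the dynamics up to the transition, scoring at the times t_k
        while t < t_next:
            h_eff = min(h, t_next - t)
            x = rk4_step(dyn[i], x, h_eff)
            t += h_eff
            while score_idx < len(t_scores) and t >= t_scores[score_idx]:
                for l, (a, b) in enumerate(ranges):
                    m = min(max(int((x[l] - a)/(b - a)*n_bins), 0), n_bins - 1)
                    key = (i, t_scores[score_idx], l, m)
                    counters[key] = counters.get(key, 0) + 1
                score_idx += 1
        if t >= t_mission:
            break
        # step 2: sample the new state with probabilities p(i->j)/lambda_i
        targets, probs = zip(*[(j, r/lam_i) for j, r in rates[i].items()])
        i = rng.choice(targets, p=probs)
```

Dividing the accumulated counts by the number of histories and by the bin width then yields the marginal densities.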
One can notice that the total distributions could be obtained in the same way, but for a high number of variables the amount of counters would quickly become too large. However, the simulation of bimarginal distributions does not lead to this problem and gives interesting correlations between important variables.

While much information can be obtained in one simulation, this algorithm presents an important weakness: some states have a very low probability, and the tails of the distributions in these states correspond to very rare events. These are unlikely to occur in our analogue game, unless the number of histories is considerably increased, leading to prohibitively large computational times. We must thus look for a way of biasing the simulation.
4 IMPROVEMENT OF THE SIMULATION BY USING BIASING TECHNIQUES

If one is interested in the simulation of very rare events, sampling from the probabilistic laws of the problem leads to many useless histories, and therefore to an important waste of computation time. If the probabilistic laws can be modified in order to increase the occurrence rate of the interesting events, the statistical accuracy of the results will be considerably improved for a given number of histories. In order to ensure the unbiasedness of the estimation, correcting factors, called statistical weights and defined as the ratio of the analogue probability to the modified one, have to be used: contributions to the score have to be multiplied by the current weight of the history.11,12
Let us go back to the simulation of the marginals. It is well known that the more precise the purpose of the Monte Carlo estimation is, the more effective the calculation is. We restrict ourselves to the computation of all the marginals in one state i* at one time T. Therefore, the biased scheme must aim at allowing only transitions to states from which state i* can be reached in a finite number of steps, and at forcing the system to reach state i* before T. In the following, we develop this general idea, assuming the system is non-repairable. For the sake of simplicity, the transition rates are taken as constant, even though the same treatment is still applicable in the general case.
From the state graph, we can find from which states we can start a history that could reach state i*. Let I be the set of these states, and w_0 = Σ_{i∈I} π(i, 0), where π(i, 0) is the initial probability of being in state i. If we sample the initial state from set I without modifying the initial distribution, we introduce a weight w_0.

Again from the state graph, we can determine for each state i of I which of its first successors are part of a path leading to i*. Let succ(i) be the set of these states. When a transition out of state i occurs, we force the new state to be part of succ(i). If we do not change the transition rates and if λ_i* = Σ_{j∈succ(i)} p(i→j), another weight w_s(i) = λ_i*/λ_i has to be taken into account. This step is repeated for all transitions until state i* is reached.
Let us illustrate this scheme on the simple state graph given in Fig. 1. If we are interested in the distributions in state 4, it is useless to start histories from states 3 and 5. Therefore I = {1, 2, 4}. It is also easily seen that succ(1) = {2, 4} and succ(2) = {4}.
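These sets can be obtained automatically from the state graph by a backward reachability sweep. The sketch below is illustrative; the graph it uses is only assumed to have a shape that reproduces the sets quoted above, since Fig. 1 itself is not reproduced here:

```python
def reachable_to(target, graph):
    """States from which `target` can be reached in a directed graph
    given as {state: set of successor states}."""
    can_reach = {target}
    changed = True
    while changed:
        changed = False
        for s, succs in graph.items():
            if s not in can_reach and succs & can_reach:
                can_reach.add(s)
                changed = True
    return can_reach

# assumed shape of the state graph of Fig. 1 (illustrative only)
graph = {1: {2, 3, 4}, 2: {4, 5}, 3: set(), 4: set(), 5: set()}
target = 4
I = reachable_to(target, graph)                      # -> {1, 2, 4}
succ = {i: graph[i] & I for i in I if i != target}   # -> {1: {2, 4}, 2: {4}}
```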
Fig. 1. [State graph used in the illustrative example.]
It has to be noticed that, even with this selection of the useful paths, the drawback of the analogue simulation can still be encountered. Indeed, the probabilities to sample these paths can have very different orders of magnitude. In this circumstance, some parts of the distributions could not be accurate; biasing the transition rates to the different authorized states, for a given transition rate out of the current state, would solve this problem, a second weight having to be introduced.

Up to now, we have selected the histories leading to state i*. But we still have to reach it before time T. To do so, we force all the transitions to happen before T. Let j(k) be the kth state of a path leading to i* and t_{j(k)} the time spent in it. In the Markovian hypothesis, the analogue c.d.f. of the time of the transition out of state j(k) is exponential: F_k(t_{j(k)}) = 1 − exp(−λ_{j(k)} t_{j(k)}). If t = Σ_{l=1}^{k−1} t_{j(l)} is the time elapsed from the beginning of the history to the transition to state j(k), then the c.d.f. forcing the next transition to take place before T is

$$\tilde{F}_k(t_{j(k)}) = \begin{cases} \dfrac{1 - e^{-\lambda_{j(k)} t_{j(k)}}}{1 - e^{-\lambda_{j(k)}(T-t)}} & \text{if } t_{j(k)} < T - t \\[6pt] 1 & \text{otherwise.} \end{cases} \qquad (3)$$

The weight associated to this bias is w_f(j(k)) = 1 − exp(−λ_{j(k)}(T − t)).

So state i* is reached before T. But the system must not leave it before T. We thus want the next transition to occur after a time t + t_{i*} > T, i.e., t_{i*} corresponds to the conditional c.d.f.

$$\tilde{F}_{i^*}(t_{i^*}) = \begin{cases} e^{\lambda_{i^*}(T-t)}\left(e^{-\lambda_{i^*}(T-t)} - e^{-\lambda_{i^*} t_{i^*}}\right) & \text{if } t_{i^*} > T - t \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (4)$$

and to the new weight w* = exp(−λ_{i*}(T − t)). The total weight of a history reaching i* in K transitions is

$$W = w_0\, w^* \prod_{k=1}^{K} w_s(j(k))\, w_f(j(k)). \qquad (5)$$

The evolution of the physical variables is calculated throughout the history, and we only have to determine at its end which counters c(l, m) have to be increased by W. If n_h histories are played, and if W_h is the final weight of the hth history, then the mean weight

$$\langle W \rangle = \frac{1}{n_h} \sum_{h=1}^{n_h} W_h$$

is an unbiased estimate of π(i*, T), the probability of being in state i* at time T.

Let [a_l, b_l] be the range of the lth variable and x_{l,m} = a_l + (m − 1/2)(b_l − a_l)/n_l, m = 1...n_l, the centres of the n_l intervals to which are associated the counters c(l, m). These values are proportional to π(i*, x_{l,m}, T), the value in x_{l,m} of the lth marginal distribution in state i* at time T. Since ∫_{a_l}^{b_l} π(i*, x_l, T) dx_l = π(i*, T), we can deduce

$$\pi(i^*, x_{l,m}, T) \simeq \frac{n_l}{b_l - a_l}\,\frac{c(l, m)}{n_h} \qquad (6)$$

and we have thus achieved our goal.

To build this algorithm, we made the assumption that the system was not repairable. As a consequence, we supposed that when state i* was reached, the system had to evolve in it until time T to contribute to the score. It may not be true in the repairable case, when a state can be reached after an infinite number of transitions and after having already been in this state previously. However, a score due to many transitions will be very small, and could even become smaller than the statistical error on the estimation. Therefore, we can adopt the following rule of thumb. Let K_max be the maximal number of transitions considered when repair is active. If state i* is reached in less than K_max steps, then the probability of staying in i* till time T is calculated and a random number decides whether or not the system will keep evolving in this state until time T. If K_max transitions have already taken place, the history is brought to state i* in as few transitions as possible.

5 APPLICATION

Consider a two-dimensional three-state benchmark, whose dynamics are

$$\frac{dx}{dt} = a_i x, \qquad \frac{dy}{dt} = b_i x + c_i y, \qquad x(0) = x_0,\; y(0) = y_0, \qquad i = 1...3. \qquad (7)$$

Its state graph is given in Fig. 2. The distributions in state 3 are given in the Appendix.

Fig. 2. [State graph of the three-state benchmark.]

With the data of Table 1, we obtain

$$\pi(3,x,t) = \begin{cases} \frac{3}{2}\left(x^{-5/3} e^{t/2} - x^{-11/5} e^{9t/10}\right) & e^{3t/4} \le x \le e^{t} \\[4pt] \frac{3}{2}\left(x^{-5/3} e^{t/2} - x^{-7/5} e^{t/10}\right) & e^{t} \le x \le e^{3t/2} \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (8)$$
and

$$\pi(3,y,t) = \begin{cases} 3\left(y^{-3/2} e^{t/4} - y^{-8/5} e^{3t/10}\right) & e^{t/2} \le y \le e^{t} \\[4pt] 3\left(y^{-3/2} e^{t/4} - y^{-7/5} e^{t/10}\right) & e^{t} \le y \le e^{3t/2} \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (9)$$

if π(i, 0) = δ_{i1}. These distributions are normalized to π(3, t), the probability of being in state 3 at time t.

Table 1
a_1 = 1.5     b_1 = 1.0     c_1 = 0.5     λ_1 = 0.5
a_2 = 1.0     b_2 = 0.75    c_2 = 0.25    λ_2 = 0.3
a_3 = 0.75    b_3 = 0.0     c_3 = 0.5     x_0 = y_0 = 1.0
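To make the biased scheme of Section 4 concrete, here is a compact sketch of one biased history for this benchmark, assuming the single path 1 → 2 → 3 and an absorbing state 3 (consistent with Table 1, which gives no λ_3); the inverse-transform sampling of the truncated exponential implements eq. (3), and the helper names are illustrative, not code from the paper:

```python
import math, random

def forced_time(lam, horizon, u):
    """Inverse of the biased c.d.f. of eq. (3): a transition time < horizon."""
    return -math.log(1.0 - u*(1.0 - math.exp(-lam*horizon))) / lam

def biased_history(T, lam=(0.5, 0.3), a=(1.5, 1.0, 0.75), x0=1.0):
    """One history forced through 1 -> 2 -> 3 before time T.
    Returns (x at time T in state 3, statistical weight W)."""
    w = 1.0                      # w0 = 1: the system starts in state 1, which is in I
    t, x = 0.0, x0
    for k in (0, 1):             # forced transitions out of states 1 and 2
        dt = forced_time(lam[k], T - t, random.random())
        x *= math.exp(a[k]*dt)   # dx/dt = a_i x integrates exactly here
        w *= 1.0 - math.exp(-lam[k]*(T - t))   # weight w_f of eq. (3)
        t += dt
    # state 3 is taken as absorbing here, so the weight w* of eq. (4) is 1;
    # the state-selection weights w_s are also 1 for a single assumed path
    x *= math.exp(a[2]*(T - t))
    return x, w

random.seed(1)
T, n_hist, n_bins = 2.0, 50_000, 40
lo, hi = math.exp(0.75*T), math.exp(1.5*T)
hist = [0.0]*n_bins
for _ in range(n_hist):
    x, w = biased_history(T)
    m = min(int((x - lo)/(hi - lo)*n_bins), n_bins - 1)
    hist[m] += w
# density estimate of eq. (6): pi(3, x_m, T) ~ n_bins/(hi - lo) * hist[m]/n_hist
```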
Figures 3-6 compare the theoretical and simulated distributions, for T = 0.1 and T = 2. 5 × 10⁴ histories have been played, and the ranges of both variables were divided into 40 intervals. For a given number of histories, there is a balance to find between the number of intervals and the statistical accuracy of the results, but this choice has no influence on the computation time.

In order to estimate the quality of the biased game, an analogue simulation was performed for T = 0.1. The computation time needed for this case was much smaller, since a history is finished without any score when a transition time greater than T is sampled. The results are completely different from the theoretical solutions, since very few histories are useful. A second analogue simulation, with 1.5 × 10⁶ histories, has been performed, leading to a computation time similar to that of the biased method with 5 × 10⁴ histories. Once again, the distributions obtained in this application are much less accurate. Figures 7 and 8 compare the results of these different tests, while Figs 9 and 10 give the standard deviations of the estimates. The superiority of the non-analogue game is obvious.
Fig. 4. Marginal distribution of the y variable in state 3 at t = 0.1: integration with RK4.

6 ACCELERATION OF THE INTEGRATION

For each history, the evolution of the physical variables has to be computed. Obviously, any effort to speed up the integration of the equations of the dynamics will positively affect the computation time.13 We have tested different ways of achieving this purpose and report the results in the following.
The first idea is very simple. Since the range of each variable is divided into intervals, the accuracy on the solution of the equations of the dynamics need not be very high. So one can think of replacing the usual RK4 scheme7 by an RK2 scheme which, obtaining the solution in two stages rather than four, will be twice as fast. Notice that, even for the estimation of other reliability characteristics, such as mean time to failure, generalized unreliability, etc., there is no need for a very precise integration scheme, and using a method of order 2 in place of a method of order 4 is not a problem.
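The two schemes differ only in the number of stage evaluations per step. The sketch below uses the midpoint rule as the second-order variant; this is an assumption, since the text does not specify which RK2 formula was used:

```python
def rk4_step(f, x, t, h):
    # four stage evaluations per step, local error O(h^5)
    k1 = f(t, x)
    k2 = f(t + 0.5*h, x + 0.5*h*k1)
    k3 = f(t + 0.5*h, x + 0.5*h*k2)
    k4 = f(t + h, x + h*k3)
    return x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def rk2_step(f, x, t, h):
    # midpoint rule: two stage evaluations per step, local error O(h^3),
    # roughly half the cost of RK4 for the same step size
    k1 = f(t, x)
    k2 = f(t + 0.5*h, x + 0.5*h*k1)
    return x + h*k2
```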
The second trick is based on the following observation. Consider a batch of m histories starting from the same state, with the same initial values of the variables. Let t_1 < t_2 < ... < t_m be the first transition times of each of these runs. This means that the integration of the equations giving the variables between 0 and t_1 will be performed m times, the one between t_1 and t_2 (m − 1) times, and so on. This repetition of calculations results in a possibly important waste of computation time. This weakness can be overcome by performing a preliminary stage
Fig. 3. Marginal distribution of the x variable in state 3 at t = 0.1: integration with RK4.

Fig. 5. Marginal distribution of the x variable in state 3 at t = 2: integration with RK4.

Fig. 6. Marginal distribution of the y variable in state 3 at t = 2: integration with RK4.
before the simulation: the evolution of the variables in all states of I whose initial probabilities are non-zero is calculated throughout the mission time, and the results are memorized at some predefined times. Then, when the first transition time τ is sampled in a new history, one can easily find in which time interval it lies, and only compute the values of the variables from the beginning of this interval to τ, avoiding useless calculations. To perform this memorization, we assume that the initial values of the variables are perfectly known. Does this correspond to a realistic circumstance? An important application of probabilistic dynamics is to quantify the behaviour of a system when an 'accident' at t = 0 induces a sequence of possibly dangerous events. If the regulation of the system is assumed reliable before the accident, the dispersion about the initial values of the variables can be neglected. Memorizing the evolution of the variables in each interesting state is therefore practically useful.
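A minimal sketch of this memorization stage, under the stated assumption that the initial values are perfectly known; the data structures and the example dynamics are illustrative, not taken from the paper:

```python
import numpy as np

def memorize(dyn, x0, t_mission, nbt, step, one_step):
    """Tabulate the deterministic evolution of the variables in one initial
    state at nbt predefined times (the preliminary stage described above)."""
    times = np.linspace(0.0, t_mission, nbt)
    x, t, table = np.array(x0, float), 0.0, [np.array(x0, float)]
    for t_target in times[1:]:
        while t < t_target - 1e-12:
            h = min(step, t_target - t)
            x = one_step(dyn, x, h)          # any RK step, e.g. the RK2 above
            t += h
        table.append(x.copy())
    return times, np.vstack(table)

def resume_point(times, table, tau):
    """Stored time/state just below the first transition time tau: only the
    piece [t_k, tau] remains to be integrated for this history."""
    k = int(np.searchsorted(times, tau, side='right')) - 1
    return times[k], table[k].copy()

# usage sketch with assumed one-variable dynamics dx/dt = 1.5 x
rk2 = lambda f, x, h: x + h*f(x + 0.5*h*f(x))
times, table = memorize(lambda x: 1.5*x, [1.0], 2.0, nbt=50, step=1e-2, one_step=rk2)
t_mem, x_mem = resume_point(times, table, tau=0.73)
```

Each history then integrates only from the stored point just below its first transition time, instead of from t = 0.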
The third way of speeding up the integration is to use an adaptable time step. Although elaborate time-step policies can be used,14 we have a simple heuristic one: every n_t time steps, ε, the relative variation of x̄(t), is calculated. If ε < ε_min, the time step is doubled, while it is divided by 2 if ε > ε_max.

Fig. 8. Comparison of the results obtained by an analogue and a biased simulation: marginal distribution of the y variable.
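The heuristic fits in a few lines; ε is taken here as the relative change of the state vector over the last n_t steps, which is one possible reading of the rule, and the default thresholds are the values quoted with Table 3:

```python
import numpy as np

def adapt_step(h, x_prev, x_now, h_min, h_max, eps_min=0.0025, eps_max=0.025):
    """Every n_t steps: double h if the relative variation eps is below eps_min,
    halve it if eps exceeds eps_max, and keep h inside [h_min, h_max]."""
    eps = np.linalg.norm(x_now - x_prev) / max(np.linalg.norm(x_prev), 1e-300)
    if eps < eps_min:
        h *= 2.0
    elif eps > eps_max:
        h *= 0.5
    return min(max(h, h_min), h_max)
```

Setting h_min = h⁰/4 and h_max = 4h⁰ reproduces the constraint used for Table 3.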
These methods have been applied to the benchmark presented in the previous section, on an IBM RISC 6000 workstation. Table 2 gives the different computation times, using the RK4 and the RK2 methods, with or without memorization. Since there are no observable differences in the results obtained for the distributions, the reduction of the computation times is a direct benefit. If the RK2 scheme logically results in a calculation that is twice as fast, one can see the obvious advantage of memorizing the dynamic evolution of the variables. Of course, even if this method looks quite attractive for a system with a reduced number of transitions, the gain could become much smaller for larger systems. Anyway, we believe it is still worth using in such circumstances. If there are many possible initial states, one can limit the application of the memorization to those states whose initial probability is larger than a predefined threshold.
Fig. 7. Comparison of the results obtained by an analogue and a biased simulation: marginal distribution of the x variable.

Fig. 9. Comparison of the standard deviation obtained by an analogue and a biased simulation: x variable.

Fig. 10. Comparison of the standard deviation obtained by an analogue and a biased simulation: y variable.

Table 2. Computation times with a constant time step (nbt = number of memorization times; 5 × 10⁴ histories)
T     Integration scheme    No memorization    nbt = 10    nbt = 20    nbt = 50
0.1   RK4                   44"3               25"4        24"6        24"1
0.1   RK2                   21"2               12"8        12"4        12"1
2.0   RK4                   14'01"5            8'55"5      8'32"3      8'19"4
2.0   RK2                   6'31"0             4'11"0      3'58"5      3'52"8

Table 3. Computation times with an adaptable time step (ε_min = 0.25%; ε_max = 2.5%; h⁰ = 10⁻³ s; 5 × 10⁴ histories)
T     Integration scheme    No memorization    Memorization (nbt = 50)
0.1   RK4                   13"3               7"9
0.1   RK2                   7"3                4"8
2.0   RK4                   3'44"4             2'12"5
2.0   RK2                   1'49"5             1'07"4

Table 4. Influence of ε_min and ε_max (h⁰ = 10⁻³ s; T = 2.0; nbt = 50; 5 × 10⁴ histories)
ε_min (%)    ε_max (%)    Computation time
0.25         2.5          1'07"4
0.25         1.0          1'06"5
0.1          1.0          2'10"3
0.2          1.0          1'06"7

Then, the same problem has been run with a time step h constrained to evolve in the interval [h⁰/4, 4h⁰], where h⁰ is the initial value of h. Table 3 contains the
computation times for some of the cases mentioned in Table 2. Comparing the initial method (RK4, no memorization, constant time step) to the last one (RK2, memorization, adaptable time step), it can be seen that one order of magnitude has been gained in the computation time.

Finally, we have checked how the choice of parameters ε_min and ε_max could affect the results. Indeed, a too fast integration would lead to imprecise results, while a too small time step would needlessly lengthen the computation time. Several attempts have been made with different values of ε_min and ε_max; Table 4 gathers the corresponding results.

It has to be noticed that these acceleration techniques do not influence the variance, but only the time per history. Therefore, the efficiency of the simulation, defined as usual by (σ²T)⁻¹, where σ² is the variance of the estimate and T the computation time per history, is increased in the same proportion as the integration is speeded up.
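Written out, the figure of merit is simply the reciprocal of the variance-time product, so it scales linearly with any speed-up of the integration; the helper below is only an illustration:

```python
def efficiency(variance, time_per_history):
    """Usual Monte Carlo figure of merit (sigma^2 * T)^-1."""
    return 1.0 / (variance * time_per_history)

# halving the time per history at equal variance roughly doubles the efficiency
e_fast = efficiency(1e-4, 0.5)
e_slow = efficiency(1e-4, 1.0)
print(e_fast / e_slow)   # ~ 2
```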
7 A SIMPLIFIED MODEL OF A FAST REACTOR TRANSIENT INDUCED BY A RAMP

In this section, we apply our scheme to model a simplified and slightly modified version of the problem of a transient in the fast reactor Europa.7,15 We study the effects of a reactivity slope on the reactor. Five physical variables describe the dynamics: the power P(t), the sodium temperature T(t), the fuel temperature T_c(t), the flow of coolant G(t) and the angular speed of the primary pump ω(t). A supplementary variable y(t) is used to model the progressive introduction of antireactivity due to the scram, which is triggered by the crossing of either a power threshold or a coolant temperature threshold. Three failures are considered: the pump motor stops, the coolant temperature sensor fails, the power sensor fails. The system has thus 8 different states, denoted (i, j, k): i = 1 if the pump motor works, and 0 otherwise; similarly, j and k give the state of the coolant temperature sensor and the power sensor, respectively. The state graph is given in Fig. 11.

We use a point kinetics model without delayed neutrons for the evolution of the power:

$$\frac{dP(t)}{dt} = \frac{\rho(t)}{\Lambda}\,P(t). \qquad (10)$$

Three phenomena are opposed to the increase of reactivity: the control rods when safety thresholds are reached, the negative temperature coefficient of the moderator and the Doppler effect. Therefore, the reactivity has the following form:

$$\rho(t) = \alpha t - A_{ar}\,y(t) - \alpha_S\,(T(t) - T(0)) - \alpha_D\,(T_c(t) - T_c(0)).$$

The meaning of the different parameters is given in Table 5.
From the energy conservation law, we obtain an evolution equation for the sodium temperature:

$$\frac{d(T(t) - T_a)}{dt} = \left\{ \frac{P(t)}{2A\,G(t)\,\tau\,C_R} - \left[\frac{1}{G(t)}\frac{dG(t)}{dt} + \frac{1}{\tau}\right](T(t) - T_a) \right\} H(T_{eb} - T(t)) \qquad (11)$$

where A is the section of the channel, T_a the coolant reactor inlet temperature, T_eb the sodium boiling point, C_R and C_F the specific heats of the coolant and the fuel, respectively, and τ a time constant related to C_F.

Fig. 11. [State graph of the reactor model. State (i, j, k): i = state of the pump, j = state of the coolant temperature sensor, k = state of the power sensor; 1 = working.]
Table 5
T_a         coolant reactor inlet temperature        653 (K)
T_eb        sodium boiling temperature               1156 (K)
C_R         sodium specific heat                     1629 − 0.8329 T (J/kg K)
T(0)        initial sodium temperature               873 (K)
T_max       temperature threshold                    923 (K)
α_S         sodium temperature coefficient           2 × 10^(?) (K⁻¹)
C_F         fuel specific heat                       (12.54 + 0.017 T_c − 0.117 × 10⁻⁴ T_c² + 0.307 × 10⁻⁸ T_c³) × 15.4894 (J/kg K)
R           thermal resistance by length unit        0.62 × 10^(?) (K/W)
τ           time constant of the fuel heating        0.9031 × 10^(?) (s)
T_c(0)      initial fuel temperature                 1893 (K)
α_D         Doppler coefficient                      1.3 × 10⁻⁵ (K⁻¹)
P(0)        initial power                            2000 (MW)
P_max       power threshold                          2100 (MW)
Λ           neutron production lifetime              3.98 × 10^(?) (s)
A_ar        antireactivity of the scram              0.09
α           reactivity slope                         2 × 10⁻² (s⁻¹)
t*          time constant of the scram               2 (s)
G(0)        initial flow                             4000 (kg/m² s)
A           section of the channel                   1.26 (m²)
t_0         first time constant of the pump          201.94 (s⁻¹)
t_1         second time constant of the pump         201.99 (s⁻¹)
m_s g C_2 / L                                        5.56 × 10^(?) (kg/m²)
m_s         sodium specific mass
L           length of the circuit
g           acceleration of gravity
ω_0         initial angular speed                    6000 (s⁻¹)
K           friction constant                        10 (kg m² s⁻¹)
C_M         pump torque                              60000 (N m)
I           moment of inertia                        10 (kg m²)
λ_pump      failure rate of the pump                 10^(−?) (s⁻¹)
λ_T         failure rate of the temperature sensor   4 × 10^(−?) (s⁻¹)
λ_P         failure rate of the power sensor         4 × 10⁻⁶ (s⁻¹)
The evolution law for the fuel temperature is obtained by averaging the heat transfer equation over the fuel section, which leads to:

$$\frac{dT_c(t)}{dt} = \frac{R\,P(t)}{\tau} - \frac{T_c(t) - T_a}{\tau} \qquad (12)$$

where R is the thermal resistance of the fuel by length unit.
The flow of coolant is increased because of the manometric height of the pump H, and decreased because of friction. If we assume H = C₁G² + C₂ω², we get

$$\frac{dG(t)}{dt} = -\frac{1}{t_0}\frac{(G(t))^2}{G(0)} + \frac{G(0)}{t_1} + \frac{m_s\,g\,C_2}{L}\left(\omega(t)^2 - \omega_0^2\right) \qquad (13)$$

where m_s is the sodium specific mass, g the gravity acceleration, L the length of the circuit, and t_0 and t_1 two time constants.
The variation of the angular speed of the pump comes directly from the equation of the kinetic moment. If I is the moment of inertia of the pump, C_M its torque, and K a friction constant, we have

$$I\,\frac{d\omega(t)}{dt} = C_M\,\delta_P - K\,\omega(t) \qquad (14)$$

with δ_P = 1 if the pump motor works, and 0 otherwise.

Finally, we assume an exponential law for the introduction of the antireactivity of the control rods, with a time constant equal to their fall time in the core:

$$\frac{dy(t)}{dt} = \frac{\delta_{scram}}{t^*}\,(1 - y(t)) \qquad (15)$$

where δ_scram switches from 0 to 1 when the scram is triggered.
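To indicate how this model enters the simulation, here is a compressed sketch of the power, reactivity and scram parts of system (10)-(15); the numerical values are placeholders (several entries of Table 5 are not fully legible), and the thermal-hydraulic equations (11)-(13) are left out:

```python
# placeholder parameter values; several entries of Table 5 are only partly
# legible, so these numbers are illustrative, not the authors' data
LAMBDA, A_AR, ALPHA = 4.0e-7, 0.09, 2.0e-2
ALPHA_S, ALPHA_D = 2.0e-5, 1.3e-5
T0, TC0 = 873.0, 1893.0
P_MAX, T_MAX, T_SCRAM = 2100.0e6, 923.0, 2.0

def reactivity(t, y, T, Tc):
    # rho(t) = alpha*t - A_ar*y(t) - alpha_S*(T - T(0)) - alpha_D*(Tc - Tc(0))
    return ALPHA*t - A_AR*y - ALPHA_S*(T - T0) - ALPHA_D*(Tc - TC0)

def scram_triggered(P, T, power_sensor_ok, temp_sensor_ok):
    # the scram fires on a power or coolant-temperature threshold crossing,
    # but only through a sensor that is still working (component k or j of the state)
    return (power_sensor_ok and P > P_MAX) or (temp_sensor_ok and T > T_MAX)

def power_and_rod_rhs(t, P, y, T, Tc, scram_on):
    # eq. (10): dP/dt = rho/Lambda * P ;  eq. (15): dy/dt = delta_scram/t* (1 - y)
    dP = reactivity(t, y, T, Tc) / LAMBDA * P
    dy = (1.0/T_SCRAM)*(1.0 - y) if scram_on else 0.0
    return dP, dy
```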
The difference between the states appears in δ_P and in the time when δ_scram becomes 1. The numerical values of the coefficients are gathered in Table 5, while Figs 12, 13 and 14 give the marginal distributions of the first three variables at different times in state (0, 1, 0), i.e., when the pump motor and the power sensor have failed. Let us remind ourselves that these distributions are normalized to the probability of being in the corresponding state at the time of study.

The interpretation of the results is quite easily done. Except perhaps for the distribution of the sodium temperature at short times, for which the situation is more complicated, the different marginals show two peaks. They represent the two situations the system can be in: either the scram was triggered before the power sensor failed, or the failure of this component occurred first. Since the control rods would normally (i.e. without failures) fall into the core at t = 0.45 s, the peak corresponding to the evolution of the variables without scram logically appears to be predominant at t = 0.5 s, but becomes less and less important at larger times. The peaks observed are broadened because of the failure of the pump motor, which slightly influences the values of the variables at short times. Table 6 gives the computation times for the different trials.

Let us finally notice that this application could not have been run with the synthesis method,8,9 since in its current development it asks for dynamics which are quadratic forms in the physical variables.

Fig. 12. Marginal distributions of ln(P/P₀) at t = 0.5, 1, 2 and 3 s.

Fig. 13. Marginal distributions of T/T₀ at t = 0.5, 1, 2 and 3 s.

Fig. 14. Marginal distributions of T_c/T_c0 at t = 0.5, 1, 2 and 3 s.

Table 6. Computation times for the reactor transient (h⁰ = 2.5 × 10⁻³ s; nbt = 100; 5 × 10⁴ histories)
t (s)    Computation time
0.5      3'17"
1.0      6'22"
2.0      12'43"
3.0      19'00"

8 CONCLUSIONS
Simulation seems to be the only practical way of dealing with the size of realistic problems of probabilistic dynamics. Indeed, too many assumptions have to be made in semi-analytical models, narrowing the range of applications likely to be treated by this approach. Moreover, Monte Carlo simulation can deal with a non-Markovian framework, when the assumptions made in probabilistic dynamics are too restrictive. To estimate the marginal distributions, an analogue Monte Carlo game appears to be inefficient. Therefore, we have developed a biased scheme to obtain all the marginals in one state i* at one time T. The histories played follow paths of the state graph leading to this state, and all the transitions are forced to occur before T. Every history thus contributes to the score. The scheme was first worked out for a non-repairable system, but a way to adapt it to the repairable case was suggested, yet not fully demonstrated. The program was tested on a simple two-dimensional three-state benchmark, and the agreement between theoretical and numerical results was satisfactory, even though the mean number of histories per estimation point was quite low. Then, different ways of accelerating the integration of the equations of the dynamics were introduced. They are based on two observations: the variables need not be known with a very high accuracy, and many identical calculations are needlessly repeated in a simulation. The implementation of these tricks in the algorithm allowed us to make the simulation up to ten times faster when applied to the benchmark. We believe the acceleration techniques can also be used in the estimation of more important characteristics (e.g.,
generalized unreliability, mean time to failure, ...). They constitute a simple alternative to the use of neural networks6,16 for achieving this integration. Their efficiency, as well as the parallelization of the simulation, is one of the keys to dynamic PSA for large industrial systems.
REFERENCES
1. Siu, N., Risk assessment for dynamic systems: an overview. Reliab. Engng Syst. Safety, 43 (1994) 43-73.
2. Devooght, J. & Smidts, C., Probabilistic dynamics: the mathematical and computing problems ahead. In Reliability and Safety Assessment for Dynamic Process Systems, NATO ASI Series F, vol. 120, Springer-Verlag, Berlin, 1994.
3. Marseguerra, M. & Zio, E., Towards dynamic PSA via Monte Carlo methods. Proc. of ESREL'93, Elsevier, Amsterdam, 1993, 415-427.
4. Devooght, J., Izquierdo, J. M. & Meléndez, E., Relationships between probabilistic dynamics, dynamic event trees and classical event trees. Reliab. Engng Syst. Safety (1995) (in press).
5. Devooght, J. & Smidts, C., Probabilistic reactor dynamics, I. The theory of continuous event trees. Nucl. Sci. Engng, 111 (1992) 229-240.
6. Marseguerra, M. & Zio, E., Improving the efficiency of Monte Carlo methods in PSA by using neural networks. Proc. of PSAM-II (1994) 025.1-025.8.
7. Smidts, C. & Devooght, J., Probabilistic reactor dynamics, II. A Monte Carlo study of a fast reactor transient. Nucl. Sci. Engng, 111 (1992) 241-255.
8. Devooght, J. & Labeau, P. E., Moments of the distributions in probabilistic dynamics. Ann. Nucl. Energy, 22 (1995) 97-108.
9. Labeau, P. E. & Devooght, J., Synthesis of multivariate distributions from their moments for probabilistic dynamics. Ann. Nucl. Energy, 22 (1995) 109-124.
10. Lewis, E. E. & Böhm, F., Monte Carlo simulation of Markov unreliability models. Nucl. Engng Des., 77 (1984) 49-62.
11. Kalos, M. H. & Whitlock, P. A., Monte Carlo Methods, Volume I: Basics. Wiley-Interscience, New York, 1986.
12. Spanier, J. & Gelbard, E. M., Monte Carlo Principles and Neutron Transport Problems. Addison-Wesley, Reading, MA, 1969.
13. Marseguerra, M. & Zio, E., Monte Carlo approach to PSA for dynamic process systems. Reliab. Engng Syst. Safety (accepted for publication).
14. Press, W. H., Flannery, B. P., Teukolsky, S. A. & Vetterling, W. T., Numerical Recipes: The Art of Scientific Computing (Fortran version). Cambridge University Press, Cambridge, 1990, chap. 16.
15. Smidts, C., Simulation des séquences accidentelles industrielles prenant en compte le facteur humain. Application au domaine des centrales nucléaires. Ph.D. thesis, Université Libre de Bruxelles, 1991, chap. IX.
16. Marseguerra, M., Nutini, M. & Zio, E., Approximate physical modelling in dynamic PSA using artificial neural networks. Reliab. Engng Syst. Safety, 45 (1994) 47-56.
APPENDIX

Results of the benchmark

Assumptions: a_i ≠ c_i, i = 1...3;  a_1 > a_2 > a_3;  a_3 > c_3;  y_0/x_0 = b_1/(a_1 − c_1) = b_2/(a_2 − c_2).

Under these assumptions, y = (y_0/x_0) x in states 1 and 2, and a history found in state 3 at time t is characterized by the transition times t_1 (1 → 2) and t_2 (2 → 3), 0 ≤ t_1 ≤ t_2 ≤ t, which are uniquely determined by (x, y, t).

Total distribution in state 3: with t_1 and t_2 the transition times determined by (x, y, t),

$$\pi(3,x,y,t) = \frac{\lambda_1\lambda_2\,e^{-(\lambda_1-\lambda_2)t_1 - \lambda_2 t_2}}{(a_1-a_2)(a_3-c_3)\;x\left(y - \dfrac{b_3 x}{a_3-c_3}\right)}$$

inside the domain reachable at time t, and 0 otherwise.

Marginal distributions (written for b_3 = 0, as in Table 1). With u = ln(x/x_0) and β = [λ_2(a_1−a_3) − λ_1(a_2−a_3)]/(a_2−a_3),

$$\pi(3,x,t) = \frac{\lambda_1\lambda_2}{(a_2-a_3)\,\beta\,x}\;e^{-\lambda_2\frac{u-a_3 t}{a_2-a_3}} \times \begin{cases} e^{\beta\frac{u-a_3 t}{a_1-a_3}} - 1, & x_0 e^{a_3 t} \le x \le x_0 e^{a_2 t} \\[4pt] e^{\beta\frac{u-a_3 t}{a_1-a_3}} - e^{\beta\frac{u-a_2 t}{a_1-a_2}}, & x_0 e^{a_2 t} \le x \le x_0 e^{a_1 t} \\[4pt] 0, & \text{otherwise,} \end{cases}$$

and, with v = ln(y/y_0) and γ = [λ_2(a_1−c_3) − λ_1(a_2−c_3)]/(a_2−c_3),

$$\pi(3,y,t) = \frac{\lambda_1\lambda_2}{(a_2-c_3)\,\gamma\,y}\;e^{-\lambda_2\frac{v-c_3 t}{a_2-c_3}} \times \begin{cases} e^{\gamma\frac{v-c_3 t}{a_1-c_3}} - 1, & y_0 e^{c_3 t} \le y \le y_0 e^{a_2 t} \\[4pt] e^{\gamma\frac{v-c_3 t}{a_1-c_3}} - e^{\gamma\frac{v-a_2 t}{a_1-a_2}}, & y_0 e^{a_2 t} \le y \le y_0 e^{a_1 t} \\[4pt] 0, & \text{otherwise.} \end{cases}$$

With the data of Table 1, these expressions reduce to eqs (8) and (9).