
Reliability Applications
Ricardo O. Foschi
Department of Civil Engineering
The University of British Columbia,
Vancouver, Canada
26/04/99
Outline
• Introduction
• Performance Function Definition
• Methods to Estimate Reliability
• Software
• Example: A Beam Design Problem
• Example: A Fire Application
• Example: Calibrating a Code
The response of a structure, as that of any other
engineering system, is influenced by a set of
intervening variables. These enter into a model
for the system, and some will control the system
capacity while others will be associated with the
demands.
Example: In beam bending, the bending strength
is a variable associated with the capacity, while
the bending stress produced by the applied
loads is the demand.
All variables will have a degree of uncertainty.
Therefore, both the capacity and the demands
will be uncertain or random.
There is then a probability that the variables will
combine, randomly, to produce a demand greater
than the capacity.
In such a situation the system will “fail” or will not
perform as intended. We can then calculate the
"probability of failure" of the system and adjust
its design to bring this probability to an
acceptable level.
Definition of Performance or Limit State

Performance function:

G = C(XC, dC) - D(XD, dD)

XC, XD: Random Variable Vectors
dC, dD: Design Parameter Vectors
Probability of failure = Pf = Prob(G<0)
Reliability = 1.0 - Pf
Software: RELAN
Why bother?
Reliability-based design does not mean that
older, traditional approaches were “unsafe”.
However, they were not based on a specific
quantification of safety, nor did they lead to
uniform safety levels across materials or
structural applications.
Using reliability-based procedures, wood
structures can be designed to be “as safe” as
those made with steel or concrete, since we
can quantify safety and make them all equal
in this regard. Reliability in performance
forms the basis of new “Performance-Based
Design Code Guidelines”.
Reliability methods allow a clear introduction of
quality control procedures into the design
process.
These methods also allow a better use of
materials, particularly of material mixes, and thus
promote innovation and better resource utilization.
What is required for each of the random
variables?
• Basic statistics for each one (mean, standard
deviation, cumulative probability distributions)
• Should they be modified for extremes?
• Should they be modified for lower bounds?
• Should they be modified for upper bounds?
• Are they correlated? If so, what is the
correlation structure?
• Usual cumulative probability distributions are:
Normal, Lognormal, Weibull, Gumbel,
Uniform, Gamma, Beta.
[Figure: fit of data with a cumulative distribution function F(x); vertical axis F(x) from 0.00 to 1.00, horizontal axis x from 0 to 50.]
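As an aside on specifying these distributions from a mean and standard deviation, the short sketch below (assuming Python with numpy/scipy, which is not part of RELAN) shows the moment-to-parameter conversions for the Normal, Lognormal and Extreme Type I (Gumbel) cases; the numerical values are illustrative only.

```python
# Sketch: specifying common distributions from a mean and standard deviation.
# Assumes Python with numpy/scipy; all example values are illustrative only.
import numpy as np
from scipy import stats

def normal_from_moments(mean, std):
    return stats.norm(loc=mean, scale=std)

def lognormal_from_moments(mean, std):
    # Convert the mean/std of X into the parameters of ln(X).
    cov = std / mean
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))
    mu_ln = np.log(mean) - 0.5 * sigma_ln**2
    return stats.lognorm(s=sigma_ln, scale=np.exp(mu_ln))

def gumbel_from_moments(mean, std):
    # Extreme Type I (largest): scale = std*sqrt(6)/pi, loc = mean - 0.5772*scale
    scale = std * np.sqrt(6.0) / np.pi
    loc = mean - 0.57722 * scale
    return stats.gumbel_r(loc=loc, scale=scale)

# Example: a Gumbel live load with mean 8.0 and standard deviation 2.0
q = gumbel_from_moments(8.0, 2.0)
print(q.mean(), q.std())   # should return approximately 8.0 and 2.0
```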
Reliability Estimation Methods
Simulation (standard Monte Carlo):

Pf = Nf / N

where Nf is the number of times that G < 0
out of a sample of N trials.
Disadvantage: too large an N is required for
small probabilities; time consuming.
Advantage: approaches the correct probability
as N → ∞ (can be used as a benchmark).
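As an illustration of the estimator Pf = Nf/N, the following minimal sketch uses a hypothetical G = C - D with an assumed lognormal capacity and Normal demand (values chosen only for the example):

```python
# Standard Monte Carlo estimate of Pf = Nf / N for a hypothetical G = C - D.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000                        # a large N is needed for small probabilities
C = rng.lognormal(mean=np.log(40.0), sigma=0.15, size=N)   # assumed capacity
D = rng.normal(loc=25.0, scale=5.0, size=N)                # assumed demand
G = C - D
Nf = np.count_nonzero(G < 0.0)       # number of trials with G < 0
print("Pf =", Nf / N)
```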
Approximate Methods
• FORM: First Order Reliability Method
This is a very efficient method based on the
calculation of a "reliability index" β. It is only
approximate. Three conditions must be met
for FORM to give exact answers:
• All variables must be Normal
• All variables must be un-correlated
• The G function must be linear
Basic Problem: FORM
Consider the following performance function:
G = X1 – X2
This is a linear function. Let X1 and X2 be
Normal, un-correlated variables.
Let x1 and x2 be normalized variables using
the means and standard deviations of X1 and
X2:

x1 = (X1 - M1)/σ1
x2 = (X2 - M2)/σ2

Therefore, G < 0 can be expressed as:

x2 > (M1 - M2)/σ2 + x1 σ1/σ2
Thus, the failure zone is the region above the
straight line G = 0. Rotating the axes, the failure
zone is also completely described by y2 > β.

[Figure: failure surface G = 0 in the standardized (x1, x2) plane; rotated axes (y1, y2), design point P, and the reliability index β as the distance from the origin O to the line G = 0.]
Since it can be shown that y2 is also a Normal
(standard) variable, the probability of failure Pf
can be obtained from the Normal distribution
function Φ:

Pf = Φ(-β)

Thus, FORM only requires the calculation of the
distance β. This is the minimum distance from
the origin of coordinates to the line G = 0
(the failure surface separating failure from survival).
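For this linear case with un-correlated Normals, β has the closed form (M1 - M2)/sqrt(σ1² + σ2²). A minimal check of this result, with illustrative numbers only:

```python
# Closed-form FORM result for the linear case G = X1 - X2 with
# un-correlated Normal variables: beta = (M1 - M2) / sqrt(s1^2 + s2^2).
from math import sqrt
from scipy.stats import norm

M1, s1 = 40.0, 5.0     # illustrative capacity statistics
M2, s2 = 25.0, 4.0     # illustrative demand statistics

beta = (M1 - M2) / sqrt(s1**2 + s2**2)
Pf = norm.cdf(-beta)   # Pf = Phi(-beta), exact for this linear Normal case
print(beta, Pf)
```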
Algorithms have been developed to
transform all variables to Normals, and to
un-correlate them if they are correlated.
However, the linearity of G is a feature of
the specific problem and cannot be
modified. These transformations and the
general nonlinearity of G contribute to the
approximate character of the FORM
approach.
Normally, the FORM results are very good,
with a large gain in computational
efficiency.
Algorithm to obtain β
• General algorithms for the calculation of the
reliability index β are well developed (e.g.,
Rackwitz and Fiessler, 1978). These are
quasi-Newton, iterative procedures based on
the gradient of G.
[Figure: iterative search in the space of standardized variables; the reliability index β is the distance from the origin O to the design point P* (x*) on the failure surface G = 0.]
Once β is found, the point P on the failure
surface G = 0 is determined. This point is
called the "design point" or "most likely
failure combination point". The vector
joining the origin O with P is called the
sensitivity vector n, because its direction
cosines give the derivatives of β with
respect to the variables x:

ni = ∂β/∂xi

These are very useful to determine the
most important variables in a problem.
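A bare-bones sketch of such an iteration is given below, restricted to independent Normal variables so that no distribution transformation is needed (the general Rackwitz-Fiessler algorithm also maps non-Normal variables to equivalent Normals at each step). The G function and statistics in the example call are illustrative only.

```python
# Sketch of the gradient-based (Hasofer-Lind / Rackwitz-Fiessler type) search
# for beta and the design point, restricted to independent Normal variables.
import numpy as np

def form_beta(g, means, stds, tol=1e-6, max_iter=100):
    means = np.asarray(means, float)
    stds = np.asarray(stds, float)
    n = len(means)
    y = np.zeros(n)                           # start at the standard-space origin
    grad = np.ones(n)
    for _ in range(max_iter):
        x = means + stds * y                  # map back to physical space
        g0 = g(x)
        eps = 1e-6
        # numerical gradient of G with respect to the standardized variables y
        grad = np.array([(g(means + stds * (y + eps * e)) - g0) / eps
                         for e in np.eye(n)])
        # step to the point of the linearized failure surface closest to the origin
        y_new = grad * (grad @ y - g0) / (grad @ grad)
        if np.linalg.norm(y_new - y) < tol:
            y = y_new
            break
        y = y_new
    beta = np.linalg.norm(y)                  # reliability index
    alpha = -grad / np.linalg.norm(grad)      # direction cosines (sensitivities)
    return beta, means + stds * y, alpha

# Illustrative use with a nonlinear G (all numbers assumed):
g = lambda X: X[0] * X[1] - 80.0              # capacity product minus a fixed demand
beta, x_star, alpha = form_beta(g, means=[10.0, 12.0], stds=[1.5, 2.0])
print("beta =", beta, " design point =", x_star, " alpha =", alpha)
```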
Multiple failure modes:
A system may fail in a variety of ways. For
each failure mode, a separate G function
must be written.
If failure of the system is triggered by the
failure of at least one mode, then the
system is called series or brittle.
Two modes can be correlated if they share
some of the variables (e.g., snow loads can
produce simultaneous beam failure in
bending and in shear).
Example: Beam Design
[Figure: simply supported beam of span L with a concentrated load P + D at midspan.]

Mode 1: Failure in bending
G1 = X(2) - 6 [X(3)+X(4)] L / (4.0 X(5) X(6)^2)

Mode 2: Deflection limit
G2 = L/K - [X(3)+X(4)] L^3 / (48 X(1) X(5) X(6)^3 / 12)
Random variables:
X(1) = modulus of elasticity E (MPa)
X(2) = bending strength fb (MPa)
X(3) = concentrated dead load D at midspan (kN)
X(4) = concentrated live load P at midspan (kN)
X(5) = beam width (m)
X(6) = beam depth (m)
Deterministic parameters:
L = beam span (m)
K = deflection criterion limit
Note that the G functions are quite nonlinear
and, in addition, some of the random
variables may not be Normal, requiring
transformation to Normality.
One would then expect that FORM would
have some error, to be assessed by carrying
out a simulation.
Let us run the software RELAN for this case.
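RELAN output is not reproduced here; the sketch below is only a rough Monte Carlo check of the two modes, in which every distribution type and parameter value is assumed for illustration (they are not the statistics used in the RELAN run):

```python
# Rough Monte Carlo check of the two beam modes; every distribution and
# parameter below is assumed for illustration only (not the RELAN input).
import numpy as np

rng = np.random.default_rng(1)
N = 500_000
L, K = 4.0, 180.0                                    # span (m), deflection limit L/K

E  = rng.lognormal(np.log(12.0e6), 0.15, N)          # X(1): E (kPa), assumed
fb = rng.lognormal(np.log(35_000.0), 0.15, N)        # X(2): bending strength (kPa), assumed
D  = rng.normal(2.0, 0.2, N)                         # X(3): dead load at midspan (kN), assumed
P  = rng.gumbel(7.1, 1.56, N)                        # X(4): live load at midspan (kN), assumed
b  = rng.normal(0.06, 0.002, N)                      # X(5): width (m), assumed
h  = rng.normal(0.24, 0.005, N)                      # X(6): depth (m), assumed

W  = D + P
G1 = fb - 6.0 * W * L / (4.0 * b * h**2)             # Mode 1: bending
G2 = L / K - W * L**3 / (48.0 * E * b * h**3 / 12.0) # Mode 2: deflection

print("Pf, bending    :", np.mean(G1 < 0))
print("Pf, deflection :", np.mean(G2 < 0))
print("Pf, series     :", np.mean((G1 < 0) | (G2 < 0)))  # fails if either mode fails
```

The last line treats the two modes as a series system, as discussed above: the beam "fails" whenever at least one of the two G functions is negative in a given trial.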
Another Example: Reliability Considering Fire
[Figure: glulam beam of width B, depth H and span L under distributed loads P + Q, with a charred layer of thickness e around the section.]

Consider a glulam beam of dimensions B, H
and span L, with fire exposure creating a
burnt or charred section of thickness e.
The burning rate is a constant β, so that after
time T the amount burnt is e = βT.
P and Q are two distributed loads (one
permanent, the other live).
Random Variable                       Mean          Std. Deviation   Distribution
f, bending strength (kN/m^2)          35,000        5,250            Lognormal
P (kN/m)                              2.0           0.2              Normal
Q (kN/m)                              8.0           2.0              Extreme Type I
β, burn rate (mm/minute)              0.5           0.125            Lognormal
E, modulus of elasticity (kN/m^2)     12,000x10^3   1,800x10^3       Lognormal
1) Bending failure under normal conditions:

G1 = f - [(Q + P) L^2 / 8] (H/2) / (B H^3 / 12)

2) Deflection under normal conditions:

G2 = L/180 - 5 (Q + P) L^4 / (384 E (B H^3 / 12))

3) Bending failure under fire conditions:

G3 = f - [(Q + P) L^2 / 8] (H/2 - e) / ((B - 2e)(H - 2e)^3 / 12)
Corresponding target (desired) reliability levels are:

β1 = 4.0 for failure under normal circumstances
β2 = 2.0 for deflection under normal circumstances
β3 = 2.5 for failure under fire conditions after T = 60 minutes of exposure

The problem is to find optimum values of H and B for
the cross-section that will achieve the specified
reliabilities or deviate as little as possible from them.
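RELAN and the optimization itself are not reproduced here, but the fire mode G3 can be checked roughly by simulation, as sketched below. The statistics follow the table above; the span L, however, is not given in the slides and is assumed, so the printed β will not match the optimization results.

```python
# Monte Carlo estimate of beta_3 (bending under fire) for trial dimensions.
# Statistics follow the table above; the span L is assumed, since it is not
# listed, so the numbers printed will not reproduce the optimization results.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N = 2_000_000
L = 10.0                                   # assumed span (m)
T = 60.0                                   # fire exposure (minutes)
H, B = 0.536, 0.257                        # trial cross-section (m)

def lognormal(mean, std, size):
    s = np.sqrt(np.log(1.0 + (std / mean) ** 2))
    return rng.lognormal(np.log(mean) - 0.5 * s**2, s, size)

f     = lognormal(35_000.0, 5_250.0, N)             # bending strength (kN/m^2)
P     = rng.normal(2.0, 0.2, N)                      # permanent load (kN/m)
scale = 2.0 * np.sqrt(6.0) / np.pi                   # Gumbel scale for std = 2.0
Q     = rng.gumbel(8.0 - 0.57722 * scale, scale, N)  # live load (kN/m), Extreme Type I
rate  = lognormal(0.5, 0.125, N) / 1000.0            # burn rate, converted to m/minute

e  = rate * T                                        # charred thickness after T minutes
M  = (Q + P) * L**2 / 8.0                            # midspan moment
I  = (B - 2*e) * (H - 2*e)**3 / 12.0                 # residual section inertia
G3 = f - M * (H / 2.0 - e) / I

Pf = np.mean(G3 < 0)
print("Pf =", Pf, " beta_3 ~", -norm.ppf(Pf))
```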
Optimization Results (Performance-Based Design)
H (m)   B (m)   β1 Achieved   β2 Achieved   β3 Achieved   Optimization Strategy
0.533   0.249   4.32          1.91          2.32          Unconstrained
0.536   0.257   4.47          2.09          2.50          Forcing β3 = 2.50
Target β        4.00          2.00          2.50
Another Example: Code Calibration
Let’s assume that a Code design equation is
of the form:
αd DN + αL LN = φ RN S

where αd, αL = load factors
φ = resistance factor
DN, LN = nominal (design) load effects
RN = nominal resistance
S = geometric design parameter
We use the design equation to calculate the
parameter S; e.g., in bending, S = b h^2 / 6.
The load and resistance factors must be such
that the designed beam achieves a target level
of reliability under the actual, random loads.
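A small numerical illustration of this step, with all factors and nominal values assumed (they are not taken from the slides):

```python
# Hypothetical numbers, only to illustrate how the design equation fixes S:
# alpha_d*DN + alpha_L*LN = phi*RN*S  =>  S = (alpha_d*DN + alpha_L*LN)/(phi*RN)
from math import sqrt

alpha_d, alpha_L, phi = 1.25, 1.50, 0.90     # assumed factors
DN, LN = 5.0, 15.0                           # assumed nominal load effects (kN*m)
RN = 30_000.0                                # assumed nominal strength (kN/m^2)

S = (alpha_d * DN + alpha_L * LN) / (phi * RN)   # required section modulus (m^3)
b = 0.10                                     # chosen width (m)
h = sqrt(6.0 * S / b)                        # depth from S = b*h^2/6
print(S, h)
```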
In order to calculate the achieved reliability we
must use the performance function G:
G = R S - (D + L)
Introducing the calculated S into the
function G, we can write it as

G = (R / RN) (αd r + αL) / φ - (r d + l)

where
d = D/DN, normalized dead load
l = L/LN, normalized live load
r = nominal dead/live load ratio, DN/LN
Using the last form of G, we can find the reliability
index  as a function of the load and resistance
factors, choosing the combination which provides
for the desired target.
Notice that the selected combination will only
provide the desired β for the specific properties
used for the loads d and l. If the live load is snow,
and the location changes, the statistics of l will
change and the reliability will change. Thus, a few
factors in the design equation cannot provide for
constant reliability across conditions and
applications. The factors must be optimized to
minimize the variation in reliability.
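As an illustration, the sketch below sweeps the resistance factor φ for fixed load factors and estimates the achieved β from the normalized G by simulation; the statistics of R/RN, d and l, the load factors and the ratio r are all assumed.

```python
# Sweep of the resistance factor phi for fixed load factors, estimating the
# achieved beta from the normalized G; all statistics below are assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
N = 2_000_000
alpha_d, alpha_L = 1.25, 1.50
r = 0.25                                          # nominal dead/live ratio DN/LN, assumed

R_over_RN = rng.lognormal(np.log(1.10), 0.15, N)  # R/RN, assumed bias and COV
d = rng.normal(1.0, 0.10, N)                      # D/DN, assumed
scale = 0.25 * np.sqrt(6.0) / np.pi
l = rng.gumbel(1.0 - 0.57722 * scale, scale, N)   # L/LN, assumed Gumbel, mean 1.0, std 0.25

for phi in (0.7, 0.8, 0.9):
    G = R_over_RN * (alpha_d * r + alpha_L) / phi - (r * d + l)
    beta = -norm.ppf(np.mean(G < 0))
    print(f"phi = {phi:.2f}  ->  beta ~ {beta:.2f}")
```

Repeating the sweep for several load cases (different r values or different statistics of l) shows how much the achieved β varies, which is precisely the variation that the factor optimization tries to minimize.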
A customized approach avoids the Code and
estimates the reliability achieved in each
particular case.
A customized approach would permit the use of
more sophisticated structural analysis software
instead of the simpler methods in the design
equations. This would also promote analysis of the
whole system rather than of individual components only.
A Limit States Design Code is only a short-cut
and not a proper reliability analysis.