Quantitative risk analysis for long-lived water assets
B. Ward*, A. Selby**, A. Palmer**, S. Gee** and P. Davis***
*AECOM & Centre for Water Systems, Exeter University, UK, EX4 4QJ (E-mail: [email protected])
** AECOM, 63 - 77 Victoria Street; St Albans, Hertfordshire, UK, AL1 3ER
(E-mail: [email protected], [email protected], [email protected])
*** CSIRO Land and Water, 37 Graham Road, Highett, Australia, VIC 3190 (E-mail: [email protected])
Abstract
A suite of risk based modelling tools has been developed to assist water
companies in the production of asset management plans for long-lived and highly
critical water assets. The authors present a Quantitative Risk Analysis (QRA)
framework which is founded on customisable models for the assessment of asset
reliability, consequence of failure and intervention optioneering. The approach has
been applied within the UK water industry to a variety of linear and non-linear
water carrying assets of critical importance; tunnels, conduits, masonry aqueducts,
pipe bridges and service reservoirs. The overall goal of the QRA framework is to
permit the identification of optimised maintenance regimes and pro-active
intervention options which can be planned systematically, giving due
consideration to asset risk.
The paper specifically focuses upon the innovative reliability modelling tools that
the authors have developed which form the foundations of the overall QRA
framework. These take the form of deterministic models, physical probabilistic
models and an enhanced condition to reliability mapping technique.
Keywords
Quantitative risk analysis; deterioration; asset performance; asset management
Introduction
The effective management of infrastructure asset failure risk requires knowledge of asset
condition, rates of deterioration and the predicted consequences of failure (Kleiner et al.,
2006). Egerton (1996) recognised that the benefits of employing Quantitative Risk Analysis (QRA)
techniques have been widely acknowledged in the oil, nuclear and chemical industry. However, the
application of such techniques has historically been less prevalent in the management of water
industry assets. Traditional approaches to estimating failure probabilities in below-ground water
networks rely on historical failure data. Generally, where historical data is plentiful, statistical
methods are used in which water main breakage data is fitted to time-exponential functions and
future failure rates extrapolated (Jarrett et al., 2001). While these methods continue to be widely
used to support asset management, they also require large failure databases as a basis for analysis.
Assuming good quality data, this does not pose a problem for small diameter low consequence
mains, which are often operated under a “run to failure” strategy. However, for larger diameter,
business-critical pipeline assets, there is much less recorded failure data and proactive strategies are
required to pre-empt high consequence failure events.
Fundamental to risk analysis is the ability to define an asset's reliability and the
associated consequence of the asset's failure (Pollard et al., 2004). The water industry owns and
operates a large number of long-lived and potentially highly critical infrastructure assets which tend
to be variable and complex in nature. Within water infrastructure systems a high degree of
variability and complexity exists in the analysis of both reliability and consequence. This is partly
due to the individuality of water infrastructure assets, but it is mainly due to the dependencies and
interactions that exist between individual components, sub-assets and assets. In response to this
challenge, the authors present a number of bespoke risk based modelling tools that establish an
effective QRA approach for water assets. The approach has been applied within the UK water
industry to a variety of linear assets (tunnels, conduits and siphons) and non-linear assets (masonry
aqueducts, pipe bridges, well houses and service reservoirs). The outputs from the QRA framework
have formed the foundation of asset management plans by allowing for optimised maintenance
regimes and pro-active interventions to be planned systematically. Figure 1 visualises how the
consequence, reliability and intervention models fit together within a quantitative risk analysis
framework.
[Figure 1 shows three linked model groups within the Quantitative Risk Framework, all feeding the Asset Management Plan: Consequence Models (direct and in-direct consequence, covering local impacts, restoration cost, and resilience and contingency plans); Reliability Models (theoretical, physical probabilistic, and condition to reliability mapping); and Intervention Optioneering (feasibility, cost and time).]
Figure 1. Quantitative Risk Analysis Framework
Reliability Modelling
It is a requirement of the UK water industry's economic regulator OFWAT to justify capital
investment using risk-based approaches. The Capital Maintenance Planning Framework (CMPF),
published by UKWIR (2002), sets the industry benchmark for risk-based decision making,
considering the probability and consequence of failure. However, it has been recognised that, despite
the successful application of the CMPF principles for assets with a robust history of failure, the
principles present a number of challenges for assets with limited failure data, termed "long-life, low
probability" assets (UKWIR, 2011).
The authors hereby present three bespoke reliability models that are suited to long-lived assets. The
models take one of three forms, depending on the nature of the assets being modelled and the
availability of data: (1) simplified deterministic based models where the behaviour of the structure
is understood and can be defined by mathematical formula but in the absence of detailed
information, (2) physical probabilistic models where the characteristics of the asset are understood
and failure or partial failure data is available, and (3) enhanced condition to reliability mapping.
The latter is used where inspection information is the only available data source for the asset,
i.e., in the absence of failure records, and where the impact of each element on structural
performance can be defined.
Simplified deterministic models. Generally speaking, deterministic models such as Finite Element
Analysis (FEA) are built around the understanding of the underlying physical parameters that
govern the failure of an asset. The behaviour of these physical parameters is modelled to
determine the response of the structure to a variety of conditions. FEA achieves this by solving a
series of underlying structural analysis equations for an inter-connected mesh of smaller elements
used to represent the overall structure (Fagan, 1992). For accurate behavioural predictions to be
obtained, these models require a detailed level of input information. This includes knowledge of the
geometry and material properties of the structure itself, along with the loading configurations it is
subjected to. Whilst the approach is well-established and well-researched in the engineering
industry, its use for modelling the behaviour of water infrastructure assets is not often feasible. This
is partly due to the linear nature of the assets, which gives rise to variations in loading conditions,
soil characteristics and asset geometry along the structure's length. However, it is even less suitable
for older water carrying assets which are constructed as buried unreinforced arch structures. For
such structures, FEA can be a useful tool for the identification of potential cracking zones, but it has
fundamental limitations when used to model the stability and collapse mechanisms of the structure
(Block et al., 2006).
To overcome these limitations, the authors present the use of a simplified thrust line equilibrium
model which can be used to adequately understand the structural behaviour of individual sections of
unreinforced arch structures, i.e., large sewers, conduits or tunnels. The model is presented in a
user-friendly spread-sheet format founded on first-principles arch analysis. The approach is
becoming increasingly well recognised for its ability to identify the range of forces acting on a
structure that permit a state of equilibrium to be achieved, i.e., without rotation or sliding (Harvey,
2001). Given that the model can resolve a state of equilibrium, via manipulation of the magnitude of
the external forces acting on the structure, a reliability assessment is made by considering the
likelihood of the localised ground conditions being able to exert the required force identified by the
model. A geotechnical assessment of the soil properties at each section has then been undertaken to
provide a lower and upper value for the internal angles of friction (Φ) at each location, together with
an estimate (cumulative) of their uncertainties. It has been assumed that the distribution of the
internal angles of friction at each location will follow a Normal Distribution. Figure 2 illustrates
this distribution based on a lower Φ value of 28° and uncertainty of 25% and an upper Φ value of
32° and uncertainty of 75%.
Having defined both the load capacity of the ground and the load demand identified by the model,
the Reliability can be determined by evaluating the required passive pressure (Kreq) against the
likelihood of the local geology being able to provide this pressure. For example, the Coulomb
equation (Equ 1) can be solved to derive the necessary soil properties at a given location where the
model has identified that a required passive pressure (Kreq) of 1 N/mm² is needed to retain a state of
equilibrium. At this location: α = angle of wall relative to horizontal (90°), β = angle of ground
slope (21°), and δ = angle of friction between wall and soil (0°). Hence, the internal angle of friction
(Φ) for the soil at this location is solved as 21.73°.
Figure 2. Normally distributed soil assessment
K_{req} = \frac{\sin^2(\alpha + \phi)\,\cos\delta}{\sin\alpha \,\sin(\alpha - \delta)\left[1 - \sqrt{\dfrac{\sin(\phi + \delta)\,\sin(\phi - \beta)}{\sin(\alpha - \delta)\,\sin(\alpha + \beta)}}\right]^{2}}    (Equ 1)
Displaying this on the corresponding soil distribution indicates that there is a small probability of
failure, represented by a small, almost triangular area to the left of the 21.73° line. This can be
illustrated more clearly by using a cumulative distribution plot that is suitably transformed to
represent Reliability. Hence, when Φ = 21.73°, R = 0.99736.
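The reliability calculation above can be reproduced numerically. The sketch below is a minimal illustration, assuming the lower and upper soil assessments (28° at 25% and 32° at 75%) are interpreted as the 25th and 75th percentiles of the Normal distribution of Φ shown in Figure 2:

```python
from statistics import NormalDist

# Assumption: the lower/upper soil assessments are the 25th and 75th
# percentiles of a Normal distribution of the internal angle of friction.
std = NormalDist()
phi_lo, p_lo = 28.0, 0.25   # lower assessed phi (degrees)
phi_hi, p_hi = 32.0, 0.75   # upper assessed phi (degrees)

# Recover the distribution parameters from the two percentile points.
sigma = (phi_hi - phi_lo) / (std.inv_cdf(p_hi) - std.inv_cdf(p_lo))
mu = phi_lo - sigma * std.inv_cdf(p_lo)   # 30 degrees, by symmetry

soil_phi = NormalDist(mu, sigma)

# Reliability = P(available phi exceeds the phi required for equilibrium).
phi_required = 21.73
reliability = 1.0 - soil_phi.cdf(phi_required)
print(f"mu = {mu:.1f}, sigma = {sigma:.2f}, R = {reliability:.5f}")  # R ~ 0.9974
```

Under this percentile interpretation the computed reliability agrees with the 0.99736 value quoted above.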
Figure 3. Cumulative distribution function
Whilst the model is a simplified representation of the structure’s behaviour when compared to FEA,
the authors deemed the approach suitable where detailed soils data is generic and there are
uncertainties in the knowledge of structural attributes (i.e. thickness, precise geometry and
compressive strength). It also lends itself to moderately large-volume batch analysis, using an
optimising routine that automatically searches for the appropriate set of solutions that balance the
internal forces, in turn identifying the external forces that each segment requires to retain a
state of equilibrium.
Physical probabilistic models. The development of so-called Physical-Probabilistic Models (PPM)
is a method that has been used previously to support the asset management of large diameter cast
iron mains (Davis and Marlow, 2008) and newer mains with limited failure history (Davis et al.,
2007). In the absence of historical data, this approach relies on physical models that are based on a
fundamental understanding of the actual deterioration and failure processes that occur in practice.
For example, models have been developed previously to predict the failure of buried cast iron
pipelines by combined corrosion and brittle fracture (Davis and Marlow, 2008; Sadiq et al., 2004).
While the physical model captures realistic processes, uncertainty is also introduced via the
representation of key model variables as stochastic, rather than single-valued quantities. In practice,
PPM variables (such as external corrosion rate for example) are represented by appropriate
probability distributions (with mean values and variances) as opposed to single numbers. Monte
Carlo simulation methods are then used to repeatedly “sample” the predicted lifetime of a
hypothetical set of pipelines each with a randomly assigned corrosion rate from the underlying
distribution (Moglia et al., 2008). The end result is a set of predicted lifetimes for the simulated
cases, which are then fitted to their own probability distribution. This allows curves of failure
probability vs. age to be estimated.
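The Monte Carlo procedure can be illustrated with a short sketch. All parameters here (wall thickness, critical limit, corrosion-rate distribution) are hypothetical placeholders, not values from the studies cited:

```python
import bisect
import random

random.seed(1)

# Hypothetical illustrative parameters -- not figures from the study.
WALL_MM, CRITICAL_MM = 10.0, 4.0        # original and minimum-safe wall (mm)
RATE_MEAN, RATE_SD = 0.08, 0.03         # linear corrosion rate (mm/year)

def sample_lifetime() -> float:
    """One Monte Carlo trial: random corrosion rate -> predicted pipe lifetime."""
    rate = max(random.gauss(RATE_MEAN, RATE_SD), 1e-4)  # truncate near zero
    return (WALL_MM - CRITICAL_MM) / rate

lifetimes = sorted(sample_lifetime() for _ in range(50_000))

def failure_probability(age_years: float) -> float:
    """Empirical P(failure by a given age) from the simulated lifetimes."""
    return bisect.bisect_right(lifetimes, age_years) / len(lifetimes)

for age in (25, 50, 75, 100):
    print(f"P(failure by year {age}): {failure_probability(age):.3f}")
```

In practice the simulated lifetimes would then be fitted to a parametric distribution, as described above, rather than read off empirically.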
As part of the development of an asset management plan for critical pipelines, PPMs have been
developed based on approaches in the literature (Davis and Marlow, 2008). As input to the PPMs,
appropriate probability distributions are defined for different categories of soil corrosivity identified
in the field.
Where available, results from previous Non-Destructive Testing (NDT) of the
remaining wall thickness in buried pipelines can be used to derive a probability distribution for
linear corrosion rate (in mm/year). Where NDT test data is unavailable, derived distributions are
scaled for remaining soil categories, based on expected linear corrosion rates quoted in the literature
(Shreir et al., 1994; UKWIR, 2001). With probability distributions established for corrosion
rates in different soil types, PPMs for below-ground pipelines can be applied using Monte Carlo
simulation following methods outlined in the literature (Moglia et al., 2008).
Table 1 shows an example output from a PPM, applied to a set of below-ground cast iron and steel
pipes. The results show the number of pipe segments with different estimated future failure
probabilities over an assessment period between 2011 and 2056.
Table 1. Number of pipe segments and probability of failure over time (5 year increments)
Prob.   2011  2016  2021  2026  2031  2036  2041  2046  2051  2056
>50%      27    34    34    37    43    43    43    47    57    69
>80%      20    25    29    34    34    37    37    37    43    47
>95%      18    20    25    29    34    34    34    37    37    37
>98%      18    18    20    25    29    34    34    34    37    37
Condition to reliability mapping. An assessment methodology has been developed to better
understand and quantify the reliability of "long-life, low probability of failure" assets, by utilising
either existing, or purposefully obtained, condition inspection information. The approach has been
used to derive a reliability score for a number of UK water companies for assets such as service
reservoirs, masonry aqueducts, pipe bridges and valve houses. One of the major benefits of the
approach is its consistency and integration with the aforementioned deterministic and physical
probabilistic models, allowing a consistent platform to compare the reliability of assets modelled
using one or more assessment methodologies. The condition to reliability methodology is developed
based on a Severity and Extent approach, whereby the severity of damage/defect and the spatial
extent of the damage/defect are assessed to give a condition score for a particular element, similar to
that used for bridge condition inspections (CSS Bridges Group, 2002; CSS Bridges Group, 2004).
An Element Weighting Factor is also included to account for the criticality of each element
in relation to the structure's overall stability. The CSS Severity and Extent descriptions are tabulated
below in Table 2. For example, a code of 5C would define an element as non-functional/failed to a
moderate extent.
Table 2. CSS Tables G.8 & G.7 Severity & Extent Descriptions

Severity
Code  Description
1     As new condition or defect has no significant effect on the element
2     Early signs of deterioration, minor defect/damage, no reduction in functionality of element
3     Moderate defect/damage, some loss of functionality could be expected
4     Severe defect/damage, significant loss of functionality and/or element is close to failure/collapse
5     The element is non-functional/failed

Extent
Code  Description
A     No significant defect
B     Slight, not more than 5% of surface area/length/number
C     Moderate, 5% - 20% of surface area/length/number
D     Wide, 20% - 50% of surface area/length/number
E     Extensive, more than 50% of surface area/length/number
Given that limited failure data is available for these assets, and that by definition this type of
modelling is binary, i.e., there are only two states, survival or failure, it has been assumed that the
Logistic function is appropriate for modelling reliability. In order to calibrate the logistic function, it
is necessary to define it by at least two points. However, given that there are no tangible
benchmarks available in terms of actual failure, these points are assessed on a reasoned basis, using
a weighted condition grading. The Weighted Condition Grades consist of 70 values, of which 31 are
distinct, and these range between 0 and 16. Table 3 shows the mapping from the Permissible
Condition Grades to the weighted values for each combination. From this table a logistic
function is proposed to define reliability, using two parameters to fully specify it in the form:
P(w) = \frac{1}{1 + \exp\left(-(A + B \cdot w)\right)}    (Equ 2)
where A is a location parameter, B is a shape parameter and w is the Weighted Condition Score.
This function is similar in shape to the cumulative Normal Distribution (i.e. "S-shaped") and, when
it is expressed as a probability density function (pdf), it is symmetrical about its mean.
Table 3. Condition grade to weighted value mapping

Permissible Condition Grade (Severity: 1 - 5; Extent: A - E) and its
Condition Grade Score (Severity x Extent), against the element Weighting Factor (WF):

Grade:   1A   2B   2C  2D,3B  2E   3C  3D,4B  3E   4C  4D,5B  4E   5C    5D   5E
Score:  0.2  0.8  1.2   1.6  2.0  2.4   3.2  4.0  4.8   6.4  8.0  9.6  12.8 16.0

WF 0.00 (no effect to stability):            0    0    0    0    0    0    0    0    0    0    0    0    0    0
WF 0.25 (important to long-term durability): 0.05 0.2  0.3  0.4  0.5  0.6  0.8  1.0  1.2  1.6  2.0  2.4  3.2  4.0
WF 0.50 (important to long-term stability):  0.1  0.4  0.6  0.8  1.0  1.2  1.6  2.0  2.4  3.2  4.0  4.8  6.4  8.0
WF 0.75 (critical to the partial stability): 0.15 0.6  0.9  1.2  1.5  1.8  2.4  3.0  3.6  4.8  6.0  7.2  9.6  12.0
WF 1.00 (critical to the overall stability): 0.2  0.8  1.2  1.6  2.0  2.4  3.2  4.0  4.8  6.4  8.0  9.6  12.8 16.0

(The weighted scores are banded in the original as Good, Fair, Adequate, Poor and Inadequate.)
Upon examination of Table 2 and Table 3, the following assumptions are made that allow the
condition scores to be mapped to a reliability function: (1) it is possible to calibrate the logistic
function in a manner that reflects the likely reliability behaviour for each element of the asset with
respect to its weighted condition score; (2) whilst the weighted condition scores are discrete, they
lie on the smooth logistic function curve, and where they are equal they produce the same
reliability value; and (3) there is a significant range of reliabilities within the
weighted condition scores in the Red ("Inadequate") region, effectively providing a failure
likelihood that ranges from relatively minor to major.
Table 4. Textual definitions and mapping to reliability

Severity (S)  Extent (E)  Weighting Factor (W1)  Weighted Score (P)  Mapped Reliability (R)
Any           Any         0                      0                   ≈1
4             E           1                      8                   0.95
5             E           0.5                    8                   0.95
5             E           0.75                   12                  0.50
5             E           1                      16                  0.05
Interpreting these assumptions into the mathematical relationship for a Logistic Function, the
reliabilities at the two defined points are thus:
I.  R1 = P(w1) = P(16) = 0.05, where w1 is the weighted condition grade and P(w1) is the
    probability of an asset in this state surviving, i.e., 5%.
II. R2 = P(w2) = P(8) = 0.95, i.e., survival probability is 95%.
The resultant mid-score of 12 will map to a Reliability of 0.50, due to the symmetry of the Logistic
function. On the basis of this mapping of severity, extent and weighting factors to reliability the
parameters A and B can be found by rearranging the Logistic Function in the form of (Equ 3) and
(Equ 4).
B = \frac{\ln\left[\dfrac{P(w_2)\,(1 - P(w_1))}{P(w_1)\,(1 - P(w_2))}\right]}{w_2 - w_1} = \frac{\ln\left[\dfrac{R_2\,(1 - R_1)}{R_1\,(1 - R_2)}\right]}{w_2 - w_1}    (Equ 3)

A = \ln\left[\frac{R_1}{1 - R_1}\right] - B \cdot w_1    (Equ 4)
Substituting in for (R1, w1) and (R2, w2), it can be shown that A ≈ 8.8333 and B ≈ -0.7361. The
Logistic Equation is then used to produce the following reliability mapping over the range of
weighted condition scores, Table 5 and Figure 4.
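The calibration in (Equ 3) and (Equ 4) can be reproduced directly from the two anchor points. A minimal sketch:

```python
from math import exp, log

# Anchor points: w = 16 maps to R = 0.05, and w = 8 maps to R = 0.95.
w1, R1 = 16.0, 0.05
w2, R2 = 8.0, 0.95

# Equ 3: shape parameter from the two anchors.
B = log((R2 * (1 - R1)) / (R1 * (1 - R2))) / (w2 - w1)
# Equ 4: location parameter.
A = log(R1 / (1 - R1)) - B * w1

def reliability(w: float) -> float:
    """Equ 2: logistic mapping from weighted condition score to reliability."""
    return 1.0 / (1.0 + exp(-(A + B * w)))

print(f"A = {A:.4f}, B = {B:.4f}")        # A ~ 8.8333, B ~ -0.7361
print(f"R(12) = {reliability(12):.2f}")   # mid-score maps to 0.50
```

By symmetry of the logistic function, the mid-score of 12 recovers a reliability of exactly 0.50, as stated above.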
Table 5. Reliability mapping

Permissible Condition Grade (Severity: 1 - 5; Extent: A - E) and its
Condition Grade Score (Severity x Extent), against the Weighting Factor (WF):

Grade:   1A     2B     2C     2D,3B  2E     3C     3D,4B  3E     4C     4D,5B  4E     5C     5D     5E
Score:   0.2    0.8    1.2    1.6    2.0    2.4    3.2    4.0    4.8    6.4    8.0    9.6    12.8   16.0

WF 0.00: 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
WF 0.25: 0.9999 0.9998 0.9998 0.9998 0.9998 0.9998 0.9997 0.9997 0.9997 0.9995 0.9994 0.9992 0.9985 0.9972
WF 0.50: 0.9998 0.9998 0.9998 0.9997 0.9997 0.9997 0.9995 0.9994 0.9992 0.9985 0.9972 0.9950 0.9841 0.9500
WF 0.75: 0.9998 0.9998 0.9997 0.9997 0.9996 0.9995 0.9992 0.9987 0.9979 0.9950 0.9881 0.9716 0.8540 0.5000
WF 1.00: 0.9998 0.9997 0.9997 0.9995 0.9994 0.9992 0.9985 0.9972 0.9950 0.9841 0.9500 0.8540 0.3569 0.0500
Figure 4. Condition to reliability mapping graph
Whilst the mapping is founded on a qualitative interpretation of the weighted condition score for
each component mapped to a reliability score, it is considered that this approach provides an
appropriate means for high-level assessment of the asset’s viability in a fair and balanced manner.
The product of the reliability scores for individual components is then used to provide an overall
reliability score for the entire asset.
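This component-to-asset roll-up is a simple series-system product, i.e., the asset is assumed to survive only if every component survives. A sketch with hypothetical component names and reliability values:

```python
from math import prod

# Hypothetical component reliabilities for an illustrative asset, each
# derived from its weighted condition score via the logistic mapping.
component_reliability = {
    "roof": 0.9995,
    "walls": 0.9950,
    "columns": 0.9972,
    "floor": 0.9841,
}

# Series assumption: overall reliability is the product over components.
asset_reliability = prod(component_reliability.values())
print(f"Asset reliability = {asset_reliability:.4f}")
```

Note that under this series assumption the asset reliability is always lower than that of its weakest component.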
Consequence models
If risk can be expressed in a clear and quantitative manner, the job of decision making can be
informed by the true cost(s) and benefit(s) associated with the decision (Kaplan and Garrick, 1981).
An essential part of quantifying risk is the accurate capture of the consequence costs that are
associated with the failure of an asset. The consequence analysis in this study is undertaken as an
economic assessment which captures the monetary value associated with the direct and indirect cost
of failure. The term "direct costs" refers to the costs incurred to restore the functionality of the asset,
whilst “indirect costs” account for the additional costs that may arise from its failure, e.g.,
infrastructure damage or fines (Ugarelli et al., 2010). Interruptions to customer supplies and other
non-compliant performance measures are also captured within the indirect costs in the form of
penalty functions (Engelhardt et al., 2002).
The authors have used a desk-based consequence analysis to accurately capture these direct and
indirect costs arising from local asset failure. A Geographical Information System (GIS) is used to
help determine the different costs that are incurred by the failure of a section of linear asset at any
location along its length. For example, the failure of a siphon beneath a railway would likely incur a
higher associated consequence cost than if the same level of failure occurred within a field. These
localised indirect costs are captured via the use of automated geospatial queries which identify the
proximity of other critical infrastructure assets to each linear segment of the overall asset. The direct
consequence costs are captured using a repair feasibility assessment to identify the site specific
costs applied to each of the segments. This process allows for conventional repair costs to be
inflated where known site conditions may cause the restoration to be problematic, i.e., where access
conditions may be troublesome, or, the asset is constructed in poor ground (Martin, 2005).
Obtaining individual site-specific costs for long-spanning linear assets is an unrealistic process.
Therefore, a three-phase methodology is implemented.
Phase one is used to establish a restoration activity matrix comprised of the required activities to
restore the asset (matrix rows), i.e., mobilisation, access, site work, repairs and restoration. The
columns of the matrix define the varying degree of severity depending on the nature and extent of
work required at specific locations. The second phase of work uses the severity ratings held within
the restoration activity matrix to manually score each segment of the linear asset depending upon
the localised conditions at each segment. For example, considering the restoration of a buried
conduit the restoration activity difficulty would range from 1 to 5; where (1) would signify a
shallow conduit in reasonable ground conditions and (5) would classify a deep conduit in poor
ground. This phase of work is again a GIS desk-based exercise to encapsulate the knowledge and
experience of the asset operators. Each segment is visualised in GIS and overlaid with various
data sets, i.e., background mapping, digital terrain data, aerial photography and geological data, to
provide a full understanding of localised obstacles that impact on the cost and/or time of restoration.
Figure 5. GIS linear asset restoration visualisation
The third phase of work populates each valid work item combination in the activity severity matrix
with the cost and time information associated with the level of restoration activity. This ensures that
due consideration is given to the additional activities that would be required for the restoration of
assets of more difficult severities, where a score of (1) typically denotes the base level cost and time
incurred for the minimum level of difficulty. In the previous example, the additional costs would be
captured for a restoration severity (5) to account for the increased depth and poor ground conditions
that present the need for additional enabling works prior to repair of the asset.
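The populated activity severity matrix can be sketched as a simple lookup keyed by activity and severity; all cost and duration figures below are hypothetical placeholders, not values from the study:

```python
# Sketch of phase three: cost and time for each valid (activity, severity)
# combination in the restoration activity matrix. Figures are illustrative.
activity_matrix = {
    # (activity, severity): (cost_gbp, duration_days)
    ("mobilisation", 1): (5_000, 2),
    ("mobilisation", 5): (25_000, 10),
    ("access", 1): (2_000, 1),
    ("access", 5): (40_000, 15),   # e.g. deep shaft in poor ground
    ("repair", 1): (10_000, 5),
    ("repair", 5): (120_000, 40),
}

def restoration_cost(segment_scores: dict[str, int]) -> tuple[int, int]:
    """Sum cost and time over the scored restoration activities for a segment."""
    costs, days = zip(*(activity_matrix[(a, s)] for a, s in segment_scores.items()))
    return sum(costs), sum(days)

# A shallow conduit in reasonable ground vs. a deep conduit in poor ground:
print(restoration_cost({"mobilisation": 1, "access": 1, "repair": 1}))
print(restoration_cost({"mobilisation": 5, "access": 5, "repair": 5}))
```

The lookup reflects the principle described above: a severity (1) segment attracts the base level cost and time, while a severity (5) segment carries the additional enabling works.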
Risk
The risk of asset failure is derived as a combination of the reliability and consequence analysis
performed at asset element level. The element level is defined depending on the nature of the
structure being analysed. For linear assets such as conduits, tunnels and siphon pipes, the asset
element adopts the practical construction length that constitutes the overall asset, i.e., individual
pipe segments or concrete pour lengths. In the analysis of more discrete structures (service
reservoirs, pipe bridges or valve houses), the entire structure is considered. The exception to the rule
occurs for masonry aqueducts, where the structure is sub-divided into its main constituent parts for
each span; the supports, the deck and the water carrying element. For each asset element, a
mathematical analysis is performed for all possible failure states, i.e., single and/or multiple element
failures, using a combinatorial reliability analysis which accounts for the associated variable
consequence costs.
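At its simplest, the element-level combination of reliability and consequence can be sketched as an expected-cost calculation; the segment identifiers, reliabilities and costs below are illustrative assumptions, not figures from the study:

```python
# Sketch of segment-level risk: probability of failure (1 - R) combined with
# the direct and indirect consequence costs. All values are illustrative.
segments = [
    # (segment id, reliability R, direct cost, indirect cost)
    ("CH0+000", 0.9997, 80_000, 20_000),    # open field
    ("CH0+050", 0.9950, 120_000, 900_000),  # beneath railway
    ("CH0+100", 0.9841, 90_000, 50_000),
]

def segment_risk(reliability: float, direct: float, indirect: float) -> float:
    """Risk = probability of failure (1 - R) x total consequence cost."""
    return (1.0 - reliability) * (direct + indirect)

total_risk = sum(segment_risk(r, d, i) for _, r, d, i in segments)
for sid, r, d, i in segments:
    print(f"{sid}: risk = {segment_risk(r, d, i):,.0f}")
print(f"Total risk exposure: {total_risk:,.0f}")
```

The full combinatorial analysis described above additionally enumerates multiple-element failure states rather than treating segments independently.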
Intervention Options and scenarios
Potential intervention options have been defined in the Interventions Module, which identifies a
variety of activities ranging from routine maintenance to special investigations and capital works.
For each intervention, a set of application trigger(s) have also been assumed, together with the unit
cost(s) and production rate(s). A risk reduction factor has also been assumed to represent the
improvement in risk exposure which might be expected from each intervention type. The
intervention triggers are based around a number of standard parameters and threshold values, e.g.,
condition score exceeds (x), or reliability is less than (y). The values for these thresholds have
been developed initially using an iterative process, which has the objective of matching values to
reasonable intervention levels based on engineering judgement. Additional rules have been included
for some interventions, such that they are either mutually exclusive or mutually inclusive with other
activities, to reflect logistical arrangements that would reasonably be adopted in practice. For
example, core sampling in tunnels and conduits could only take place if condition inspections are
also being undertaken.
Whilst the Interventions Module has been used to capture these details, the analysis of interventions
has been performed in the QRA framework. This brings together the outputs from each of the
reliability and consequence models and associates these with the intervention triggers at a segment
level, in order to calculate the corresponding intervention quantities and costs. The model also
calculates the degree of risk reduction associated with each intervention, enabling the cost vs.
benefit to be identified. The model has been structured so that intervention thresholds can be
adjusted, allowing the balance between different options to be modified and/or the total quantum to
be changed to suit overall investment limitations.
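The trigger and mutual-inclusion logic described above can be sketched as follows; the intervention names, threshold values and risk reduction factors are hypothetical illustrations, not the study's calibrated values:

```python
# Sketch of intervention triggering. Thresholds and risk reduction factors
# are illustrative assumptions only.
INTERVENTIONS = [
    # (name, trigger predicate, risk reduction factor)
    ("condition inspection", lambda s: s["condition_score"] > 4.0, 0.05),
    ("core sampling",        lambda s: s["reliability"] < 0.99, 0.10),
    ("capital works",        lambda s: s["reliability"] < 0.95, 0.60),
]

def plan_interventions(segment: dict) -> list[str]:
    """Return triggered interventions, enforcing that core sampling only
    takes place alongside a condition inspection (mutual inclusion)."""
    triggered = [name for name, trigger, _ in INTERVENTIONS if trigger(segment)]
    if "core sampling" in triggered and "condition inspection" not in triggered:
        triggered.append("condition inspection")
    return triggered

print(plan_interventions({"condition_score": 3.2, "reliability": 0.985}))
```

Adjusting the thresholds shifts the balance between intervention options, mirroring how the model allows the total quantum to be tuned to investment limitations.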
Conclusion
A number of tools have been produced to support a fully integrated approach to the development of
asset management plans for water infrastructure assets. The approach is founded on a Quantified
Risk Analysis methodology which uses mathematical and statistical modelling to calculate the
probability of failure for each asset, considering a set of principal failure modes and the ensuing
consequences. This enhanced understanding of asset risk has been successfully used to define
optimised operational and maintenance regimes by a number of UK water utility companies striving
to find new and innovative ways to better understand risk and the measures they can take to
mitigate these risks.
REFERENCES
Block, P., Ciblac, T., Ochsendorf, J., 2006. Real-time limit analysis of vaulted masonry buildings.
Computers & Structures 84, 1841-1852.
CSS Bridges Group, 2002. Bridge Inspection Reporting: Guidance Note on Evaluation of Bridge
Condition Indicators, Volume 2.
CSS Bridges Group, 2004. Bridge Inspection Reporting: Guidance Note on Evaluation of Bridge
Condition Indicators Addendum t.
Davis, P., Burn, S., Moglia, M., Gould, S., 2007. A physical probabilistic model to predict failure
rates in buried PVC pipelines. Reliability Engineering & System Safety 92, 1258-1266.
Davis, P., Marlow, D., 2008. Asset Management: Quantifying Economic Lifetime of Large
Diameter Pipelines. Journal of AWWA 100, 110-119.
Egerton, A., 1996. Achieving reliable and cost effective water treatment. Water Science and
Technology 33, 143-149.
Engelhardt, M., Skipworth, P., Savic, D.A., Cashman, A., Walters, G.A., Saul, A.J., 2002.
Determining maintenance requirements of a water distribution network using whole life
costing. Journal of Quality in Maintenance Engineering 8, 152-164.
Fagan, M.J., 1992. Finite Element Analysis: Theory and Practice. Prentice Hall, Harlow.
Harvey, B., 2001. Thrust line analysis of complex masonry structures using spreadsheets, in:
Historical Constructions. pp. 521-528.
Jarrett, R., Hussain, O., Veevers, A., van der Touw, J., 2001. A review of asset management models
available for water pipeline networks, in: International Conference of Maintenance Societies.
Melbourne.
Kaplan, S., Garrick, B.J., 1981. On The Quantitative Definition of Risk. Risk Analysis 1, 11-27.
Kleiner, Y., Rajani, B., Sadiq, R., 2006. Modelling deterioration and managing failure risk of buried
critical infrastructure, in: Sustainable Infrastructure Techniques. pp. 1-13.
Martin, T., 2005. Modelling System Leverages GIS to Assess Critical Assets. WaterWorld 21.
Moglia, M., Davis, P., Burn, S., 2008. Strong exploration of a cast iron pipe failure model.
Reliability Engineering & System Safety 93, 885-896.
Pollard, S.J.T., Strutt, J.E., MacGillivray, B.H., Hamilton, P.D., Hrudey, S.E., 2004. Risk analysis
and management in the water utility sector. Process Safety and Environmental Protection
82, 453-462.
Sadiq, R., Rajani, B., Kleiner, Y., 2004. Probabilistic risk analysis of corrosion associated failures
in cast iron water mains. Reliability Engineering & System Safety 86, 1-10.
Shreir, L.L., Jarman, R.A., Burstein, G.T., 1994. Corrosion, Vol. 1: Metal/Environment Reactions.
Butterworth-Heinemann, Oxford.
UKWIR, 2002. Capital Maintenance Planning: A Common Framework (02/RG/05/3).
UKWIR, 2011. Deterioration rates of long-life, low probability of failure assets: project report.
Ugarelli, R., Venkatesh, G., Brattebø, H., Federico, V.D., Sægrov, S., 2010. Asset Management for
Urban Wastewater Pipeline Networks. Journal of Infrastructure Systems 16, 112-121.