
THE BENEFITS AND LIMITATIONS
OF DOWNSCALING LARGE-SCALE
CATASTROPHE MODELS
Dr. Laurent Marescot (Director, Model Product Management)
Dr. Stephen Cusack (Principal Modeller, Model Development)
Dr. Navin Peiris (Director, Model Development)
Symposion Naturgefahrenmodellierung am Beispiel Österreich - State of the Art und Erfahrungen aus der Praxis
Österreichische Gesellschaft für Versicherungsfachwissen, Graz (Austria), April 12, 2013
SCOPE OF THE PRESENTATION

Understand... the challenges of downscaling large-scale catastrophe models and the limitations imposed by available data and technologies

Explore... how suitable solutions are implemented in catastrophe models, using the Europe windstorm model as an example

Investigate... how to develop more resilient catastrophe risk management strategies with respect to model assumptions
INTRODUCTION
– MODELS OF INSURED LOSSES
– USE OF MODEL RESULTS
– THE IMPORTANT SPATIAL SCALES OF INTEREST

MODELING EUROPE WINDSTORM AS AN EXAMPLE
Workflow: Define Events → Specify Hazard at the Risk Location → Calculate Damage → Quantify Loss (€)

Hazard Module
► Hazard definition: 3-second peak gust and event rates

Vulnerability Module
► Transforms the hazard into a Mean Damage Ratio

Financial Module
► Transforms the Mean Damage Ratio into monetary loss

Model outputs:
► Annual average loss → used to set the premium
► EP curve / tail risk → re-insurance coverage and Solvency II (SII) considerations

(A toy sketch of this module chain follows below.)
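To make the module chain concrete, here is a minimal, self-contained Python sketch. The vulnerability curve, policy terms and all numbers are invented for illustration; this is not the RMS Europe windstorm model.

    # Minimal sketch of the hazard -> vulnerability -> financial chain.
    # All functions and numbers are illustrative placeholders.

    def vulnerability(gust_ms: float) -> float:
        """Toy vulnerability curve: 3-second peak gust (m/s) -> Mean Damage Ratio."""
        if gust_ms < 20.0:
            return 0.0
        return min(1.0, ((gust_ms - 20.0) / 40.0) ** 2)  # saturates at MDR = 1

    def financial(mdr: float, sum_insured: float,
                  deductible: float = 0.0, limit: float = float("inf")) -> float:
        """Toy financial module: MDR -> monetary loss after simple policy terms."""
        ground_up = mdr * sum_insured
        return max(0.0, min(ground_up - deductible, limit))

    # One event at one risk location: a 38 m/s gust on a EUR 300k building
    loss = financial(vulnerability(38.0), sum_insured=300_000, deductible=1_000)
    print(f"Event loss: EUR {loss:,.0f}")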
USE OF MODEL RESULTS
Model results: both average losses and tail risks (a sketch follows the figure note).

[Figure: exceedance-probability (EP) curve; x-axis: loss, y-axis: annual probability of exceedance. The Return Period Loss (RPL_p) is read off the curve, e.g. a loss of $1M exceeded with an annual probability of 0.02%.]
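A hedged sketch of how an EP curve and return-period losses can be derived from a table of simulated annual losses; the lognormal sample below is a placeholder for real model output, and the 200-year (0.5%) point is shown because it is the Solvency II horizon.

    import numpy as np

    # Build an exceedance-probability (EP) curve from simulated annual losses
    # and read off a return-period loss (RPL). Losses here are random
    # placeholders standing in for a catastrophe model's year loss table.
    rng = np.random.default_rng(0)
    annual_losses = rng.lognormal(mean=12.0, sigma=1.5, size=50_000)  # 50k years

    sorted_losses = np.sort(annual_losses)[::-1]                # largest first
    exceed_prob = np.arange(1, len(sorted_losses) + 1) / len(sorted_losses)

    def rpl(return_period_years: float) -> float:
        """Loss exceeded with annual probability 1/return_period."""
        p = 1.0 / return_period_years
        return float(np.interp(p, exceed_prob, sorted_losses))

    print(f"AAL: {annual_losses.mean():,.0f}")
    print(f"200-year RPL (0.5% EP, the Solvency II horizon): {rpl(200):,.0f}")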
SPATIAL SCALES OF INTEREST
Spatial scale of interest: down to the individual risk

Individual risks: damage varies between neighbours
► small-scale turbulence
► street geometry
► local topography (up-slopes)
► proximity of trees
Appropriate for pricing premiums

Portfolio results
► larger scale, more stable
Appropriate to determine capital requirements and re-insurance costs

Pictures: N. Peiris, RMS
WHAT ARE THE SPATIAL SCALES OF AVAILABLE DATA?

HAZARD INFORMATION
About 40 years of data from anemometers with 50-100 km spacing

Measurements of small-scale variability of peak gusts (<100 m):
► in-depth studies of surface roughness
► topographic impacts less well developed
► small-scale dynamics (e.g. sting jets) less understood
► and no spatially coherent climate data

The data are not sufficient to specify hazard
► We need 100's of years of very high-resolution data

[Figure: reconstruction of windstorm Daria (1990) with stations included; wind speed shown from low to high]
LOSS INFORMATION
Market claims:
► loss: detailed (postal code, ~1 km) or aggregated (per event)
► exposure: usually postal code (~1 km)
► usually recent (<20 yrs)
► variability depending on quality, quantity, age and geography
► available only up to a certain wind speed
► split by occupancy only, for Europe windstorm

[Figure: number of storms per year with loss data]
HOW TO BETTER MEET NEEDS FOR HIGH-RES CAT MODELS?

The challenge* is that the data available to build cat models are:
• anemometer data that capture variability at the 100 km scale
• while hazard varies at much smaller scales (<100 m)
• limited in time (hazard <40 yrs, loss <20 yrs)
• incomplete (e.g. loss geography, type of risks or wind speed range)

Desired loss results:
• accurate loss estimation down to the individual risk (postal code scale, ~1-10 km, would be a useful first step)
• for the full range of exposures, wind speeds and regions
• modelling both annual average losses and tail risks

How can high-res cat models be built?

*This slide is valid for Europe windstorm; requirements may differ for other perils.
DOWNSCALING CAT MODELS
(USING MODELS TO INFORM ON THE DATA VOID)
PART 1: HAZARD
HAZARD – GENERATING STOCHASTIC EVENTS

Numerical model: general circulation model (CAM)
► accurate representation of storm dynamics (large-scale spatial structure)

Statistical model: built on reanalysis datasets (~40 yrs of wind station observations)
► effective simulation of storm occurrence (frequency)

Combine: CAM output provides the severity, with calibrated rates (see the sketch below)
► solves the issue of producing rare (long return period) storms with realistic large-scale spatial structures, which are not available from the records
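One plausible reading of the "combine" step, sketched in Python: storm severities come from the numerical-model event set, while statistically calibrated annual rates drive a Poisson occurrence process. The event severities and rates below are invented.

    import numpy as np

    # Numerical-model events supply realistic footprints/severities; the
    # statistically calibrated annual rates govern how often each is drawn.
    rng = np.random.default_rng(1)

    event_severity = np.array([25.0, 32.0, 41.0, 55.0])   # peak gust index per event
    event_rate = np.array([0.8, 0.3, 0.05, 0.005])        # calibrated events/year

    n_years = 10_000
    counts = rng.poisson(lam=event_rate, size=(n_years, len(event_rate)))
    # Years in which the rarest event (severity 55.0) occurs at least once:
    p_rare = (counts[:, -1] > 0).mean()
    print(f"Simulated annual probability of rarest event: {p_rare:.4f} "
          f"(expected ≈ {1 - np.exp(-0.005):.4f})")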
HAZARD – DYNAMICAL DOWNSCALING

CAM is used with 150 km grid-spacing
► resolving scales ≈ 500 km, which can capture cyclones
► but ideally we would like to resolve much smaller scales

Solution:
► dynamically downscale using WRF (Weather Research and Forecasting model) with 50 km grid-spacing
► resolving scales of ≈ 200 km
► close to computational limits
HAZARD – DYNAMICAL DOWNSCALING

[Figure: a snapshot of WRF output, showing a strong storm approaching France]
HAZARD – STATISTICAL DOWNSCALING

From the WRF model:
► 1000's of years of storms
► resolving scales of about 200 km
► outputting 10-minute wind speed

But hazard observations:
► 3-sec peak gusts capture scales of ~100 m in storms

Still a mismatch in scales:
► dynamical models cannot simulate 100 m scales (finite compute resources)
► so the dynamical models will contain biases

We use statistical downscaling, and calibration
HAZARD – STATISTICAL DOWNSCALING

Input: WRF 10-minute wind speed (not 3-second peak gust)

Build a site coefficient model (see the sketch below):
► micro-meteorology studies of the small-scale variability of winds due to surface roughness
► satellite data on surface roughness, at ~100 m scale, to model roughness effects
► roughness (+ other parameters) allows transforming the 10-min wind into 3-sec peak gusts
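For intuition, a textbook-style gust-factor conversion (standard micro-meteorology, not the actual RMS site coefficient model): turbulence intensity grows with surface roughness, so the same 10-minute mean wind yields a larger 3-second peak gust over rougher terrain. The roughness lengths and peak factor below are typical literature values.

    import math

    # In neutral conditions the turbulence intensity over terrain with
    # roughness length z0 is roughly I_u = 1/ln(z/z0), and the 3-second peak
    # gust relates to the 10-minute mean as  gust ≈ mean * (1 + g_p * I_u),
    # with a peak factor g_p of about 3.
    def peak_gust(mean_10min: float, z0: float, z: float = 10.0,
                  g_p: float = 3.0) -> float:
        turb_intensity = 1.0 / math.log(z / z0)
        return mean_10min * (1.0 + g_p * turb_intensity)

    for z0, terrain in [(0.0002, "open sea"), (0.03, "open farmland"), (0.5, "suburban")]:
        print(f"{terrain:14s}: 25 m/s mean -> {peak_gust(25.0, z0):.1f} m/s gust")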
HAZARD – STATISTICAL DOWNSCALING

Input: WRF 10-minute wind speed (not 3-second peak gust)

Steps:
► build site coefficient model
► gather all observations of peak gusts; quality assurance (QA)
► bring wind data to a reference level
(about 40 years of data from anemometers)

[Figures: example of an anomalous time series; example of a consistent time series]
HAZARD – STATISTICAL DOWNSCALING

Input: WRF 10-minute wind speed (not 3-second peak gust)

Steps:
► build site coefficient model
► gather all observations of peak gusts; QA
► bring wind data to a reference level
► fit a statistical model (Extreme Value Theory; see the sketch below)
► smooth the information

Reference winds at the 25-year return period are noisy, mainly due to small sample sizes for extreme storms; e.g. Lothar can be seen around Paris, 87J in SE England. Smoothing compensates for the lack of information on extreme events at any point.

[Figure: map of 25-year return period reference winds; wind speed shown from low to high]
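A minimal sketch of the EVT step, assuming a Gumbel fit to station annual maxima (the slides do not specify the distribution family): estimate the 25-year return level from ~40 years of data.

    import numpy as np
    from scipy import stats

    # Fit a Gumbel distribution to ~40 annual-maximum gusts at one station
    # and estimate the 25-year return level. The synthetic data stand in
    # for quality-assured, reference-level observations.
    rng = np.random.default_rng(2)
    annual_max_gust = stats.gumbel_r.rvs(loc=28.0, scale=4.0, size=40,
                                         random_state=rng)

    loc, scale = stats.gumbel_r.fit(annual_max_gust)
    rp25 = stats.gumbel_r.ppf(1.0 - 1.0 / 25.0, loc=loc, scale=scale)
    print(f"25-year return level: {rp25:.1f} m/s")
    # With only 40 years of data the estimate is noisy, which is why the
    # station-level return levels are smoothed spatially afterwards.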
HAZARD – STATISTICAL DOWNSCALING

Input: WRF 10-minute wind speed (not 3-second peak gust)

Steps:
► build site coefficient model
► gather all observations of peak gusts; QA
► bring wind data to a reference level
► fit a statistical model (Extreme Value Theory)
► smooth the information
► calibration: adjust the hazard to match calibration targets (see the sketch below)
► re-include site coefficients

Site coefficients contain all the information we have on small-scale (<100 km) variability. Calibration solves the issue of producing annual average losses consistent with the 40-year wind history.

[Figure: example of weather station calibration]
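A deliberately simple illustration of the calibration idea: rescale modelled reference-level gusts so the model's 25-year return level matches the smoothed observed target, then re-apply a local site coefficient. All values, and the purely multiplicative form, are assumptions for illustration.

    import numpy as np

    # Scale modelled gusts at a station so the model's 25-year return level
    # matches the (smoothed) observed target, then re-apply the local site
    # coefficient. Values are illustrative only.
    modelled_gusts = np.array([24.0, 31.0, 27.5, 36.0, 29.0])  # reference level
    model_rp25, target_rp25 = 33.0, 36.5                       # m/s, from EVT fits
    site_coefficient = 1.12                  # local roughness/topography factor

    calibrated = modelled_gusts * (target_rp25 / model_rp25) * site_coefficient
    print(np.round(calibrated, 1))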
HAZARD – FINAL FOOTPRINT

Input: WRF 10-minute wind speed
→ statistical downscaling (previous slides)
→ wind speed footprint

Output:
► stochastic storm footprints (3-second peak gusts) for 1000's of years
► fully consistent with 40 years of observation data
► stored (aggregated) at RMS Variable Resolution Grid (VRG) level using site coefficients (1-10 km)
► upscaling!

[Figures: wind speed footprint, shown from low to high; example of a Variable Resolution Grid]
HAZARD DOWNSCALING SUMMARY

Data:
► anemometer data at 100 km scales for the past 40 years
► known variability at smaller scales (~100 m)

Desire:
► ideally at individual risks; realistically postal code (1-10 km)
► annual average loss and tail risk

SOLUTION
► statistical + numerical model (CAM): produces rare (long return period) events (resolved scales ~500 km)
► dynamical downscaling (WRF): resolved scales ~200 km
► statistical downscaling (calibration, EVT, site coefficients, ...): calibration produces annual average losses consistent with 40 years of observation data
► aggregated at the Variable Resolution Grid (VRG): ~1-10 km
DOWNSCALING CAT MODELS
PART 2: VULNERABILITY
VULNERABILITY – DOWNSCALING INFORMATION

Data: claims, with variability depending on quality, quantity, age and geographical representativeness

Desire: vulnerability available for the full range of exposures, wind speeds and regions

Solution:
► use relativities (see the sketch below)
► informed by design wind maps, but...
...these target engineered constructions (usually not residential)
...many buildings predate the emergence of the codes
► account for the impact of valuation methodologies, claim frequency, local tax laws, materials costs and labor costs

[Figure: Eurocode 1 design wind map (50-year return period mean wind speed, 2007)]
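A small sketch of how relativities might work: one base vulnerability curve per occupancy, scaled by a regional factor. The curve shape and the factors are invented for illustration and would in practice be informed by design wind maps and claims data.

    # One base vulnerability curve, scaled up or down by region.
    def base_mdr(gust_ms: float) -> float:
        return min(1.0, max(0.0, (gust_ms - 22.0) / 45.0) ** 2)

    REGION_RELATIVITY = {"coastal_exposed": 0.8,   # built to higher design winds
                         "inland_sheltered": 1.3}  # weaker stock, lower design winds

    def regional_mdr(gust_ms: float, region: str) -> float:
        return min(1.0, base_mdr(gust_ms) * REGION_RELATIVITY[region])

    print(regional_mdr(40.0, "coastal_exposed"),
          regional_mdr(40.0, "inland_sheltered"))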
VULNERABILITY – DOWNSCALING REGIONS

Evidence for creating different vulnerability regions:
► topography
► climatology
► design wind maps
► special regulations covering Alpine Austria and Switzerland
► evidence from claims data
VULNERABILITY – DOWNSCALING OTHER ATTRIBUTES

Market loss data are available per occupancy only, yet other attributes are available in exposure data:
► construction type
► year built
► number of stories

Solution: inventory database disaggregation* (see the sketch below)

[Figures: adjusted modelled function matching observed loss data; inventory mix by land-use class (urban, high-intensity suburban, low-intensity suburban, rural)]

* Khanduri, A. C. and Morrow, G. C., 2003: Vulnerability of buildings to windstorms and insurance loss estimation. Journal of Wind Engineering and Industrial Aerodynamics, 91, 455-467.
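A sketch in the spirit of the Khanduri and Morrow (2003) approach: a damage ratio observed per occupancy only is split across construction classes using the regional building inventory mix and assumed vulnerability relativities. All numbers are illustrative.

    # Split an occupancy-level MDR across construction classes so that the
    # inventory-weighted average reproduces the observed value.
    inventory_share = {"masonry": 0.55, "timber": 0.30, "concrete": 0.15}
    relative_vuln = {"masonry": 1.0, "timber": 1.4, "concrete": 0.6}  # assumed

    observed_occupancy_mdr = 0.05   # residential MDR from market loss data

    weighted_rel = sum(inventory_share[c] * relative_vuln[c]
                       for c in inventory_share)
    base = observed_occupancy_mdr / weighted_rel
    per_construction_mdr = {c: base * relative_vuln[c] for c in inventory_share}
    print(per_construction_mdr)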
DOWNSCALING CAT MODELS
PART 3: LOSS EXPERIENCE

COMPARING OBSERVED AND MODELLED LOSSES
• A perfect match is often not achieved
• Critical points for comparison:
  o exposure coding
  o under-reporting
  o loss trending
  o model vs. user assumptions

[Figure: scatter plot of modelled losses against observed losses]

Despite all efforts to increase model granularity...
« One size does not always fit all »
DOWNSCALING PORTFOLIO INFORMATION – MODEL ADJUSTMENT

Adjust modelled event frequencies to the portfolio's own loss experience, e.g. 24 years of claims (a toy version follows below):
► e.g. Lothar = 1/24, Martin = 2/24, ... (return period = 1/frequency)
► an open platform makes such user adjustments possible

[Figure: adjustment example for illustration, showing the impact on the Average Annual Loss]
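A toy version of the adjustment shown on the slide: empirical event rates from 24 years of experience (rate = count/24, return period = 1/rate), combined with per-event portfolio losses (invented here) to recompute the average annual loss.

    # Replace modelled event rates with empirical rates from 24 years of own
    # loss experience and recompute the average annual loss (AAL).
    YEARS_OF_EXPERIENCE = 24
    observed_counts = {"Lothar": 1, "Martin": 2}    # as on the slide
    event_loss = {"Lothar": 80e6, "Martin": 35e6}   # illustrative portfolio losses

    adjusted_rate = {e: n / YEARS_OF_EXPERIENCE for e, n in observed_counts.items()}
    aal = sum(adjusted_rate[e] * event_loss[e] for e in event_loss)
    print({e: f"RP {YEARS_OF_EXPERIENCE / n:.0f} yrs"
           for e, n in observed_counts.items()})
    print(f"Adjusted AAL: EUR {aal / 1e6:.1f}M")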
POINTS TO TAKE HOME
• Data used to build cat models have variability at very different space and time scales
• It is possible to define strategies to downscale information and benefit from cat models at the scale of interest:
→ ideally losses for individual risks, at short and long return periods
• The main limitations for modelling are lack of knowledge or data, and computing capabilities
• One size does not always fit all: transparency and open modelling are a step in the direction of resilient cat risk management