
PhD Transfer Report
Prediction of Global Earthquake
Occurrence Rates for Estimation of
Seismic Hazard
Conor McKernon
University of Edinburgh
March 2003
Supervisors: Prof. Ian Main, Prof. Roy Thompson
Advisor: Prof. Kathy Whaler
ABSTRACT
While the deterministic prediction of individual earthquakes appears to be an unrealistic goal at present, forecasting is increasingly becoming a useful tool in the analysis of short-term seismic hazard. It has long been established that earthquakes are largely clustered on plate boundaries, but there remains much debate over earthquake recurrence times. For a given region of high seismicity, where the likelihood of events at some point in the future is not in question, it may be more useful to produce a quantitative measure of the probability of an earthquake occurring within a specified time period. There are several methods, both time-dependent and time-independent, that can be applied to increase our ability to accurately assess seismic hazard.
In the absence of a strong physical understanding of earthquake processes, it is
necessary to develop empirical and statistical techniques. Frequency-magnitude
graphs clearly show the relationship between numbers of small and large earthquakes,
and Omori’s law describes the relationship of the number of aftershocks with time
after a mainshock. One of the aims of this project is to develop methods to describe
the spatial and temporal distributions of seismicity.
Earthquake-earthquake triggering is well documented, and as such is an important factor that must be considered in time-dependent seismic hazard analysis. Triggering affects both short-term seismic hazard assessments, in the immediate aftermath of a mainshock, and also longer-term hazard, as recent studies are showing connections between earthquakes ~10 years apart (Felzer, 2002). However, it can be difficult to separate this component from the background random (Poisson or time-independent) process, especially for small databases.
Here I describe a preliminary analysis of this effect in the eastern Mediterranean by looking at the short-term spatial distribution of seismicity. Statistical analysis of the results has shown that triggering is restricted to distances of less than 150 km, with correlation lengths of 5-15 km. Further work will be carried out using the same method to look at triggering in a range of tectonic settings in a comparative study.
However, given that the major earthquake catalogues cover periods of less than 40 years, larger events are likely to be under-reported. This leads to an under-estimation of recurrence times, and hence of long-term seismic hazard. One possible method to overcome this problem would be to examine seismic and tectonic moments. Tectonic forcing has been seen to be very constant over long time periods. Seismic moment tensors describe the amount of energy released in an earthquake, along with the source type and orientation. Comparing tectonic moments and summed seismic moments in plate boundary regions should give an indication of seismic efficiency. Previous regional analyses will be continued, looking at a wider range of tectonic settings.
Using both earthquake triggering and a comparison of seismic and tectonic moments, this project should lead to an improved estimation of short- and long-term seismic hazard for these areas.
TABLE OF CONTENTS
ABSTRACT .....................................................................................................1
TABLE OF CONTENTS ..................................................................................2
TABLE OF FIGURES ......................................................................................3
1 INTRODUCTION ......................................................................................4
2 EARTHQUAKE SEISMOLOGY ...............................................................8
3 SEISMOTECTONICS .............................................................................16
4 SEISMIC HAZARD AND RISK...............................................................20
5 PRELIMINARY RESULTS .....................................................................23
6 SUMMARY ..............................................................................................29
7 PLANS FOR NEXT 6 AND 18 MONTHS ...............................................30
8 REFERENCES .......................................................................................32
9 APPENDICES ........................................................................................36
TABLE OF FIGURES
Figure 1.1 - Comparison of absolute plate velocities between NNR-NUVEL-1 (based on 3 Ma of palaeomagnetic data) and DORIS (from ~10 years of satellite geodetic methods) (after Cretaux et al., 1998).....4
Figure 1.2 - Frequency-moment plot for shallow CMT earthquakes (Source: http://www.seismology.harvard.edu).....5
Figure 1.3 - Comparisons of frequency, magnitude and energy release of earthquakes and other high-energy phenomena (taken from http://www.iris.edu).....7
Figure 2.1 - Spatial distribution of seismicity (from CMT catalogue, 1977-2002).....9
Figure 2.2 - The nine different force couples that constitute the moment tensor (after Aki & Richards, 1980).....11
Figure 2.3 - Coulomb stress change around a dextral strike-slip fault (Source: http://www.usgs.gov).....13
Figure 2.4 - Flinn-Engdahl seismic regions.....15
Figure 3.1 - Cumulative seismic moment release in the CMT catalogue, 1977-2000.....19
Figure 4.1 - Basic steps of probabilistic seismic hazard analysis (after Reiter, 1990).....21
Figure 4.2 - Probabilistic seismic hazard map of the USA (Source: http://geohazards.cr.usgs.gov/eq).....22
Figure 4.3 - Seismic hazard map of Europe (Source: http://www.unesco.org/).....22
Figure 5.1 - Circular regions and events used in triggering analysis.....23
Figure 5.2 - Probability density function of random points on the surface of a sphere (after Lomnitz, 1995).....24
Figure 5.3 - Histogram of number of pairs of events (raw and surface-area corrected).....25
Figure 5.4 - Histograms of pair separation distances for earthquakes within 2500 km of Gulf of Corinth, within 30 days of triggering event.....26
Figure 5.5 - Evolution of mean triggering distance as a function of time and area.....27
Figure 5.6 - Evolution of correlation length as a function of time and area.....27
Figure 7.1 - Seismicity in the Gulf of Corinth as recorded by the CRL Project (source: http://www.corinth-rift-lab.org/index_en.html).....31
1 INTRODUCTION
The ability to predetermine aspects of individual earthquakes remains questionable at best. Most of our knowledge of earthquake seismology is based on observational and empirical methods. A fundamental problem arising from our reliance on empirical techniques is the amount of data available. To undertake any reliable study, data must be consistent, especially in seismology, where raw seismograms must undergo a significant amount of processing to produce solutions for hypocentres and source types and orientations. Unfortunately, homogeneous earthquake catalogues, where consistent processing has been used, span a period of time significantly less than recurrence times for large earthquakes. As it is these large earthquakes that cause the most damage and present the most risk to human life, it is desirable to be able to constrain how often they occur. Having a better understanding of recurrence times for large earthquakes will allow long-term seismic hazard to be assessed more accurately, reducing the risk to human life and infrastructure.
Figure 1.1 - Comparison of absolute plate velocities between NNR-NUVEL-1 (based on 3 Ma of palaeomagnetic data) and DORIS (from ~10 years of satellite geodetic methods) (after Cretaux et al., 1998). [Scatter plot of DORIS absolute rates against NUVEL-1 absolute rates, both in mm/yr, over the range -100 to 100.]
Comparison of palaeomagnetic data covering several million years shows a strong correlation with geodetic data covering the last 10 years (Cretaux et al., 1998), as shown in Figure 1.1. Thus the tectonic forcing which causes most earthquakes is a remarkably steady process. This should help to place constraints on the background hazard rate from the largest events, which dominate the seismic hazard and energy release. When stress is released by an earthquake, the event size is best quantified by the seismic moment tensor, which describes the seismic moment released in Newton-metres, along with the source type and orientation.
Figure 1.2 - Frequency-moment plot for shallow CMT earthquakes (Source: http://
www.seismology.harvard.edu)
Figure 1.2 shows a histogram plotting the frequency-scalar moment relation for shallow earthquakes (depth < 70 km) from the Harvard Centroid Moment Tensor (CMT) catalogue. The relationship is given in Equation 1.1a,

log N = A - B log M0    (1.1a)

or equivalently

log N = a - bM    (1.1b)

where N is the number of events in a magnitude increment, M is the magnitude, M0 is the scalar moment, and a and b (or A and B) are constants dependent on the seismicity of the area being studied. Equation 1.1b is the generalised form of the Gutenberg-Richter law.
The scalar moment is related to the equivalent moment magnitude by

log10 M0 = 16.1 + 1.5 MW    (1.2)

so if b ≈ 1, then B ≈ 2/3, as seen in Figure 1.2. Earthquakes of a given size can be said to occur over a characteristic average recurrence time (Molnar, 1979), where small earthquakes have a short recurrence period, and large earthquakes have a much longer recurrence time. As small events occur often, we have enough events to calculate an accurate recurrence time for them. The upper end of the frequency-magnitude relationship does not always accurately define a maximum earthquake, and modified frequency-magnitude laws can be more appropriate. Modifying the power-law with a gamma distribution can lead to a better description when the sampling period is not long enough to record the largest earthquakes.
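The counting behind Equation 1.1b can be sketched in a few lines of code. The magnitudes below are hypothetical values chosen purely for illustration, not taken from any catalogue; the sketch simply tabulates the cumulative counts N(≥ M), whose logarithms should fall on the straight line log N = a - bM.

```python
def cumulative_counts(magnitudes, thresholds):
    """Number of events with magnitude >= each threshold, i.e. the
    cumulative counts N used in the Gutenberg-Richter law (Eq. 1.1b)."""
    return [sum(1 for m in magnitudes if m >= t) for t in thresholds]

# Hypothetical magnitudes, for illustration only
mags = [5.5, 5.6, 5.6, 5.8, 6.0, 6.1, 6.3, 6.7, 7.1]
thresholds = [5.5, 6.0, 6.5, 7.0]
counts = cumulative_counts(mags, thresholds)  # [9, 5, 2, 1]
```

In practice the fitted a and b would come from regressing log10 of these counts against the thresholds, over the magnitude range where the catalogue is complete.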
Different tectonic regions will also have differing rates of strain build-up, governed by the relative motions of neighbouring plates. Hence the tectonic forcing for these regions will differ, as will the characteristic recurrence times. For the largest events, we cannot even be sure that they have been observed in each of these regions within the period of recording individual events by historical or instrumental methods.
As the lithosphere has a finite maximum strength, in the extreme case where all of the accumulated tectonic strain is released by one large event, the recurrence time will be maximised, as will the magnitude. This allows a conservative bound to be placed on magnitude and recurrence time. However, while large events can rupture faults over a long distance, research has shown (Schwartz, 1999) that large events can recur over relatively short timescales on subduction zones (where the largest earthquakes are often observed). Faults can re-rupture in areas of slip deficit from previous large events.
The problem is that while large earthquakes have occurred during the instrumental period of about the last 100 years, to start to determine a recurrence time we need to observe at least two, and preferably more, large events to provide any statistical strength to predictions. While palaeoseismology can provide information about large earthquakes that occurred before instrumental records began, it is dependent on finding geological evidence, which cannot always be dated accurately enough for these purposes.
Figure 1.3 shows a comparison between frequency (corresponding to the width of the shaded area), magnitude and released energy for earthquakes and a range of other high-energy phenomena. It shows that the energy released by the largest recorded earthquakes (Chile in 1960, with Mw = 9.5, and Alaska in 1964, with Mw = 9.1) exceeds that of the largest nuclear explosions by nearly an order of magnitude. This neatly summarises the fact that the largest, though rare, earthquakes release the majority of seismic energy. The Mw 9.5 1960 Chile earthquake released about 10^19 J of energy, with the total global annual human energy consumption being near 3 × 10^21 J, and the average annual energy release from seismicity being 2 × 10^17 J (Stein & Wysession, 2003).
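These energy figures can be reproduced approximately from the Gutenberg-Richter energy-magnitude relation, log10 E = 1.5M + 4.8 (E in joules). That relation was originally derived for surface-wave magnitude and is only an approximation when used with Mw; it is not used elsewhere in this report and is included here purely as a consistency check on the quoted numbers.

```python
def seismic_energy_joules(m):
    """Gutenberg-Richter energy-magnitude relation: log10 E = 1.5*M + 4.8,
    giving the radiated seismic energy E in joules."""
    return 10.0 ** (1.5 * m + 4.8)

# The Mw 9.5 1960 Chile event: ~1e19 J, matching the figure quoted above
chile_energy = seismic_energy_joules(9.5)
```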
Dealing with smaller events increases the amount of data available, and so also increases the statistical validity of any empirical relationships observed. Omori's law (Omori, 1894) states that the rate of aftershocks triggered by a mainshock decays with time by an inverse power-law. However, recent studies (Parsons, 2002 and Felzer, 2003) suggest that primary aftershocks (those triggered directly by a mainshock) can in turn trigger their own sequences of secondary aftershocks. This means that previous methods of assessing seismic hazard determined solely on the location of the mainshock and the expected distribution of primary aftershocks may need to be re-examined.
Figure 1.3 - Comparisons of frequency, magnitude and energy release of earthquakes and
other high-energy phenomena (taken from http://www.iris.edu)
The main aims of this project are:
1) To improve estimates of long-term hazard by combining tectonic and seismic information (Figures 1.1 and 1.2), and
2) To improve estimates of short-term hazard by quantifying the probability of large triggered events.
In the latter we distinguish between traditional ‘aftershocks’ that are much smaller
than the mainshock, and ‘triggered’ events that are comparable in magnitude.
2 EARTHQUAKE SEISMOLOGY
2.1 EARTHQUAKE DISTRIBUTIONS
2.1.1 SPATIAL DISTRIBUTIONS
Since the introduction of standardised global seismometer networks (WWSSN) during the 1960s, the quality of earthquake locations has greatly improved, with more low-magnitude earthquakes being located. This has helped us to achieve a better understanding of how earthquakes are distributed around the world, which in turn has increased our understanding of plate tectonics. As can be seen in Figure 2.1, earthquakes are not distributed randomly around the world, but rather tend to cluster near well-defined plate boundaries and fault zones. Looking at the distribution of depths also allows subduction zones to be distinguished from spreading centres and conservative plate boundaries. Deeper earthquakes can clearly be seen around the Pacific rim, delineating subduction zones. These areas may be of special interest at a later stage in the project, as subduction zones have been observed to generate the largest earthquakes, with the two largest directly observed events (1960 Chile Mw = 9.5, and 1964 Alaska Mw = 9.1) occurring in subduction zones.
On a smaller scale, clustering of aftershocks is generally observed within a distance of one fault rupture length from the mainshock epicentre. However, the larger than previously thought number of secondary aftershocks (aftershocks triggered by direct aftershocks of the mainshock) means this may not necessarily always be true (Felzer, 2003).
2.1.2 TEMPORAL DISTRIBUTIONS
Temporal earthquake clustering occurs over two time-scales, the short-term time-scale
of foreshocks and aftershocks, and the long-term time scale of mainshock occurrence
(Kagan and Jackson, 1991). Short term clustering is the stronger of the two, especially
for small and medium sized earthquakes.
If we are to establish a time-dependent component to earthquake hazard, we first have to reject the null hypothesis that mainshocks are random. By definition, foreshocks and aftershocks are causally related to a mainshock, but these terms are only defined in retrospect (the mainshock being the largest-magnitude event in a sequence), and foreshock sequences occur only rarely.
Omori's law (Omori, 1894), in the modified empirical form derived in the 1930s, describes aftershock activity,

n = C / (K + t)^P    (2.1)
Figure 2.1 - Spatial distribution of seismicity (from CMT catalogue, 1977-2002)
where n is the frequency of aftershocks at time t after the mainshock, and K, C and P are fault-dependent constants, with P generally close to 1, although it can vary between 0.3 and 2.0 (Helmstetter, 2003). In contrast to foreshock sequences, this rule can almost always be applied to aftershock sequences. Hence, Omori's law provides a good description of short-term temporal clustering. Long-term temporal clustering is less well understood, and one of the aims of this project is to develop methods to try to determine typical recurrence rates for mainshocks, specifically large ones.
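As a concrete sketch, Equation 2.1 and its time-integral (the expected number of aftershocks in a window) can be coded directly. The constants C, K and P below are arbitrary illustrative values, not fitted to any real sequence.

```python
def omori_rate(t, C=100.0, K=1.0, P=1.1):
    """Aftershock rate n(t) = C / (K + t)**P at time t after the mainshock
    (modified Omori law, Eq. 2.1)."""
    return C / (K + t) ** P

def expected_aftershocks(t1, t2, C=100.0, K=1.0, P=1.1):
    """Expected number of aftershocks between times t1 and t2, from the
    analytic integral of Eq. 2.1 (valid for P != 1)."""
    antideriv = lambda t: (K + t) ** (1.0 - P) / (1.0 - P)
    return C * (antideriv(t2) - antideriv(t1))
```

With P close to 1 the expected count grows roughly logarithmically with time, which is why aftershock sequences remain detectable long after the mainshock.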
2.2 FREQUENCY-MAGNITUDE RELATIONS
The Gutenberg-Richter (G-R) law, one of the longest-observed empirical relationships in seismology, is a power-law relating the number of earthquakes to magnitude. The general form of the G-R law is given in Equation 1.1b, where the constant b is commonly known as the b-value. The b-value describes the ratio of large to small earthquakes, with more large events leading to a lower b-value, and vice versa. Large b-values are observed where earthquake swarms occur, for instance seismicity triggered by magmatic intrusions, resulting in a large number of small events with no characteristic large event.
A simple power-law does not always describe the frequency-magnitude distribution adequately, especially in terms of larger events. A gamma distribution can adjust the distribution to more accurately describe the rate of occurrence of larger events, given that there will be a finite maximum magnitude (Main, 1995 and Main et al., 1999a). In fact, the G-R law is actually just a special case of the gamma distribution.
In some cases it may also be more appropriate to fit two power-laws to the distribution, using the method described by Main (1999b, 2000), which penalises extra parameters to quantifiably determine the best model.
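One standard way to estimate the b-value from a set of magnitudes is the maximum-likelihood estimator of Aki (1965), b = log10(e) / (mean(M) - Mc), where Mc is the completeness threshold. This estimator is not discussed in the report itself; the sketch below simply checks it on synthetic magnitudes drawn from a G-R distribution with b = 1.

```python
import math
import random

def b_value_mle(magnitudes, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for continuous magnitudes
    above the completeness threshold m_c."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_c)

# Synthetic G-R magnitudes: exponential with rate b*ln(10) above m_c = 4.0
random.seed(42)
b_true = 1.0
sample = [4.0 + random.expovariate(b_true * math.log(10.0)) for _ in range(5000)]
b_est = b_value_mle(sample, 4.0)  # close to 1.0
```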
2.3 EARTHQUAKE SOURCE PARAMETERS
2.3.1 SEISMIC MOMENT TENSOR
The scalar moment (in Newton-metres) of an earthquake is given by

M0 = µDA    (2.2)

where µ is the shear modulus (N m^-2), D is the averaged fault displacement (m) and A is the fault rupture area (m^2). The unit moment tensor describes the source type and orientation, and is of the form

    | M11 M12 M13 |
M = | M21 M22 M23 |    (2.3)
    | M31 M32 M33 |
Each element of the tensor describes the normalised strength of nine force-couples,
the relative orientations of which are shown in Figure 2.2.
The Seismic Moment Tensor is then defined to be the product of the scalar moment
and the unit moment tensor. The scalar moment also allows the Moment magnitude
(Mw) to be calculated, which more accurately describes the size of larger earthquakes
than other magnitude scales. Moment magnitude (Mw) is defined by

MW = (2/3) log10 M0 - 10.73    (2.4)

where M0 is the scalar seismic moment (here in dyne-cm, consistent with the constant 10.73).
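Equations 2.2 and 2.4 chain together directly. The fault parameters below are hypothetical, chosen only to illustrate the unit handling: Equation 2.2 gives M0 in N m, while the constant 10.73 in Equation 2.4 assumes M0 in dyne-cm (1 N m = 10^7 dyne-cm).

```python
import math

def scalar_moment_nm(mu, slip_m, area_m2):
    """Scalar seismic moment M0 = mu * D * A (Eq. 2.2), in N m."""
    return mu * slip_m * area_m2

def moment_magnitude(m0_nm):
    """Moment magnitude Mw = (2/3) log10(M0) - 10.73 (Eq. 2.4), after
    converting M0 from N m to dyne-cm."""
    return (2.0 / 3.0) * math.log10(m0_nm * 1.0e7) - 10.73

# Hypothetical rupture: mu = 3e10 N/m^2, 1 m of slip on a 10 km x 20 km fault
m0 = scalar_moment_nm(3.0e10, 1.0, 10e3 * 20e3)   # 6e18 N m
mw = moment_magnitude(m0)                          # ~6.46
```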
Figure 2.2 - The nine different force couples that constitute the moment tensor (after Aki &
Richards, 1980)
2.4 STRESS CHANGES
Earthquakes occur as a result of stress being built up, usually by tectonic forcing, until it exceeds the shear strength of a fault. When the fault ruptures, the stress is released rapidly and redistributed in the region surrounding the fault. Stress changes are either dynamic, meaning the transient stress changes associated with the transmission of energy as seismic waves, or static, meaning the permanent change to the background stress field. Static stress can increase or decrease following an earthquake, and the changes in static stress can be mapped out (refs for coulomb stress).
Although most triggering is believed to be due to static stress changes, dynamic stresses can also act as a triggering mechanism (Gomberg et al., 1997 and 1998). However, static changes and their correlation with the spatial distribution of aftershocks are easier to quantify, and so most studies have concentrated on these (Kagan, 1994; King et al., 1994; Toda et al., 1995; Stein et al., 1997; Harris, 1998; Harris and Simpson, 1998; Kagan and Jackson, 1998; Nalbant et al., 1998).
Stress changes play a crucial role in earthquake triggering. Changes in stress
(dynamic and static) cause aftershocks, which can be broadly classified as one of two
types (Dieterich, 1994). The first is further earthquakes on the main fault, where
patches of the fault that did not slip during the mainshock are exposed to large stress
increases, resulting in further slip. The second is where nearby faults readjust to the
change in static stress.
It is possible to quantify static stress changes after a fault rupture by calculating the Coulomb stress. This is a combination of normal and shear static stress changes, and is defined as

σf = τ + µσn    (2.5)

where σf is the Coulomb failure stress, τ is the shear stress, µ is the apparent coefficient of friction and σn is the normal stress.
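Equation 2.5 is simple enough to evaluate directly. The sign convention and the friction value below (0.4 is a commonly used figure) are assumptions for illustration; real Coulomb stress modelling also requires resolving the stress tensor onto a receiver fault plane, which is not shown here.

```python
def coulomb_stress_change(d_shear_pa, d_normal_pa, mu_apparent=0.4):
    """Change in Coulomb failure stress (Eq. 2.5):
    d_sigma_f = d_tau + mu' * d_sigma_n, all in Pa. Positive values promote
    failure (clock-advance); negative values inhibit it (clock-delay)."""
    return d_shear_pa + mu_apparent * d_normal_pa

# 0.1 MPa shear increase plus 0.05 MPa unclamping -> 0.12 MPa increase
delta = coulomb_stress_change(0.1e6, 0.05e6)
```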
Figure 2.3 shows the Coulomb stress change around a dextral strike-slip fault. Red areas indicate regions of increased stress, where more aftershocks would be expected, and blue regions indicate regions of decreased stress, where we might expect to see fewer aftershocks. Faults in the red areas could be said to have undergone a clock-advance, where future earthquakes have been brought forward due to the increased stress; similarly, the stress-drop areas will have undergone a clock-delay, where it will now take longer for stress to build up to a level where it can rupture a fault. Recent studies have suggested that aftershocks are not confined to the classical aftershock zone of stress increase (Parsons, 2002).
Figure 2.3 - Coulomb stress change around a dextral strike-slip fault (Source,
http://www.usgs.gov)
Pore pressure is also an important factor in stress distribution. Beeler et al. (2000) suggest that Coulomb stress change calculations using only an apparent coefficient of friction to incorporate both friction and pore pressure behaviours may be incorrect. They state that "measurements of in-situ permeability and porosity and monitoring of pore pressure within active faults during the seismic cycle are needed". If recent suggestions that Coulomb stress calculations fail to accurately predict areas of increased seismicity are correct, then more studies into correlations between pore pressure and seismicity may be of benefit in refining this technique.
Several studies have attempted to quantify the area over which triggering takes place.
Gasperini et al. (1989) looked at spatial and temporal clustering in Italy and found
that triggering mainly took place in an influence region of 14 to 60 days and 80 to 140
km. Lomnitz (1996) reported that for large earthquakes (M > 7), mainshocks are
surrounded by aftershocks at local distances, and triggered seismicity at large
distances (300 to 1000 km), separated by a zone of seismic quiescence.
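Quantifying influence regions of this kind, and the pair-separation analysis described in the abstract, requires great-circle distances between epicentres. A minimal sketch using the haversine formula, assuming a spherical Earth of radius 6371 km:

```python
import math

def separation_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    """Great-circle distance between two epicentres, via the haversine
    formula on a spherical Earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2.0) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2.0) ** 2)
    return 2.0 * r_earth_km * math.asin(math.sqrt(a))
```

A quarter of the equator, separation_km(0, 0, 0, 90), comes to about 10,007 km, a useful sanity check before binning pair separations into histograms.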
2.5 EARTHQUAKE CATALOGUES
2.5.1 INTERNATIONAL SEISMOLOGICAL CENTRE
The International Seismological Centre (ISC) provides a database of global
earthquake hypocentral solutions covering the time period 1964 - present.
Approximately 1,000,000 events are available from the database. The homogeneity of the catalogue and the large number of events make it suitable for global and regional studies concerned with spatial and temporal distributions of seismicity.
2.5.2 HARVARD CENTROID MOMENT TENSOR CATALOGUE
The Harvard Centroid Moment Tensor (CMT) Catalogue contains around 15,000 events with magnitudes greater than about Mw 5.5, determined using the method described by Dziewonski et al. (1981), and is complete only since ~1978. As its name suggests, it provides moment tensor solutions, making it suitable for global and regional studies involving comparisons between seismic and tectonic moment tensors within a given time period. The CMT catalogue also has the advantage that moment magnitudes can be calculated from the provided scalar moments for all of its events, allowing more accurate differentiation between larger earthquakes.
2.5.3 OTHER SOURCES OF DATA
These two catalogues are not the only earthquake catalogues by any means, but are rather the two largest and most homogeneous, making them most suitable for global studies. Other sources of global earthquake hypocentres include the United States Geological Survey's National Earthquake Information Centre (NEIC) and Preliminary Determination of Epicentres (PDE) databases, as well as the Prototype International Data Centre (PIDC).
On a smaller scale, many regional and local catalogues have also been generated from temporary networks of seismometers. The method described in the next chapter involving analysis of earthquake triggering could be applied to these regional catalogues. When data from the Corinth Rift Laboratory seismic network become available, an analysis of triggering over a much smaller region could be performed at a later stage of the project. In addition, the downhole pore pressure changes can be examined for their response to local earthquakes.
2.6 FLINN-ENGDAHL REGIONALISATION
When dealing with global seismicity, it is beneficial to have an objective system to allow results from different studies to be compared. The Flinn-Engdahl regionalisation scheme splits the world into separate zones and has been accepted as a convention in the academic community. This allows results achieved as part of this project to be compared to published data.
The regionalisation scheme was proposed in 1965 (Flinn & Engdahl, 1965), defined in 1974 and revised in 1995 (Young et al., 1996). It divides the world into 50 seismic regions, which correlate closely with different tectonic settings, and 757 geographic regions, which are used mainly for notation in real-time earthquake bulletins. The 50 zones can be categorised as containing the three main types of plate boundaries (mid-ocean ridges, subduction zones and transform zones) and intra-plate regions. The main advantage of this regionalisation is that it is fixed before any regional study, thereby avoiding retrospective selection bias in any spatio-temporal statistical study.
Figure 2.4 - Flinn-Engdahl Seismic regions
3 SEISMOTECTONICS
3.1 PLATE MOTIONS
Since the Vine-Matthews-Morley hypothesis of sea-floor spreading (Vine and Matthews, 1963) was proposed in the 1960s as a mechanism to explain magnetic polarity reversals in oceanic crust, the lateral motion of tectonic plates has been well determined through a variety of methods. Initially, these motions were believed to be caused by convection cells in the mantle imparting a force on the underside of the lithosphere, with 'ridge push' (caused by the accretion of new oceanic crust at mid-ocean ridges) and 'slab pull' (caused by the gravity-driven sinking of colder, denser oceanic lithosphere into the mantle at subduction zones) being secondary consequences. However, these forces were later recognised as the main factors, with slab pull now favoured as the dominant one.
As the plates are effectively rigid bodies on the surface of a sphere, their motion can be described as rotation around a fixed pole, known as an Euler pole. So, any absolute plate motion can be expressed as an angular velocity about an Euler pole.
Taking an arbitrary origin point, the relative velocities of each of the other plates can be described, and relative spreading or collision rates at plate boundaries can be calculated. The most recent best-fitting plate velocity model is NUVEL-1A (DeMets et al., 1994).
However, it is often desirable to describe plate motion in terms of a linear velocity. This can be done only when dealing with areas small enough that the angular rotation vector can be approximated by a linear velocity vector. This is useful when describing relative velocities across a plate boundary to express plate separation or closure rates. The linear velocity v for a point on the plate can be derived from the equation

v = ωr    (3.1)

where ω is the angular velocity of the plate and r is the minimum distance to the axis of rotation of the plate.
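Equation 3.1 can be sketched for a point and pole given in geographic coordinates: r is then R sin(δ), where δ is the angular distance from the Euler pole. The pole position and rotation rate below are arbitrary illustrative values, not taken from NUVEL-1A.

```python
import math

def plate_speed_mm_yr(lat, lon, pole_lat, pole_lon, omega_deg_myr,
                      r_earth_km=6371.0):
    """Linear speed |v| = omega * r (Eq. 3.1), with r = R*sin(delta) the
    perpendicular distance from the point to the rotation axis, delta being
    the angular distance from the Euler pole."""
    la, pla = math.radians(lat), math.radians(pole_lat)
    dlon = math.radians(lon - pole_lon)
    # angular distance point -> pole (spherical law of cosines)
    cos_delta = (math.sin(la) * math.sin(pla)
                 + math.cos(la) * math.cos(pla) * math.cos(dlon))
    delta = math.acos(max(-1.0, min(1.0, cos_delta)))
    omega_rad_yr = math.radians(omega_deg_myr) / 1.0e6
    return omega_rad_yr * r_earth_km * math.sin(delta) * 1.0e6  # km/yr -> mm/yr
```

A rotation of 1°/Myr moves a point 90° from the pole at about 111 mm/yr, the familiar "one degree is ~111 km" scale.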
3.1.1 PALAEOMAGNETIC DATA
By measuring the positions of seafloor magnetic anomalies, which correspond to geomagnetic reversals, and the age of the oceanic lithosphere, it is possible to calculate spreading rates at mid-ocean ridges. These anomalies provide data averaged over 3 Ma, and the average rates are very similar to those measured by geodetic methods over the past 10 years (as shown in Figure 1.1). Analysis of magnetic anomalies across mid-ocean ridges was used by Vine and Matthews (1963) to hypothesise sea-floor spreading, which in turn led to the acceptance of plate tectonics.
3.1.2 SPACE GEODETIC METHODS
A number of methods exist which allow very accurate measurements of plate
velocities. Some of the first to be developed include Very Long Baseline
Interferometry (VLBI) and Satellite Laser Ranging (SLR).
The first involves measuring the lag in radio signals from astronomical sources measured by radio telescopes on different plates. Repeat surveys allow the rate of change of position between a pair of receivers to be determined. The SLR technique (Smith et al., 1990) involves bouncing laser signals off purpose-built satellites, which are covered in reflectors. The positions of the ground stations with respect to one another can then be established. Although these are accurate methods, relative velocities could be measured only between ground stations, of which there was a scarcity. This meant that relative velocities could only be established between four plates: Eurasia, North America, Australia and the Pacific.
The addition of GPS (Global Positioning System) and satellite tracking measurements
has led to the inclusion of a further 5 plates: South America, Africa, India, Nazca and
Antarctica (Larson et al., 1997 and Cretaux et al., 1998). GPS has proven to be a more
adaptable method, with receivers being portable enough to allow denser networks of
measuring sites to be established.
Comparisons of the two methods (Larson et al., 1997) show plate velocities to be remarkably constant over time; thus the tectonic forcing for earthquakes is also constant.
3.1.3 HOTSPOTS
Hotspots (small, long-lasting and exceptionally hot regions in the mantle) provide another method for constraining plate motions. Assuming that hotspots remain relatively stable in the mantle, increased volcanism occurs at the surface as lithospheric plates move over them, as the increased heat provides a persistent source of magma. As the lithosphere moves with respect to the underlying mantle, a trail of volcanism is left at the surface. In oceanic regions, this is seen as a chain of islands, for example the Hawaiian seamounts, with the oldest islands and rocks furthest from the present hotspot location. Although this is less accurate than the palaeomagnetic method, it spans a period of time nearly twice as long (Gripp and Gordon, 2002).
3.2 SEISMIC EFFICIENCY
Seismic efficiency describes the rate of release of seismic moment divided by the rate
of build-up of tectonic moment (equation 3.2).
17
η=
dM seis / dt
dM tec / dt
(3.2)
where η is the seismic efficiency, M_seis is the cumulative seismic moment and M_tec is the total tectonic moment. The cumulative seismic moment is computed by summing moment tensors for individual earthquakes from the CMT catalogue. The tectonic moment is derived from plate velocity, thickness and rigidity. Bowers and Hudson (1999) defined the total scalar moment release for an area in which there is negligible volumetric deformation as

dM/dt = 2μV max(|dε_xx/dt|, |dε_yy/dt|, |dε_zz/dt|)    (3.3)

meaning that horizontal strain rates (from regional GPS surveys) allow dM/dt to be determined.
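As a minimal numerical sketch of Equation 3.2, the efficiency can be computed by summing scalar moments from a catalogue and dividing by a tectonic moment rate. All values below are illustrative only, not taken from the CMT catalogue:

```python
# Sketch of Equation 3.2: seismic efficiency as the ratio of seismic to
# tectonic moment release rates. All numbers are hypothetical.

# Illustrative scalar seismic moments (N m) for events in a 25-year window
moments = [3.5e18, 1.2e19, 8.0e17, 4.4e18]
years = 25.0
seismic_rate = sum(moments) / years            # dM_seis/dt (N m per year)

# Illustrative tectonic moment rate: rigidity * fault area * slip rate
mu = 3.0e10                 # rigidity (Pa)
area = 100e3 * 30e3         # fault length x seismogenic thickness (m^2)
slip_rate = 0.03            # slip rate (m per year)
tectonic_rate = mu * area * slip_rate          # dM_tec/dt (N m per year)

eta = seismic_rate / tectonic_rate
print(f"seismic efficiency eta = {eta:.2f}")
```

With these numbers η is below 1; choosing a short window containing one great earthquake would push the same ratio above 1, as discussed below.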
Earthquake recurrence periods are discussed by Molnar (1979). He derives a relation for recurrence rates of seismic moments from the Gutenberg-Richter formula (Equation 1.1b) and the generalised form of the moment-magnitude relation (Equation 1.2). The derived relation is

N(M_0) = α M_0^(−β)    (3.4)

where

α = 10^(a + bd/c)    (3.5)

and

β = b/c    (3.6)

In Equations 3.5 and 3.6, the constants b, c and d are quite well determined (b is the b-value from the G-R law, and c and d are constants from the generalised moment-magnitude relation). Knowing average slip rates on faults, Equation 3.4 allows the expected recurrence rates for events of different seismic moment to be calculated.
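A worked example of Equations 3.4-3.6, taking the commonly quoted moment-magnitude constants c = 1.5 and d = 9.1 (for M_0 in N m) and purely illustrative Gutenberg-Richter constants a and b:

```python
# Sketch of Molnar's (1979) recurrence relation, Equations 3.4-3.6.
# G-R law: log10 N = a - b*M; moment-magnitude: log10 M0 = c*M + d.
a, b = 5.0, 1.0      # a and b chosen for illustration only
c, d = 1.5, 9.1      # standard constants for M0 in N m

alpha = 10 ** (a + b * d / c)    # Equation 3.5
beta = b / c                     # Equation 3.6

M0 = 1.0e20                      # target seismic moment (N m), roughly Mw 7.3
N = alpha * M0 ** (-beta)        # Equation 3.4: annual rate of events >= M0
print(f"beta = {beta:.3f}, N(M0) = {N:.3e} per year")
```

With these illustrative constants the expected rate is of order 5e-3 events per year, i.e. a recurrence interval of a couple of centuries.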
Over a long enough timescale, seismic efficiency should always be equal to or less than 1. However, taken over a short period of time, an area may show an efficiency of greater than 1. Although this appears nonsensical, it is an effect of choosing a period containing a large earthquake (and the seismic moment release associated with it) whose characteristic repeat time is greater than the observation period. This is a problem with larger earthquakes: they release a large proportion of the total seismic moment, but typically recur over periods greater than the span of the instrumental record. This can be seen in Figure 3.1, which shows that cumulative seismic moment release for CMT earthquakes over the period 1976-2002 is dominated by large earthquakes.
[Figure 3.1: cumulative seismic moment (dyne-cm) against date, with curves for magnitude bins from < 6.0 Mw to < 9.9 Mw]
Figure 3.1 - Cumulative seismic moment release in the CMT catalogue, 1977-2000
4 SEISMIC HAZARD AND RISK
4.1 DEFINITIONS
Seismic hazard and risk are two terms that are often used interchangeably, but there
are crucial and fundamental differences between the two that are often overlooked.
Reiter (1990) defines seismic hazard as “the potential for dangerous, earthquake-related natural phenomena such as ground shaking, fault rupture, or soil liquefaction.
These phenomena could result in adverse consequences to society such as the
destruction of buildings or the loss of life”. Seismic risk is defined as “the probability
of occurrence of these consequences”.
So, seismic hazard is the intrinsic natural danger associated with earthquakes and
earthquake-related processes, such as ground-shaking, landslides and tsunamis. It is
determined by the largest, most damaging earthquake (a ‘maximum possible
earthquake’) that could occur in a specified area, and is based solely on geological
factors. Maximum earthquakes (Reiter, 1990) can also be described as maximum credible earthquakes: a more reasonable, but less conservative (conservative in seismic terms meaning larger rather than smaller), estimate that takes present-day tectonics into account. Seismic hazard cannot be reduced by human
intervention.
Seismic risk on the other hand is the danger the hazard poses to life and property. It is
the probable building damage that would occur if a maximum earthquake occurred.
Hence hazard can be described as an unavoidable geological fact which cannot be
affected by human intervention, while risk can be reduced through human actions,
such as the implementation of building codes and earthquake preparedness. The
implementation of risk reduction measures depends on an accurate assessment of
seismic hazard, which in turn depends on our understanding of the spatial and
temporal distribution of earthquakes.
4.2 DETERMINATION OF SEISMIC HAZARD
Seismic hazard is determined by several factors. Reiter identifies four steps in the
analysis of probabilistic seismic hazard (Figure 4.1). Step 1 is “the definition of
earthquake sources”. Sources are nearby active faults, each defined as being of
uniform earthquake potential. Step 2 is “the definition of seismicity recurrence
characteristics”, or in other words the frequency-magnitude distribution. A maximum
credible earthquake is chosen for each source. The third step is estimation of the
earthquake effect, taking attenuation, peak acceleration and distance to the source into
account. The final step determines the hazard at the site, quantifying the probability of
exceeding different levels of ground motion at the site during a specified period of
time.
Figure 4.1 - Basic steps of probabilistic seismic hazard analysis (after Reiter, 1990)
Seismic hazard maps give the upper limit of ground acceleration that can be expected in a given period. As such, knowing recurrence times for large earthquakes is again an important factor in the estimation of seismic hazard. Simply knowing the maximum possible earthquake is not in itself sufficient to allow an accurate estimation. For example, while palaeoseismology may provide evidence of a magnitude 9 earthquake 500 years ago, the recurrence time, dictated in part by the regional tectonic setting, may be several thousand years. Hence a seismic hazard assessment giving probabilities of exceeding a certain level of ground-shaking may not be reliable unless recurrence times are also well constrained.
Seismic hazard maps for the United States and Europe are shown in Figures 4.2 and
4.3 respectively. As would be expected, the regions of highest seismic hazard are
closely associated with regions undergoing active tectonic deformation.
Figure 4.2 - Probabilistic seismic hazard map of the USA (Source: http://geohazards.cr.usgs.gov/eq)
Figure 4.3 - Seismic hazard map of Europe (Source: http://www.unesco.org/)
5 PRELIMINARY RESULTS
5.1 EARTHQUAKE-EARTHQUAKE TRIGGERING
As part of an EU-funded project to study the Corinth Rift area (http://www.corinth-rift-lab.org/index_en.html), the method described by Huc and Main (2003) was adapted for application to the ISC catalogue, with the larger number of events allowing a regional study of triggering centred on Greece.
Data were extracted from the ISC catalogue in 5 circular regions, with radii of 500, 1000, 1500, 2000 and 2500 km, as shown in Figure 5.1.
Figure 5.1 - Circular regions and events used in triggering analysis
Circular regions were initially chosen to minimise any edge effects, but results
showed that the shape of the study area was not important, as long as it was
significantly larger than the limit to which a triggering signal was observed. As can be
seen from Figure 5.1, seismicity is clustered around Greece, the Aegean Sea and
Western Turkey, as well as the Iraq-Iran border, Italy and west of the Caspian Sea.
Using a null hypothesis of spatially clustered, temporally random earthquake
occurrence, it is possible to look for typical triggering times and distances in
earthquake triggering.
For a given dataset, each earthquake is treated as a potentially triggering event, and every subsequent event as a potentially triggered event. Times and distances between each pair of earthquakes are calculated, and histograms of the number of pairs occurring within a distance range (r, r+dr) are generated, with bin sizes of 5 km. This minimum bin size is determined by the accuracy of the determination of earthquake epicentres. Different time limits are used to study the evolution of triggering with time after the triggering event.
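The pair-counting step described above can be sketched as follows. The event-tuple layout and function names here are illustrative choices, not the code used in the study, and a real analysis would read events from the ISC catalogue:

```python
import itertools
import math

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance between two epicentres in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def pair_histogram(events, max_days, bin_km=5.0, max_km=500.0):
    """Count earthquake pairs per distance bin (r, r + bin_km).

    events: list of (time_in_days, lat, lon), sorted by time. Every event
    is a potential trigger for each later event occurring within max_days.
    """
    counts = [0] * int(max_km / bin_km)
    for (t1, la1, lo1), (t2, la2, lo2) in itertools.combinations(events, 2):
        if t2 - t1 > max_days:
            continue
        r = haversine_km(la1, lo1, la2, lo2)
        if r < max_km:
            counts[int(r / bin_km)] += 1
    return counts
```

The same routine can then be applied with different `max_days` limits to follow the triggering signal through time.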
First the number of pairs in each bin is normalised to the annulus area. This is because the surface area of each annulus bounded by the range (r, r+dr) increases with epicentral distance r, and so the probability of finding an earthquake in one of these annuli also increases. The theoretical probability density function of spatially random points on the surface of a sphere derived by Lomnitz (1995) is shown in Figure 5.2.
[Figure 5.2: f(x) = K sin(x) e^cos(x) plotted against angular distance x from 0° to 180°]
Figure 5.2 - Probability density function of random points on the surface of a sphere (after
Lomnitz, 1995).
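The quoted density can be checked numerically. The sketch below, assuming x is measured in radians, recovers the normalising constant K by trapezoidal integration; analytically, the integral of sin(x) e^cos(x) over [0, π] is e − 1/e:

```python
import math

def lomnitz_unnormalised(x):
    """Density proportional to sin(x) * exp(cos(x)), x in radians (Lomnitz, 1995)."""
    return math.sin(x) * math.exp(math.cos(x))

# Normalise by trapezoidal-rule integration over [0, pi]
n = 10000
xs = [math.pi * i / n for i in range(n + 1)]
ys = [lomnitz_unnormalised(x) for x in xs]
integral = sum((ys[i] + ys[i + 1]) / 2 for i in range(n)) * (math.pi / n)

K = 1.0 / integral
print(f"numerical K = {K:.4f}, analytic K = {1.0 / (math.e - 1.0 / math.e):.4f}")
```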
When the raw and modified data are viewed beside each other, the raw data tend to show an initial peak which drops off quickly, followed by a slow rise. This corresponds to the expected clustering near triggering events, superimposed on the curve shown in Figure 5.2. The modified data show the initial peak, with the data then falling with distance as would be expected. This is shown in Figure 5.3. It should be noted that the fall-off in the number of pairs of events after about 500 km is due to edge effects. These do not affect the triggering signal, as this is later seen to occur well within this limit.
[Figure 5.3: number of pairs of events against distance (0-5000 km), showing raw and area-corrected curves]
Figure 5.3 - Histogram of number of pairs of events (raw and surface-area corrected)
However, real seismicity is not spatially random, but instead is clustered in fault zones
and on plate boundaries. To correct for this effect, the same method is applied to
synthetic catalogues. These are created by generating random dates for each
hypocentre, within the time period of the original catalogue. Twenty synthetic
catalogues are created in this way, and are resorted in chronological order, which
effectively shuffles the hypocentres in each one. This preserves the spatial distribution
of seismicity, while deliberately destroying any triggering signal by randomising the
event times.
Histograms are then generated for each synthetic catalogue. These are averaged,
producing a background ‘noise’ we would expect to see if earthquakes were actually
distributed randomly in time. The synthetic catalogues form our null hypothesis. The
histogram of the real data is then compared to the averaged, random one. The
difference in the two should then be the ‘real’ triggering signal, seen above the
background noise.
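The shuffling step described above can be sketched as follows; the event layout and function name are illustrative:

```python
import random

def time_randomised_catalogue(events, seed=None):
    """Create one synthetic catalogue: keep every hypocentre, but assign it
    a uniformly random time within the span of the original catalogue, then
    re-sort chronologically. The spatial distribution of seismicity is
    preserved while any temporal triggering signal is destroyed."""
    rng = random.Random(seed)
    times = [t for t, _, _ in events]
    t0, t1 = min(times), max(times)
    synthetic = [(rng.uniform(t0, t1), lat, lon) for _, lat, lon in events]
    synthetic.sort(key=lambda e: e[0])
    return synthetic

# Pair-separation histograms would then be averaged over e.g. 20 such
# catalogues to estimate the background expected under the null hypothesis.
```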
[Figure 5.4: histograms of number of pairs of events against distance (0-200 km) for the ISC catalogue and 20 time-randomised catalogues]
Figure 5.4 - Histograms of pair separation distances for earthquakes within 2500 km of Gulf
of Corinth, within 30 days of triggering event.
These methods were carried out on the 5 areas shown in Figure 5.1, for time periods ranging from 1 to 1000 days. Each triggering signal was then fitted to an exponential of the form

P(r) = A e^(−r/L)    (5.1)

where A is a scaling constant, r is pair separation and L is the correlation length.
Analysis of each of the study areas showed that triggering is statistically significant up
to ~100 km, which is comparable to lithospheric thickness, and that the triggering
signal decays exponentially.
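A sketch of fitting the decay form of Equation 5.1. Here an ordinary log-linear least-squares fit is used; this is one simple choice for illustration, not necessarily the fitting method used in the study:

```python
import math

def fit_exponential_decay(r, p):
    """Least-squares fit of p = A * exp(-r / L) via log-linear regression.
    Returns (A, L). Assumes all p > 0."""
    n = len(r)
    y = [math.log(v) for v in p]          # ln p = ln A - r / L
    rbar = sum(r) / n
    ybar = sum(y) / n
    slope = (sum((ri - rbar) * (yi - ybar) for ri, yi in zip(r, y))
             / sum((ri - rbar) ** 2 for ri in r))
    A = math.exp(ybar - slope * rbar)
    L = -1.0 / slope
    return A, L

# Synthetic check with L = 9 km, within the 7-11 km range reported
r = [5.0 * i for i in range(1, 21)]
p = [100.0 * math.exp(-ri / 9.0) for ri in r]
A, L = fit_exponential_decay(r, p)
print(f"A = {A:.1f}, L = {L:.2f} km")
```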
Figure 5.5 shows the evolution of the mean triggering distance with time and area. As can be seen, there is little dependence on the size of the study area. The mean triggering distance <r> increases only slowly with time, much more slowly than Gaussian diffusion, with an exponent H ~ 1/3 instead of H ~ 1/2.
Figure 5.6 shows the correlation lengths for each area and time period. Again, the correlation length does not show a strong dependence on the area chosen, as long as the area has a characteristic width larger than the distance to which the triggering signal can be observed above the background seismicity (100 km in this case). The correlation lengths vary between 7 and 11 km, which agrees well with Huc and Main (2003), who reported correlation lengths of around 5-15 km.
26
Distance (km)
100
10
500 km radius
1000 km radius
1500 km radius
2000 km radius
2500 km radius
1
0.1
1
10
100
1000
10000
Time (days)
Figure 5.5 - Evolution of mean triggering distance as a function of time and area
[Figure 5.6: correlation distance (km) against time (days) for the 500-2500 km radius study areas]
Figure 5.6 - Evolution of correlation length as a function of time and area
Again, the results are not strongly sensitive to the choice of study area, as long as the
study area maximum radius is an order of magnitude greater than the lithosphere
thickness. For smaller study areas, edge effects can lead to an underestimate in the
triggering distance.
6 SUMMARY
As discussed, there are several techniques which allow constraints to be placed on
future earthquake behaviour. While deterministic prediction is not at present
achievable, probabilistic forecasting is a useful tool for seismic hazard analysis.
The direction of this project has changed slightly from the original proposal, with the emphasis shifted from the comparison of seismic and tectonic moments towards earthquake-earthquake triggering. This will allow estimates of both short- and long-term seismic hazard to be improved. Estimates of short-term hazard will be improved by studying earthquake triggering, as discussed in Chapter 5, and quantifying the probability of large triggered events. Long-term hazard will be investigated by comparing seismic and tectonic moments. Geodetic measurements allow components of strain rates to be measured, from which tectonic moments are calculated.
The recent publication of several papers (Felzer, 2003, and Parsons, 2002) raises the issue of secondary aftershocks. Previously, it had been put forward that triggered seismicity (above the background rate) was more likely to occur in areas of Coulomb stress increase. The change in Coulomb stress is calculated from the geometry of a fault rupture, and so areas surrounding a fault can be marked as areas of stress increase or decrease. It is in these areas of stress increase that primary aftershocks are classically thought to occur. These recent studies, however, report triggered events, including large aftershocks outside the classical aftershock zone, that do not fit this simple pattern.
These discrepancies may also be related to assumptions made as to the apparent coefficient of friction when determining the Coulomb stress change. Investigations of downhole pore-pressure may help to determine apparent coefficients of friction more accurately, which in turn will allow Coulomb stress changes to be better evaluated.
7 PLANS FOR NEXT 6 AND 18 MONTHS
7.1 6 MONTHS
The method described in Chapter 5 is also being applied to other regions around the world to look for changes in <r> and the correlation length in different tectonic settings.
Tectonic zones around the world were divided into 50 regions by Flinn and Engdahl (Flinn & Engdahl, 1965), and these are known as Flinn-Engdahl (F-E) zones. The 50 seismic regions can be categorised as containing various classes of tectonic settings, such as mid-ocean ridge, subduction, transform and intra-plate zones, and provide an objective way of comparing results between studies that have used the same method. The ISC catalogue will again be used, as it provides in the region of 1,000,000 raw events. However, when earthquakes deeper than 70 km or with an mb of less than 4.5 have been discarded, approximately 90,000 events remain, distributed unevenly throughout the F-E zones. Unfortunately some zones do not contain enough hypocentres to perform a statistically significant analysis, so it will not be possible to apply the method described in Chapter 5 to all the regions.
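The filtering step above might be sketched as below. The record field names are hypothetical; real ISC records have their own format:

```python
def filter_catalogue(events, max_depth_km=70.0, min_mb=4.5):
    """Keep shallow events at or above the completeness threshold.
    events: iterable of dicts with 'depth' (km) and 'mb' keys (hypothetical
    field names chosen for illustration)."""
    return [e for e in events
            if e["depth"] <= max_depth_km and e["mb"] >= min_mb]

catalogue = [
    {"depth": 10.0, "mb": 5.1},   # kept
    {"depth": 120.0, "mb": 5.5},  # discarded: too deep
    {"depth": 33.0, "mb": 4.0},   # discarded: below magnitude threshold
]
print(len(filter_catalogue(catalogue)))  # 1
```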
Work has also been carried out to optimise the method to shorten the somewhat
lengthy processing times required, with significant improvements achieved. So far 11
F-E zones, each containing at least 3,000 earthquakes, have been processed using the
method described.
When all the results have been processed, regression analysis will be performed on
them, and it will be possible to compare the results to other values such as b-values or
relative plate velocities across boundaries.
Some further work is being carried out on applying the method to volcanic seismicity
around large volcanoes such as Mauna Loa and the Cascades. Initial discussions
suggest that the statistical analysis involved in interpreting the results may also be
refined.
7.2 18 MONTHS
It is also hoped that in the near future seismic data from the CRL project will become available, allowing an analysis of a catalogue with smaller magnitude events (Figure 7.1). This will make it possible to see if smaller magnitudes affect the distance and amplitude of the triggering effect seen on the larger scale.
Figure 7.1 - Seismicity in the Gulf of Corinth as recorded by the CRL Project (source:
http://www.corinth-rift-lab.org/index_en.html)
A comparison of seismic and tectonic moment release rates will also be carried out, as discussed in Chapter 3. Given that this method is best suited to collision zones, possible regions for study include the India-Asia continental collision, the Nazca-South America subduction zone, and also the Mediterranean. The method could equally be applied to transform zones, such as the San Andreas Fault. The last two regions in particular have well established geodetic monitoring networks, allowing a more detailed study of continental deformation, as opposed to using an approximate linear velocity derived from plate kinematic models such as NUVEL-1A. Previous studies (Bastow, 2001) which focused on the North Pacific would be complemented by this work.
Another possible aspect of earthquake triggering that could be investigated is
Coulomb stress triggering. Coulomb stress is derived from analysing the change in
static shear and normal stresses after an earthquake. It can then be said that further
seismicity is more likely to occur in zones where the Coulomb stress has increased,
although recent publications have cast doubt on such a simple view.
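The quantity involved can be sketched with the standard Coulomb failure stress formulation (e.g. King et al., 1994): ΔCFS = Δτ + μ′Δσn, with the normal stress change taken positive for unclamping. The values below are illustrative only:

```python
def coulomb_stress_change(d_shear, d_normal, mu_apparent=0.4):
    """Change in Coulomb failure stress on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, where the shear stress change is
    resolved in the slip direction, the normal stress change is positive
    for unclamping, and mu' is the apparent coefficient of friction."""
    return d_shear + mu_apparent * d_normal

# Illustrative values in MPa
print(coulomb_stress_change(0.1, 0.05))   # positive: failure promoted
print(coulomb_stress_change(-0.2, 0.1))   # negative: stress shadow
```

The sensitivity to `mu_apparent` is exactly the assumption discussed in Chapter 6, since a different apparent friction changes both the size and the sign pattern of the predicted lobes.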
8 REFERENCES
Aki, K. & Richards, P.G. (1980), Quantitative Seismology, Freeman
Bastow, I.D. (2001), A Comparison of Seismic Moment Tensors to Tectonic
Deformations in the North Pacific, Unpublished Manuscript, University of Edinburgh
Beeler, N.M., Simpson, R.W., Hickman, S.H. & Lockner, D.A. (2000), Pore fluid
pressure, apparent friction, and Coulomb failure, J. Geophys. Res. 105, 25,533-25,542
Bowers, D & Hudson, J.A. (1999), Defining the scalar moment of a seismic source
with a general moment tensor, Bul. Seismol. Soc. Am. 107, 937-959
Cretaux, J.-F., Soudarin, L., Cazenave, A. & Bouille, F. (1998), Present-day tectonic
plate motions and crustal deformation from the DORIS space system, J. Geophys.
Res. 103, 30,167-30,181
Caputo, M. (2000), Comparison of five independent catalogues of earthquakes of a
seismic region, Geophys. J. Int. 143, 417-426
DeMets, C., Gordon, R.G., Argus, D.F. & Stein, S. (1990), Current plate motions,
Geophys. J. Int. 101, 425-478
Dieterich, J.H. (1994), A constitutive law for rate of earthquake production and
application to earthquake clustering, J. Geophys. Res. 99, 2,601-2,618
Dziewonski, A.M., Chou, T.-A. & Woodhouse, J.H. (1981), Determination of
Earthquake Source Parameters From Waveform Data for Studies of Global and
Regional Seismicity, J. Geophys. Res. 86, 2,825-2,852
Felzer, K.R., Becker, T.W., Abercrombie, R.E., Ekstrom, G. & Rice, J.R. (2002),
Triggering of the 1999 Mw 7.1 Hector Mine earthquake by aftershocks of the 1992
Mw 7.3 Landers Earthquake, J. Geophys. Res. 107, xxxxx-xxxxx
Felzer, K.R., Abercrombie, R.E. & Ekstrom G. (2003), Secondary aftershocks and
their importance for aftershock forecasting, J. Geophys. Res., In Press
Flinn, E.A. & Engdahl, E.R. (1965), A proposed basis for geographical and seismic
regionalisation, Rev. Geophysics. 3, 123-149
Gasperini, P. & Mulargia, F. (1989), A statistical analysis of seismicity in Italy: the
clustering properties, Bul. Seismol. Soc. Am. 79, 973-988
Gomberg, J., Blanpied, M.L. & Beeler, N.M. (1997), Transient Triggering of Near
and Distant Earthquakes, Bul. Seismol. Soc. Am. 87, 294-309
Gomberg, J., Beeler, N.M, Blanpied, M.L. & Bodin, P. (1998), Earthquake triggering
by transient and static deformation, J. Geophys. Res. 103, 24,411-24,426
Gripp, A.E. & Gordon, R.G. (2002), Young tracks of hotspots and current plate
velocities, Geophys. J. Int. 150, 312-361
Harris, R.A. (1998). Introduction to special section: Stress triggers, stress shadows,
and implications for seismic hazard, J. Geophys. Res. 103, 24,347-24,358
Harris, R.A. & Simpson, R.W. (1998), Suppression of large earthquakes by stress
shadows: A comparison of Coulomb and rate-and-state failure, J. Geophys. Res. 103,
24,439-24,451
Helmstetter, A. & Sornette, D. (2002), Diffusion of Earthquake Aftershock Epicentres
and Omori’s Law: Exact Mapping to Generalized Continuous-Time Random Walk
Models, Phys. Rev. E 66 (6) art. no. 061104
Huc, M. & Main, I.G. (2003), Anomalous stress diffusion in earthquake triggering:
correlation length, time-dependence, and directionality, In Press
Jackson, J. & McKenzie, D. (1988), The relationship between plate motions and
seismic moment tensors, and the rates of active deformation in the Mediterranean and
Middle East, Geophysical Journal 93, 45-73
Kagan, Y.Y. & Knopoff, L. (1981), Stochastic synthesis of earthquake catalogs, J.
Geophys. Res. 86, 2853-2862
Kagan, Y.Y. & Jackson, D.D. (1991), Long-term earthquake clustering, Geophys. J.
Int. 104, 117-133
Kagan, Y.Y. (1994), Incremental stress and earthquakes, Geophys. J. Int. 117, 345-364
Kagan, Y.Y. & Jackson, D.D. (1998), Spatial aftershock distribution: Effect of normal
stress, J. Geophys. Res. 103, 24,453-24,467
Kagan, Y.Y. & Jackson, D.D. (2000), Probabilistic forecasting of earthquakes,
Geophys. J. Int. 143, 438-453
Karakaisis, G.F., Papazachos, C.B., Savvaidis, A.S. & Papazachos, B.C. (2002),
Acclerating seismic crustal deformation in the North Aegean Trough, Greece,
Geophys. J. Int. 148, 193-200
King, G.C.P., Stein, R.S. & Lin, J. (1994), Static Stress Changes and the Triggering of
Earthquakes, Bul. Seismol. Soc. Am. 84, 935-953
Larson, K.M., Freymueller, J.T. & Philipsen, S. (1997), Global plate velocities from
the Global Positioning System, J. Geophys. Res. 102, 9,961-9,981
Lay, T. & Wallace, T.C. (1995), Modern Global Seismology, Academic Press
Lomnitz, C. (1995), On the Distribution of Distances between Random Points on a
Sphere, Bul. Seismol. Soc. Am. 85, 951-953
Lomnitz, C. (1996), Search of a Worldwide Catalog for Earthquakes Triggered at
Intermediate Distances, Bul. Seismol. Soc. Am. 86, 293-298
Main, I.G. (1995), Earthquakes as Critical Phenomena: Implications for Probabilistic
Seismic Hazard Analysis, Bul. Seismol. Soc. Am. 85, 1299-1308
Main, I., Irving, D., Musson, R. & Reading, A. (1999a), Constraints on the frequency-magnitude relation and maximum magnitudes in the UK from observed seismicity
and glacio-isostatic recovery rates, Geophys. J. Int. 137, 535-550
Main, I.G., Leonard, T., Papasouliotis, O., Hatton, C.G. & Meredith, P.G. (1999b),
One Slope or two? Detecting statistically significant breaks of slope in geophysical
data, with application to fracture scaling relationships, Geophysical Research Letters
26, 2,801-2,804
Main, I. (2000), Apparent Breaks in Scaling in the Earthquake Cumulative
Frequency-Magnitude Distribution: Fact or Artifact?, Bul. Seismol. Soc. Am. 90, 86-97
Molnar, P. (1979), Earthquake recurrence intervals and plate tectonics, Bull. Seismol.
Soc. Am. 69, 115-133
Nalbant, S.S., Hubert, A. & King, G.C.P. (1998), Stress coupling between
earthquakes in northwest Turkey and the north Aegean Sea, J. Geophys. Res. 103,
24,369-24,486
Omori, F. (1894), On the aftershocks of earthquakes, J. Coll. Sci. Imp. Univ. Tokyo 7,
111-120
Parsons, T. (2002), Global Omori law of triggered earthquakes: Large aftershocks
outside the classical aftershock zone, J. Geophys. Res. 107, art. no. 2199
Reiter, L. (1990), Earthquake Hazard Analysis, Columbia University Press
Scholz, C.H. & Gupta, A. (2000), Fault interactions and seismic hazard, Journal of
Geodynamics 29, 459-467
Schwartz, S.Y. (1999), Noncharacteristic behaviour and complex recurrence of large
subduction zone earthquakes, J. Geophys. Res. 104, 23,111-23,125
Shearer, P.M. (1999), Introduction to Seismology, Cambridge University Press
Smith, D.E. et al. (1990), Tectonic Motion and Deformation From Satellite Laser
Ranging to LAGEOS, J. Geophys. Res. 95, 22,013-22,041
Smith, W.D. (1998), Resolution and significance assessment of precursory changes in
mean earthquake magnitudes, Geophys. J. Int. 135, 515-522
Stein, R.S., Barka, A.A. & Dieterich, J.H. (1997), Progressive failure on the North
Anatolian fault since 1939 by earthquake stress triggering, Geophys. J. Int. 128, 594-604
Stein, R.S. (1999), The role of stress transfer in earthquake occurrence, Nature 402,
605-609
Stein, S. & Wysession, M. (2003), An Introduction to Seismology, Earthquakes, and
Earth Structure, Blackwell Publishing
Toda, S., Stein, R.S., Reasenberg, P.A., Dieterich, J.H. & Yoshida, A (1998), Stress
transferred by the Mw=6.9 Kobe, Japan, shock: Effect on aftershocks and future
earthquake probabilities, J. Geophys. Res. 103, 24,543-24,565
Vine, F.J. & Matthews, D.H. (1963), Magnetic anomalies over ocean ridges, Nature
199, 947-949
Young, J.B. et al. (1996), The Flinn-Engdahl Regionalisation Scheme: the 1995
revision, Physics of the Earth and Planetary Interiors 96, 223-297
9 APPENDICES
APPENDIX I: ORIGINAL PROJECT PROPOSAL
Prediction of global earthquake occurrence rates for estimation of seismic hazard.
Professor Ian Main and Professor Roy Thompson
[email protected]
Global satellite data have confirmed that the motion of the earth’s plates in the last 10
years matches that from 3Ma of Palaeomagnetic data. Thus tectonic forcing for
earthquakes is remarkably constant in time. Consequently, deformation rates can
provide strong constraints on earthquakes. By contrast, extrapolations from short-term
earthquake catalogues are likely to under-report the largest events. The key to
connecting seismic and tectonic deformation is the seismic moment tensor. The scalar
seismic moment is the product of the earth’s rigidity, the rupture area, and the average
slip on the source fault. The seismic moment tensor is then the product of the scalar
moment with a unit tensor describing the orientation of the source and its type. Data
since 1977 are available on the world-wide web.
The main aim of this project is to predict the recurrence rates for the largest, most
damaging earthquakes. Recurrence rates are strongly constrained by effects such as
finite moment release rates, and/or the finite width of the brittle crust. Therefore,
frequency-magnitude data will be used to pick the simplest finite-size model, from a
range of physically plausible candidates. Second, the seismic and tectonic
deformation rates will be compared, and the former used to constrain extrapolations
from the available frequency-magnitude data to longer timescales. This has previously
been carried out for only one of the possible forms for this distribution. Here the data
will be allowed to determine the best finite-size distribution in an objective way. For
this comparison, seismic moment tensors will be added to predict local seismic deformation rates in three dimensions, and these compared with local tectonic deformation rates in three dimensions. The combined results will be of practical use in predicting the time-independent component of future global seismic hazard.
Finally, how important is a time-dependent component in seismic hazard? The recent earthquake in Izmit, Turkey, was closely followed by a large triggered event near Düzce, also on the Anatolian fault. The student will investigate the clustering
properties of earthquakes in time and space, on a global scale. This will complement
recent work where only the source orientation was considered. The results will help to
predict the location, fault orientation, and probability of a triggered event, as a
function of time after the source event. The student will be trained in modern
techniques of seismology and tectonics, including forward modelling, estimation of
seismic hazard, fundamental statistics, and the use of satellite based geodetic data.
Computer programming will combine and modify existing computer codes for the
study.
Main, I.G., et al (1999). One slope or two? Detecting statistically-significant breaks
of slope in geophysical data, with application to fracture scaling relationships.
Geophys. Res. Lett. 26, 2801-2804.
Main, I.G., D. Irving, R. Musson & A. Reading (1999). Constraints on the frequency-magnitude relation and maximum magnitudes in the UK from observed seismicity
and glacio-isostatic recovery rates. Geophys. J. Int. 137, 535-550.
Main, I.G., O’Brien & J. Henderson (2000). Statistical physics of earthquakes in a
spring-block-slider model: a comparison of exponents in the frequency distributions
for source rupture area and local strain energy. J. Geophys. Res. 105, 6105-6126.
APPENDIX II: STUDENT TRAINING RECORD
1 - Transferable Skills courses attended
(a)
Computing Courses, including
Unix 1, Unix 2, Unix 3 (Operating system - use of shell programming)
Fortran 90, C, Perl (Programming languages)
Fundamental Concepts of High Performance Computing, Shared Memory
Programming (EPCC parallel programming courses)
S-Plus (Statistical package)
HTML, More HTML
LATEX (mark-up language for preparation of documents)
(b)
Effective Presentations
(c)
Time Management
2 - Demonstration Experience
Demonstrated assorted Geophysics 2 practical classes and fieldwork, Geology 3
Maths, Geophysics 4 computing, and demonstrated electromagnetic methods on 2002
Geophysics 4 field course in Belgium.
3 - Administrative Experience
Member of 2002-2003 Gradschool committee, maintained Gradschool website and
represented postgraduates on Department Computer Committee meetings. Helped
organise 2003 Gradschool Conference and produced Abstract volume and website for
conference.