
25 February 1999
Is the reliable prediction of individual earthquakes a realistic
scientific goal?
IAN MAIN
The recent earthquake in Colombia (Fig. 1) has once again illustrated to the general public the
inability of science to predict such natural catastrophes. Despite the significant global effort that has
gone into the investigation of the nucleation process of earthquakes, such events still seem to strike
suddenly and without obvious warning. Not all natural catastrophes are so apparently unpredictable,
however.
Figure 1 Devastation caused by the recent earthquake in Colombia.
For example, the explosive eruption of Mount St Helens in 1980 was preceded by visible ground
deformation of up to 1 metre per day, by eruptions of gas and steam, and by thousands of small earthquakes,
culminating in the magnitude 5 event that finally breached the carapace. In this example, nearly two decades
ago now, the general public had been given official warning of the likelihood of such an event, on a
timescale of a few months. So, if other sudden onset natural disasters can be predicted to some degree, what
is special about earthquakes? Why have no unambiguous, reliable precursors been observed, as they
commonly are in laboratory tests (see, for example, Fig. 2)? In the absence of reliable, accurate prediction
methods, what should we do instead? How far should we go in even trying to predict earthquakes?
Figure 2 Comparison of laboratory21 and field22 measurements of precursory strain (solid
lines).
The idea that science cannot predict everything is not new; it dates back to the 1755 Great Lisbon
earthquake, which shattered contemporary European belief in a benign, predictable Universe1. In the
eighteenth century 'Age of Reason', the picture of a predictable Universe1 was based on the spectacular
success of linear mathematics, such as Newton's theory of gravitation. The history of science during this
century has to some extent echoed this earlier debate. Theories from the earlier part of the century, such as
Einstein's relativity, and the development of quantum mechanics, were found to be spectacularly, even
terrifyingly, successful when tested against experiment and observation. Such success was mirrored in the
increasing faith that the general public placed in science. However, the century is closing with the gradual
realization by both practitioners and the general public that we should not expect scientific predictions to be
infallible. Even simple nonlinear systems can exhibit 'chaotic' behaviour, whereas more 'complex' nonlinear
systems, with lots of interacting elements, can produce remarkable statistical stability while retaining an
inherently random (if not completely chaotic) component2. The null hypothesis to be disproved is not that
earthquakes are predictable, but that they are not.
The question to be addressed in this debate is whether the accurate, reliable prediction of individual
earthquakes is a realistic scientific goal, and, if not, how far should we go in attempting to assess the
predictability of the earthquake generation process? Recent research and observation have shown that the
process of seismogenesis is not completely random — earthquakes tend to be localized in space, primarily on
plate boundaries, and seem to be clustered in time more than would be expected for a random process. The
scale-invariant nature of fault morphology, the earthquake frequency-magnitude distribution, the
spatiotemporal clustering of earthquakes, the relatively constant dynamic stress drop, and the apparent ease
with which earthquakes can be triggered by small perturbations in stress are all testament to a degree of
determinism and predictability in the properties of earthquake populations3,4. The debate here centres on the
prediction of individual events.
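The best known of these population-level regularities is the Gutenberg-Richter frequency-magnitude relation, log10 N(>=M) = a - bM, with the b-value typically close to 1. As a purely illustrative sketch (the a and b values below are hypothetical, not taken from any particular catalogue), the following Python snippet shows how such a relation turns a regional activity level into expected annual rates, and hence mean recurrence intervals, for larger magnitudes.

```python
import math

# Hypothetical Gutenberg-Richter parameters for an illustrative region:
# log10 N(>=M) = a - b*M, where N is the annual number of events with
# magnitude >= M.  A b-value near 1 is typical; the a-value fixes the rate.
a, b = 4.0, 1.0  # assumed values, for illustration only

def annual_rate(magnitude):
    """Expected number of events per year with magnitude >= `magnitude`."""
    return 10.0 ** (a - b * magnitude)

for m in (4.0, 5.0, 6.0, 7.0):
    rate = annual_rate(m)
    print(f"M >= {m:.1f}: {rate:8.3f} events/yr "
          f"(mean recurrence ~ {1.0 / rate:7.1f} yr)")
```

The point is simply that rates and recurrence intervals for the population are well constrained by such scaling laws; saying when and where the next individual magnitude 7 event will strike is an entirely different matter, and that is what the rest of this debate is about.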
For the purposes of this debate, we define a sliding scale of earthquake 'prediction' as follows.
1. Time-independent hazard. We assume that earthquakes are a random (Poisson) process in time, and
use past locations of earthquakes, active faults, geological recurrence times and/or fault slip rates
from plate tectonic or satellite data to constrain the future long-term seismic hazard5. We then
calculate the likely occurrence of ground-shaking from a combination of source magnitude
probability with path and site effects, and include a calculation of the associated errors. Such
calculations can also be used in building design and planning of land use, and for the estimation of earthquake insurance (a minimal numerical sketch follows this list).
2. Time-dependent hazard. Here we accept a degree of predictability in the process, in that the seismic
hazard varies with time. We might include linear theories, where the hazard increases after the last
previous event6, or the idea of a 'characteristic earthquake' with a relatively similar magnitude,
location and approximate repeat time predicted from the geological dating of previous events7.
Surprisingly, the tendency of earthquakes to cluster in space and time includes the possibility of a
seismic hazard that actually decreases with time8. This would allow the refinement of hazard to
include the time and duration of a building's use as a variable in calculating the seismic risk.
3. Earthquake forecasting. Here we would try to predict some of the features of an impending
earthquake, usually on the basis of the observation of a precursory signal. The prediction would still
be probabilistic, in the sense that the precise magnitude, time and location might not be given
precisely or reliably, but that there is some physical connection above the level of chance between the
observation of a precursor and the subsequent event. Forecasting would also have to include a precise
statement of the probabilities and errors involved, and would have to demonstrate more predictability
than the clustering referred to in time-dependent hazard. The practical utility of this would be to
enable the relevant authorities to prepare for an impending event on a timescale of months to weeks.
Practical difficulties include identifying reliable, unambiguous precursors9-11, and the acceptance of
an inherent proportion of missed events or false alarms, involving evacuation for up to several
months at a time, resulting in a loss of public confidence.
4. Deterministic prediction. Earthquakes are inherently predictable. We can reliably know in advance
their location (latitude, longitude and depth), magnitude, and time of occurrence, all within narrow
limits (again above the level of chance), so that a planned evacuation can take place.
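To make level 1 concrete, the following sketch converts a long-term annual rate (of the kind obtainable from a Gutenberg-Richter fit such as the one above) into the Poisson probability of at least one event during the exposure time of a structure, P = 1 - exp(-rate x time). This is only the source-occurrence part of a real hazard calculation; ground-motion attenuation, site response and the associated uncertainties are omitted, and the rate used is an assumed value.

```python
import math

def poisson_exceedance(annual_rate, exposure_years):
    """Probability of at least one event during `exposure_years`, assuming
    occurrence is a Poisson process with a constant `annual_rate`."""
    return 1.0 - math.exp(-annual_rate * exposure_years)

# Illustrative numbers only: an assumed long-term rate of 0.01 events/yr
# (one per century on average) for the design event, and exposure times
# ranging from one year to the nominal life of a structure.
rate = 0.01
for t in (1, 50, 100):
    print(f"t = {t:3d} yr: P(at least one event) = {poisson_exceedance(rate, t):.3f}")
```

With these illustrative numbers, a one-per-century source gives roughly a 39% chance of at least one event during a 50-year building life. Note that under the Poisson assumption this probability is the same no matter how long ago the last event occurred; relaxing exactly that assumption is what distinguishes level 2.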
Time-independent hazard has now been standard practice for three decades, although new information from
geological and satellite data is increasingly being used as a constraint. In contrast, few seismologists would
argue that deterministic prediction as defined above is a reasonable goal in the medium term, if not for
ever12. In the USA, the emphasis has long since shifted to a better fundamental understanding of the
earthquake process, and to an improved calculation of the seismic hazard, apart from an unsuccessful
attempt to monitor precursors to an earthquake near Parkfield, California, which failed to materialize on
time. In Japan, particularly in the aftermath of the Kobe earthquake in 1995, there is a growing realization
that successful earthquake prediction might not be realistic13. In China, thirty false alarms have brought
power lines and business operations to a standstill in the past three years, leading to recent government plans
to clamp down on unofficial 'predictions'14.
So, if we cannot predict individual earthquakes reliably and accurately with current knowledge15-20, how far
should we go in investigating the degree of predictability that might exist?
Ian Main
Department of Geology and Geophysics, University of Edinburgh, Edinburgh, UK
References
1. Voltaire, Candide (Penguin, London, 1997, first published 1759).
2. Bak, P. How Nature Works: The Science of Self-organised Criticality (Oxford Univ. Press, 1997).
3. Turcotte, D.L. Fractals and Chaos in Geology and Geophysics (Cambridge Univ. Press, 1991).
4. Main, I., Statistical physics, seismogenesis and seismic hazard, Rev. Geophys. 34, 433-462 (1996).
5. Reiter, L. Earthquake Hazard Analysis (Columbia Univ. Press, New York, 1991).
6. Shimazaki, K. & Nakata, T., Time-predictable recurrence model for large earthquakes, Geophys. Res. Lett. 7, 279-283 (1980).
7. Schwartz, D.P. & Coppersmith, K.J., Fault behavior and characteristic earthquakes: Examples from
the Wasatch and San Andreas fault systems, J. Geophys. Res. 89, 5681-5696 (1984).
8. Davis, P.M., Jackson, D.D. & Kagan, Y.Y., The longer it has been since the last earthquake, the longer
the expected time till the next?, Bull. Seism. Soc. Am. 79, 1439-1456 (1989).
9. Wyss, M., Second round of evaluation of proposed earthquake precursors, Pure Appl. Geophys. 149,
3-16 (1997).
10. Campbell, W.H. A misuse of public funds: UN support for geomagnetic forecasting of earthquakes
and meteorological disasters, Eos Trans. Am. Geophys. Union 79, 463-465 (1998).
11. Scholz, C.H. The Mechanics of Earthquakes and Faulting (Cambridge Univ. Press, 1990).
12. Main, I., Earthquakes - Long odds on prediction, Nature 385, 19-20 (1997).
13. Saegusa, A., Japan tries to understand quakes, not predict them, Nature 397, 284 (1999).
14. Saegusa, A., China clamps down on inaccurate warnings, Nature 397, 284 (1999).
15. Macelwane, J.B., Forecasting earthquakes, Bull. Seism. Soc. Am. 36, 1-4 (1946).
16. Turcotte, D.L., Earthquake prediction, A. Rev. Earth Planet. Sci. 19, 263-281 (1991).
17. Snieder, R. & van Eck, T., Earthquake prediction: a political problem?, Geol. Rdsch. 86, 446-463
(1997).
18. Jordan, T.H., Is the study of earthquakes a basic science?, Seismol. Res. Lett. 68, 259-261 (1997).
19. Evans, R., Assessment of schemes for earthquake prediction: editor's introduction, Geophys. J. Int.
131, 413-420 (1997).
20. Geller, R.J., Earthquake prediction: a critical review, Geophys. J. Int. 131, 425-450 (1997).
21. Main, I.G., Sammonds P.R. & Meredith, P.G., Application of a modified Griffith criterion to the
evolution of fractal damage during compressional rock failure, Geophys. J. Int. 115, 367-380 (1993).
22. Argus, D. & Lyzenga, G.A., Site velocities before and after the Loma Prieta and the Gulf of Alaska
earthquakes determined from VLBI, Geophys. Res. Lett. 21, 333-336 (1994).
Earthquake prediction: is this debate necessary?
ROBERT J. GELLER
Because large earthquakes release huge amounts of energy, many researchers have thought that there
ought to be some precursory phenomena that could be consistently observed and identified, and used
as the basis for making reliable and accurate predictions. Over the past 100 years, and particularly
since 1960, great efforts, all unsuccessful, have been made to find such hypothetical precursors. For
further details see my review1, which includes eight pages of references (in 6-point type, to save space)
to this vast body of work.
The public, media, and government regard an 'earthquake prediction' as an alarm of an imminent large
earthquake, with enough accuracy and reliability to take measures such as the evacuation of cities.
'Prediction' is used exclusively in the above sense here; in other words, longer-term forecasts of seismic
hazards or statistical forecasts of aftershock probabilities are not classified as predictions.
Three obvious questions arise:
1. What pitfalls have undermined prediction research?
2. Why are earthquakes so difficult to predict?
3. Why is prediction still being discussed?
These questions are answered below.
Most earthquake prediction research is empirical, featuring the 'case-study' approach. After a large
earthquake, data of all sorts are examined retrospectively in the hope of finding a precursory signal. Workers
reporting candidate precursors frequently set up observatories to look for similar signals before future
earthquakes.
Empiricism should not necessarily be dismissed out of hand, as it has led to many important scientific
discoveries. However, as noted by E.B. Wilson, without proper controls the empirical approach can lead to
absurd conclusions, for example that the beating of tom-tom drums will restore the Sun after an eclipse. Lack
of controls is one of the main problems that has dogged the search for precursors.
Another chronic problem is attributing 'anomalous' signals to earthquakes before considering more plausible
explanations. One research group has repeatedly claimed to be observing electrical precursors of earthquakes
(and even managed to get relatively favourable publicity in Nature's news columns2,3), but it now seems
likely that the signals are noise due to nearby digital radio-telecommunications transmitters, and are
unrelated to earthquakes4.
Rigorous statistical analyses are rarely performed by prediction researchers, leading to a plethora of marginal
claims. There are two main problems. First, most precursor claims involve retrospective studies, and it is
easy to 'tune' parameters after the fact to produce apparently significant correlations that are actually bogus5.
Second, earthquakes are clustered in space and time, and spuriously high levels of statistical significance can
easily be obtained unless appropriate null hypotheses are used6,7.
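The retrospective-tuning problem is easy to demonstrate with synthetic data. The sketch below is entirely artificial and refers to no real precursor claim: it generates a noise 'anomaly' series and random 'earthquake' times that are independent by construction, scans over detection thresholds and alarm windows after the fact, and reports the best-scoring combination together with the success rate expected by chance for those same parameters.

```python
import math
import random

random.seed(1)

DAYS = 3650                                             # ten years of daily data
quakes = sorted(random.sample(range(DAYS), 20))         # 20 random 'earthquake' days
signal = [random.gauss(0.0, 1.0) for _ in range(DAYS)]  # independent noise 'precursor'

def hit_rate(threshold, window):
    """Fraction of earthquakes preceded, within `window` days, by an 'anomaly'
    above `threshold`; any association is spurious by construction."""
    hits = sum(1 for q in quakes
               if any(signal[d] > threshold for d in range(max(0, q - window), q)))
    return hits / len(quakes)

def chance_rate(threshold, window):
    """Hit rate expected by chance alone for the same parameters."""
    p_exceed = 0.5 * math.erfc(threshold / math.sqrt(2.0))
    return 1.0 - (1.0 - p_exceed) ** window

# Retrospective scan over the free parameters, keeping whatever scores best.
best = max(((hit_rate(t, w), t, w) for t in (1.0, 1.5, 2.0, 2.5)
                                   for w in (3, 7, 14, 30)),
           key=lambda x: x[0])
rate, threshold, window = best
print(f"tuned 'success' rate: {rate:.0%} (threshold {threshold}, {window}-day window)")
print(f"rate expected by chance with those parameters: {chance_rate(threshold, window):.0%}")
```

The tuned 'success' rate and the chance expectation coincide, which is the point: without a hypothesis fixed in advance and a null hypothesis that reflects how the parameters were chosen (and, for real catalogues, how earthquakes cluster), a retrospective hit rate carries no evidential weight.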
Why is prediction so difficult? This question cannot be answered conclusively, as we do not yet have a
definitive theory of the seismic source. The Earth's crust (where almost all earthquakes occur) is highly
heterogeneous, as is the distribution of strength and stored elastic strain energy. The earthquake source
process seems to be extremely sensitive to small variations in the initial conditions (as are fracture and
failure processes in general). There is complex and highly nonlinear interaction between faults in the crust,
making prediction yet more difficult. In short, there is no good reason to think that earthquakes ought to be
predictable in the first place. A few laboratory failure experiments might seem to suggest otherwise, but they
are conducted on a limited scale and do not replicate the complex and heterogeneous conditions of the
problem in situ.
If reliable and accurate prediction is impossible now and for the foreseeable future, why is it being debated
on Nature's web site? The answer seems to be sociological rather than scientific. Certain research topics are
fatally attractive to both scientists and the general public, owing to the combination of their extreme
difficulty and great potential reward. No less a scientist than Sir Isaac Newton regarded alchemy (the
transmutation of elements by chemical reactions) as his primary research field. His continued failures drove
him to despair, and led him to give up science for a sinecure as Master of the Mint. Sir Isaac's failures
notwithstanding, alchemy continued to attract the fruitless efforts of talented scientists for another 100 years.
Earthquake prediction seems to be the alchemy of our times.
The examples of alchemy and perpetual motion machines show that the only way to 'prove' something is
impossible is by developing a satisfactory theory of the underlying phenomenon (nuclear physics and
thermodynamics, respectively). No satisfactory theory of the earthquake source process exists at present.
Further work should be encouraged, but it will probably lead to a better understanding of why prediction is
effectively impossible rather than to effective methods for prediction.
Governments in many countries have awarded lavish funding for work on earthquake prediction1. Such
funding frequently greatly exceeds what is available through normal peer-reviewed channels for even highly
meritorious work. It is regrettable that this disparity sometimes induces reputable scientists to label their
work as 'earthquake prediction research' to get a share of such funding.
In view of the bleak prospects, there is no obvious need for specialized organizations and research
programmes for prediction. Researchers in this area should seek funding through normal peer-reviewed
channels (such as the NSF in the USA), in competition with all other research in earthquake science. This
would probably lead to an almost complete phasing out of prediction research, not because of censorship but
rather owing to the poor quality of most present work in this field. Of course meritorious prediction
proposals (if any exist) should be funded.
More importantly, meritorious work on estimating long-term seismic hazards, real-time seismology and
improving design standards for earthquake-resistant construction should be funded, along with basic research
and the operation of observational networks, as the key components in an integrated programme of
seismological research.
Now that prediction research is under pressure in many countries, including Japan8, some prediction
proponents might seek to reposition their work as one component of such an integrated research programme
for the reduction of seismic hazards. However, in view of the goals and methods of prediction research, this
seems unwarranted.
Under the ground rules of this debate, participants are not allowed to see other contributions before
publication. However, unlike earthquakes themselves, the arguments used by prediction proponents are
eminently predictable. See box for a rebuttal of some of the arguments that are likely to be used by the other
side in this debate.
The sad history of earthquake prediction research teaches us a lesson that we should already have learned
from cold fusion, polywater and similar debacles. Namely, the potential importance of a particular research
topic should not induce a lowering of scientific standards. In the long run (and in the short run too), science
progresses when rigorous research methodology is followed.
Robert J. Geller
Department of Earth and Planetary Physics,
Graduate School of Science,
Tokyo University,
Bunkyo, Tokyo 113-0033,
Japan.
[email protected]
References
1. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
2. Masood, E. Greek earthquake stirs controversy over claims for prediction method. Nature 375, 617
(1995).
3. Masood, E. Court charges open split in Greek earthquake experts. Nature 377, 375 (1995).
4. Pham, V.N., Boyer, D., Chouliaras, G., LeMouël, J., Rossignol, J.C. & Stavrakakis, G.
Characteristics of electromagnetic noise in the Ioannina region (Greece); a possible origin for so
called 'seismic electric signals' (SES). Geophys. Res. Lett. 25, 2229-2232 (1998).
5. Mulargia, F. Retrospective validation of the time association of precursors. Geophys. J. Int. 131, 500-504 (1997).
6. Kagan, Y. VAN earthquake predictions: an attempt at statistical evaluation. Geophys. Res. Lett. 23,
1315-1318 (1996).
7. Stark, P.B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, 495-499 (1997).
8. Saegusa, A. Japan to try to understand quakes, not predict them. Nature 397, 284 (1999).
9. Frisch, U. Turbulence (Cambridge Univ. Press, Cambridge, 1995).
10. Abercrombie, R.E. & Mori, J. Occurrence patterns of foreshocks to large earthquakes in the western
United States. Nature 381, 303-307 (1996).
Box
1. Claim: Prediction efforts in other fields, such as weather prediction, have slowly made progress in the
face of great difficulties. Why should earthquake prediction not be the same?
Facts: Aside from the word 'prediction', forecasts of the evolution of large-scale weather systems
have almost nothing in common with predicting the occurrence of large earthquakes. A better
analogy from fluid mechanics is turbulence, a field in which quantitative prediction has proved to be
as difficult as earthquake prediction9.
2. Claim: Many large earthquakes are preceded by foreshocks. If foreshocks are the result of some
'preparatory process' that precedes large earthquakes, it ought to be possible to monitor this process,
and thereby predict earthquakes.
Facts: 'Foreshocks' can be identified retrospectively; they are small earthquakes that happened to
precede nearby larger earthquakes. However, no way to distinguish foreshocks prospectively from
other small earthquakes has ever been found10. The fact that foreshocks occur under almost exactly
the same conditions as subsequent large earthquakes that release orders of magnitude more energy
suggests that large earthquakes do not have a predetermined preparatory process, but rather that any
small earthquake has some probability of cascading into a large event. Whether or not this happens
seems to depend on fine and unmeasurable details of the initial conditions, thereby underscoring the
difficulty of prediction.
3. Claim: Other research (such as Columbus's proposal for a voyage of exploration, efforts to build
motorized aircraft and Fulton's steamboat) was claimed by critics to be impossible, just as earthquake
prediction critics are doing now. Why should we let such negativism stop us from working on
earthquake prediction?
Facts: This is an obvious non sequitur. Each case must be considered on its merits. And there are just
as many examples, if not more, where the critics were right.
4. Claim: Earthquake prediction research is just like cancer research. No matter how difficult the
problem may be, we have to work on it.
Facts: If we want to minimize deaths and damage due to earthquakes, the best way is through
soundly based efforts at hazard mitigation. Issuing inaccurate and unreliable earthquake predictions
would be harmful, not helpful, to society.
5. Claim: We simply do not have enough data now to say whether or not earthquakes are predictable.
We need a new programme of observations (that would only cost a few cents per day for every man,
woman and child on the planet) to find earthquake precursors.
Facts: Enormous efforts have been made to find precursors, with no success, despite great
improvements in technology for geophysical observations. It is safe to say there are no glaringly
obvious consistent and reliable precursors that could be used as the basis for issuing alarms.
6. Claim: The critics are wrong to say earthquake prediction is impossible, because my methods allow
me to make successful earthquake predictions right now, and I have a long list of earthquakes I have
already successfully predicted.
Facts: Many people continue to make such claims, but all are groundless1. In essence all claims of
this type are based on issuing large numbers of vague 'predictions', waiting for earthquakes to
happen, and then stating, a posteriori, 'that's the quake I predicted'. None of the various methods in
this category has ever been rigorously shown to outperform a sensible null hypothesis, although
statistical studies with 'straw-man' null hypotheses sometimes make such claims.
Not yet, but eventually
MAX WYSS
Unfortunately, it is typical for debates about earthquake prediction research to be based in part on
incorrect assertions1,2. The first two sentences in the moderator's introduction follow this tradition.
Contrary to his suggestion, the recent earthquake in Colombia has done nothing to show the inability
or ability of science to predict earthquakes, because this problem simply has not been studied in
Colombia.
Significant global effort?
Also, to say that a "significant global effort [...] has gone into the investigation of the nucleation process of
earthquakes" does not present the correct picture. When the USA or Great Britain seriously mean to solve a
problem, they put several orders of magnitude more resources into it, as they have done during the past few
decades in detecting underground nuclear explosions and discriminating between them and natural seismic
phenomena. The massive effort of rigorous science that is actually needed if we want to understand the
nucleation process of earthquakes is not made in any country.
Do all earthquakes strike suddenly?
The next statement in the introduction, that "earthquakes [...] appear to strike suddenly," is a phrase used by
people who wish to argue that earthquake prediction is impossible; however, it does not reflect the facts
correctly. We should not argue about the facts in scientific discussions. Yet the small group of scientists who
argue that earthquake prediction is impossible1,2, as well as those advocating the view that the problem of
earthquake prediction is already solved3, often distort the facts.
The fact is that many main shocks do not occur "suddenly": 10-30% of them are preceded by foreshocks
during the week before their occurrence4-8; some are preceded by year-long pre-activity9; some are preceded
by increased moment release during the years before them10-14, and some are preceded by seismic
quiescence15-17. On the basis of these seismicity patterns, some earthquakes have been predicted correctly18-24
and one case has been predicted, but with incorrect parameters25-27.
What is a successful prediction?
By defining a useful earthquake prediction as one on the basis of which "a planned evacuation can take place", the
moderator sets up the rejection of the idea that earthquake prediction should be studied, because it allows
him to make the misleading statement that "few seismologists would argue that deterministic prediction as
defined above [my italics] is a reasonable goal". The well-known element of randomness in the initiation of a
large rupture28, which comes into play at the very end of an energy loading cycle, foils the use of short
prediction windows in most cases. Nevertheless, many benefits derive from predictions that have time
windows of up to several years. These have been spelled out repeatedly. Hence, most seismologists would
agree that any well-formulated and well-founded prediction is useful.
Advances of science might be unexpected
When the "standard practice" of "time-independent hazard" estimates is held up in the introduction as solid,
well-established science, this is in the mainstream of engineering seismological opinion, but some serious
shortcomings of this method have recently been documented. The existence of precariously balanced rocks
near active faults in the Western USA29-32 shows clearly that accelerations calculated by the "standard
practice" are overestimated in many locations.
Also, we are developing evidence that asperities are the only parts of faults containing information about the
frequency of main shocks, and that a new method of estimating local recurrence time might correct the
flawed estimates by the "standard practice," which relies in part on information from creeping fault segments
that are not capable of generating main shocks33,34. My point is that, regardless of how well accepted or
attacked some research fields and methods are, curious human beings will always investigate further and
eventually come up with advances of our knowledge, including unexpected rejection of standard practice,
and will arrive at solutions to problems hitherto thought by some to be unsolvable.
Main problems plaguing prediction research
The problems plaguing earthquake prediction research on which we should focus, in my opinion, are (a) the
improvement, or elimination from journals, of scientifically weak work35-37, and (b) the exposure of work
that contains errors38,39 and statements made by scientifically unqualified publicity seekers. Unfortunately,
human psychology is such that hasty workers and true believers will always mess around with the problem of
earthquake prediction that fascinates them. Therefore, we must learn how to conduct rigorous, quantitative
prediction research in spite of the distractions generated by unqualified people.
We have not yet arrived at this point. Currently, funding for earthquake prediction research in most Western
countries is puny because it is considered a 'hot potato' by most funding agencies and many peer reviewers.
The future of earthquake prediction research
So, what about the future of studying the earthquake failure process applied to possible prediction? I am
pessimistic about the near future and optimistic about the long term. It seems that we are destined to hear
more invalid statements in the debate about the value of earthquake prediction research. However, there can
be no doubt that a preparatory process to earthquake rupture exists (foreshocks demonstrate this), and I am
confident that ingenious and resilient people, who will come after us and will be amused by this tempest in a
teapot about the prediction of earthquakes, will eventually improve our ability to predict some earthquakes in
favourable areas, although not often with time windows as short as demanded by the moderator.
Max Wyss
Geophysical Institute, University of Alaska, Fairbanks, Alaska, USA
References
1. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science
275, 1616-1617 (1997).
2. Wyss, M. Cannot earthquakes be predicted? Science 278, 487-488 (1997).
3. Varotsos, P., Eftaxias, K., Vallianatos, F. & Lazaridou, T. Basic principles for evaluating an
earthquake prediction method. Geophys. Res. Lett. 23, 1295-1298 (1996).
4. Jones, L.M. Foreshocks (1966-1980) in the San Andreas system, California. Bull. Seism. Soc. Am. 74,
1361-1380 (1984).
5. Shibazaki, B. & Matsu'ura, M. Foreshocks and pre-events associated with the nucleation of large
earthquakes. Geophys. Res. Lett. 22, 1305-1308 (1995).
6. Console, R. & Murru, M. Probability gain due to foreshocks following quiescence tested by synthetic
catalogs. Bull. Seism. Soc. Am. 86, 911-913 (1996).
7. Maeda, K. The use of foreshocks in probabilistic prediction along the Japan and Kuril Trenches. Bull.
Seism. Soc. Am. 86, 242-254 (1996).
8. Ogata, Y., Utsu, T. & Katsura, K. Statistical discrimination of foreshocks from other earthquake
clusters. Geophys. J. Int. 127, 17-30 (1996).
9. Bowman, J.R. A seismic precursor to a sequence of Ms 6.3-6.7 midplate earthquakes in Australia.
Pure Appl. Geophys 149, 61-78 (1997).
10. Varnes, D.J. Predicting earthquakes by analyzing accelerating precursory seismic activity. Pure Appl.
Geophys 130, 661-686 (1989).
11. Bufe, C.G., Nishenko, S.P. & Varnes, D.J. Seismicity trends and potential for large earthquakes in the
Alaska-Aleutian region. Pure Appl. Geophys 142, 83-99 (1994).
12. Bufe, C.G. & Varnes, D.J. Time-to-failure in the Alaska-Aleutian region: an update. Eos 77, F456
(1996).
13. Sykes, L.R. & Jaume, S.C. Seismic activity on neighboring faults as a long-term precursor to large
earthquakes in the San Francisco Bay area. Nature 348, 595-599 (1990).
14. Bowman, D.D., Ouillon, G., Sammis, C.G., Sornette, A. & Sornette, D. An observational test of the
critical earthquake concept. J. Geophys. Res. 103, 24359-24372 (1998).
15. Wiemer, S. & Wyss, M. Seismic quiescence before the Landers (M=7.5) and Big Bear (M=6.5) 1992
earthquakes. Bull. Seism. Soc. Am. 84, 900-916 (1994).
16. Wyss, M., Shimazaki, K. & Urabe, T. Quantitative mapping of a precursory quiescence to the Izu-Oshima 1990 (M6.5) earthquake, Japan. Geophys. J. Int. 127, 735-743 (1996).
17. Wyss, M. & Martyrosian, A.H. Seismic quiescence before the M7, 1988, Spitak earthquake,
Armenia. Geophys. J. Int. 134, 329-340 (1998).
18. Dieterich, J.H. et al. Probabilities of Large Earthquakes in the San Francisco Bay Region, California
(U. S. Geol. Surv. Circular 1053, Washington, DC, 1990).
19. Sykes, L.R. & Nishenko, S.P. Probabilities of occurrence of large plate rupturing earthquakes for the
San Andreas, San Jacinto, and Imperial faults, California. J. Geophys. Res. 89, 5905-5927 (1984).
20. Scholz, C.H. The Black Mountain asperity: seismic hazard of the southern San Francisco peninsula,
California. Geophys. Res. Lett. 12, 717-719 (1985).
21. Nishenko, S.P. et al. 1996 Delarof Islands earthquake-a successful earthquake forecast/prediction?
Eos 77, F456 (1996).
22. Wyss, M. & Burford, R.O. Current episodes of seismic quiescence along the San Andreas Fault
between San Juan Bautista and Stone Canyon, California: Possible precursors to local moderate main
shocks. U.S. Geol. Survey Open-File Rep. 85-754, 367-426 (1985).
23. Wyss, M. & Burford, R.O. A predicted earthquake on the San Andreas fault, California. Nature 329,
323-325 (1987).
24. Kossobokov, V.G., Healy, J.H. & Dewey, J.W. Testing an earthquake prediction algorithm. Pure
Appl. Geophys 149, 219-248 (1997).
25. Kisslinger, C. An experiment in earthquake prediction and the 7 May 1986 Andreanof Islands
earthquake. Bull. Seism. Soc. Am. 78, 218-229 (1988).
26. Kisslinger, C., McDonald, C. & Bowman, J.R. in IASPEI, 23rd General Assembly (Tokyo, Japan) 32
(1985).
27. Kisslinger, C. in Meeting of the National Earthquake Prediction Evaluation Council (Anchorage,
Alaska) 119-134 (U.S. Geol. Surv. Open-file Rep. 86-92, 1986).
28. Brune, J.N. Implications of earthquake triggering and rupture propagation for earthquake prediction
based on premonitory phenomena. J. Geophys. Res. 84, 2195-2198 (1979).
29. Brune, J.N. Precariously balanced rocks and ground-motion maps for Southern California. Bull.
Seism. Soc. Am. 86, 43-54 (1996).
30. Bell J.W., Brune, J.N., Liu, T., Zreda, M. & Yount, J.C. Dating precariously balanced rocks in
seismically active parts of California and Nevada. Geology 26, 495-498 (1998).
31. Brune, J.N., Precarious rocks along the Mojave section of the San Andreas fault, California:
constraints on ground motion from great earthquakes. Seism. Res. Letts 70, 29-33 (1999).
32. Brune, J.N., Bell, J.W. & Anooshehpoor, A. Precariously balanced rocks and seismic risk. Endeavour
N.S. 20, 168-172 (1996).
33. Wiemer, S. & Wyss, M. Mapping the frequency-magnitude distribution in asperities: An improved
technique to calculate recurrence times? J. Geophys. Res. 102, 15115-15128 (1997).
34. Wyss, M. & Wiemer, S. Local recurrence time in seismically active areas may be the most important
estimate of seismic hazard. Eos 79, F644 (1998).
35. Wyss, M. Evaluation of proposed earthquake precursors (American Geophysical Union,
Washington, DC, 1991).
36. Wyss, M. Second round of evaluations of proposed earthquake precursors. Pure Appl. Geophys 149,
3-16 (1997).
37. Wyss, M. & Booth, D.C. The IASPEI procedure for the evaluation of earthquake precursors.
Geophys. J. Int. 131, 423-424 (1997).
38. Geller, R.J. Debate on evaluation of the VAN method: Editor's introduction. Geophys. Res. Lett. 23,
1291-1294 (1996).
39. Geller, R.J. in A Critical Review of Van (Lighthill, J.H. ed.) 155-238 (World Scientific, London,
1996).
Earthquake precursors and crustal 'transients'
PASCAL BERNARD
For the public, the main question that seismologists should ask themselves is, "Can earthquakes be
predicted?". Nature's earthquake prediction debate follows this simple line of inquiry, although presented
in a slightly more subtle form by Ian Main: "How accurately and reliably can we predict earthquakes, and
how far can we go in investigating the degree of predictability that might exist?" This is still, however, a
question formulated under social pressure. I argue that this question should be left to one side by scientists
to allow progress in a more general and comprehensive framework, by studying the whole set of crustal
instabilities — or 'transients' — and not only earthquake precursors.
First I shall outline the major observations relevant to this problem, and the two standard models for earthquake
occurrence and predictability. I shall then comment briefly on these models and show how a more general
approach could lead to a better understanding of earthquake predictability.
Relevant observations of crustal instabilities
O1: Continuous or transient aseismic fault slip is reported for several major faults that reach the Earth's surface1.
This slip might involve only the upper few kilometres of the fault or, for some fault segments, it might involve the
whole thickness of the brittle crust. The transient creep events occur at various time scales (hours, days or months).
O2: Silent and slow earthquakes observed at long periods show that significant transient, low-frequency slip
events can occur on faults on a timescale of minutes2. The reported seismic nucleation phases, lasting from
fractions of a second to seconds, seem to scale with the final rupture size, and sometimes with the dimension of the
pre-shock cluster, if such a cluster exists3.
O3: Fluid migration instabilities in the crust have been reported from studies of the mineralization of veins, near-surface measurements of groundwater geochemistry and pore-pressure measurements in deep boreholes4,5; non-hydrostatic pore pressure at depths of several kilometres is observed in many places.
O4: Seismicity is not a Poisson process: clusters of earthquakes can last from hours to years, and have reported
dimensions from hundreds of metres to hundreds of kilometres6; seismic quiescence on various spatial scales has
been reported to have occurred on a time scale of years7.
O5: Earthquake sizes have power-law distributions (possibly up to some finite magnitude threshold); a short estimation sketch follows this list.
O6: Size and roughness of fault segments follow power-law distributions; borehole logs of rock parameters (such
as density and velocity) also reveal power-law distributions8.
Two standard models
M1: Processes reported in O1 to O4, and their subsequent effects (such as ground deformation and
electromagnetic effects) can sometimes be recognized (retrospectively) as being precursors to large earthquakes3,9.
This is the basis for the preparation-zone paradigm in seismogenesis.
M2: Observations O5 and O6 provide the basis for self-organized critical models for the crust (SOC), or similar
models leading to a chaotic system with a large degree of freedom, in which earthquakes are inherently
unpredictable in size, space and time (such as cascade or avalanche processes)10,12.
Many authors have convincingly shown that proponents of M1 have not been very successful — if at all — in
providing statistical evidence for such correlations between anomalies and earthquakes, nor for stating what would
distinguish a 'precursor-type' from a 'non-precursor-type' anomaly12. Furthermore, it is difficult to explain how the
size of the preparation zone, which is expected to be relatively small, can scale with the final size of a large
earthquake.
On model M2, my opinion is that proponents of the view that seismicity is nearly chaotic are not very convincing either,
because their physical model of the crust is a crude, oversimplified one, from which the important mechanical
processes reported in O1 to O4 are absent.
A generalized SOC model for the crust
To resolve this, one should consider SOC models applied to the whole set of instabilities in the crust (fluid,
aseismic and seismic), not only to the seismic ones. In this more global framework, it would be surprising if the
characteristic parameters of the slow instabilities that span a large range of scales (duration, dimension and
amplitude) did not obey a power-law distribution, just as earthquakes do. Indeed, they all result from nonlinear
processes developing on the same fractal structures: the system of faults and the rock matrix (O5 and O6).
Although we might have to wait for quite a long time before testing this hypothesis with enough observations, as
deep aseismic slip or fluid transients are usually difficult if not impossible to detect from the surface, such a model
does seem quite plausible.
Under this working hypothesis it can be suggested that each type of transient process might trigger not only itself
in cascades, but might sometimes also be coupled to another: fluid instabilities triggering or being triggered by
fault creep, earthquakes triggering or being triggered by fluid instabilities or transient fault creep triggering or
being triggered by earthquakes.
Numerous observations support the existence of these coupled processes, mostly in the shallow crust, where
aseismic processes are dominant13,15. Indirect evidence also exists deeper in the brittle crust, as some foreshock
sequences seem to be triggered by aseismic slip3. The brittle-ductile transition zone might be another favourable
location in which significant transient aseismic processes and seismic instabilities can coexist and be coupled on
the fault system, because the fault zones there might exhibit unstable as well as stable frictional behaviour;
interestingly enough, it is also the common nucleation point for large earthquakes.
It can thus be proposed that models M1 and M2 can be merged into a more general framework of crustal
instabilities, still within a SOC model, sometimes displaying coupled processes that lead, in favourable cases, to
the observation of precursors to large earthquakes.
In such a model, the slow instability leading up to the earthquake is expected to remain unpredictable. However, if
one were able to detect and monitor the progression of the slow instability, and to develop a physical model of the
coupling process between the fluid or aseismic transient and the seismic nucleation, one might be able to predict
some characteristics of the impending earthquake.
The remaining problem is the scaling of the precursors to the earthquake size, which could be tackled by
considering that some of the large slow transients (size L1) might lead to seismic ruptures large enough to
break a whole asperity (size L2 > L1), thus allowing dynamic propagation at least up to the next large barrier
on the fault (distance L3 >> L2). The possible existence of probabilistic scaling laws between L1 and L2, and
between L2 and L3, might be the condition for the existence of reliable precursors.
What should we do?
Clearly, geophysicists should focus on deciphering and modelling the physics of the frictional and fluid migration
transient processes in the crust16,17. From the observational point of view, differential tomography with active
sources or multiplets, dense arrays of continuous GPS receivers and of borehole strain meters and tilt meters, and
deep borehole observations in fault zones (for tracking the role of fluids directly), might be the key to success.
Hence, to the question, "Is the reliable prediction of individual earthquakes a realistic scientific goal?", my answer
would be in the negative, as this should not yet be a scientific target. However, to the more relevant question, "Is
the understanding of crustal transients an important and realistic scientific goal?", I would answer in the
affirmative, and add that significant progress in this field is required before questions about earthquake
predictability can be answered realistically.
Pascal Bernard
Institut de Physique du Globe de Paris, France
References
1. Gladwin et al. Measurements of the strain field associated with episodic creep events on the San Andreas
fault at San Juan Bautista, California. J. Geophys. Res. 99, 4559-4565 (1994).
2. McGuire et al. Time-domain observations of a slow precursor to the 1994 Romanche transform earthquake.
Science 274, 82-85 (1996).
3. Dodge et al. Detailed observations of California foreshock sequences: implications for the earthquake
initiation process. J. Geophys. Res. 101, 22371-22392 (1996).
4. Hickman et al. Introduction to special section: mechanical involvement of fluids in faulting. J. Geophys.
Res. 100, 12831-12840 (1995).
5. Roeloffs et al. Hydrological effects on water level changes associated with episodic fault creep near
Parkfield, California. J. Geophys. Res. 94, 12387-12402 (1989).
6. Kossobokov and Carlson, Active zone versus activity: A study of different seismicity patterns in the
context of the prediction algorithm M8. J. Geophys. Res. 100, 6431-6441 (1995).
7. Wyss and Martirosyan, Seismic quiescence before the M7, 1988, Spitak earthquake, Armenia. Geophys. J.
Int. 134, 329-340 (1998).
8. Leary, Rock as a critical-point system and the inherent implausibility of reliable earthquake prediction.
Geophys. J. Int. 131, 451-466 (1997).
9. Fraser-Smith et al. Low-frequency magnetic field measurements near the epicenter of the Ms 7.1 Loma
Prieta earthquake. Geophys. Res. Lett. 17, 1465-1468 (1990).
10. Bak and Tang, Earthquakes as a self-organized critical phenomenon. J. Geophys. Res. 94, 15635-15637
(1989).
11. Allègre, C. et al. Scaling organization of fracture tectonics (SOFT) and earthquake mechanism. Phys. Earth
Planet. Int. 92, 215-233 (1995).
12. Geller, R.J. et al. Earthquakes cannot be predicted. Science 275, 1616-1617 (1997).
13. Gwyther et al., Anomalous shear strain at Parkfield during 1993-1994. Geophys. Res. Lett. 23, 2425-2428
(1996).
14. Johnson and McEvilly, Parkfield seismicity: fluid-driven? J. Geophys. Res. 100, 12937-12950 (1995).
15. Leary and Malin, Ground deformation events preceding the Homestead Valley earthquakes. Bull. Seismol.
Soc. Am. 74, 1799-1817 (1984).
16. Scholz, Earthquakes and friction laws. Nature 391, 37-41 (1998).
17. Sibson, Implications of fault-valve behavior for rupture nucleation and recurrence. Tectonophysics 211,
283-293 (1992).
How well can we predict earthquakes?
ANDREW MICHAEL
How well can we predict earthquakes? As suggested in Ian Main's introduction to this forum, we can
easily predict the behaviour of populations of earthquakes and we clearly cannot completely predict
the behaviour of individual earthquakes. But where is the boundary between the easy and the
impossible? In search of this boundary let us take a tour through Ian Main's four levels of earthquake
prediction.
Level 1, time-independent hazard estimation, clearly shows that we can predict the behaviour of earthquake
populations. Here we seek spatially varying estimates of average earthquake rates. Such calculations are
common and the results are widely used. To argue otherwise you must believe in equal earthquake hazards
for both California and Britain.
Time-dependent earthquake hazard estimation, level 2 in Ian Main's scheme, can be divided into two parts.
Temporal and spatial earthquake clustering, which I shall denote as level 2a, can lead to some definite
improvements over the time-independent estimates. Aftershocks are a major part of any earthquake
catalogue and the largest ones are capable of doing additional damage. Probabilistic estimates of aftershock
rates can be used to aid emergency response and recovery operations after damaging earthquakes1,2.
Although predicting aftershocks is an admirable goal, by definition it does not include predicting the largest
and most damaging earthquakes.
Recognizing foreshocks would allow us to predict these more important events. But no one has been able to
identify which earthquakes are foreshocks. This has limited us to statistical analyses in which we
probabilistically estimate the odds that an earthquake is a foreshock3 or treat each earthquake as a main
shock and allow for the possibility that one of its aftershocks might be larger1,2. In both cases, the
probabilities that any earthquake will be followed by a larger event are only a few per cent over the first
several days. There might also be significant uncertainties in these probabilities4. Understanding earthquake
clustering in terms of stress transfer and rate and state friction laws5-7 might allow us to place these statistical
models on a firmer physical footing, but this will not necessarily reduce these uncertainties.
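The probabilities quoted in the two preceding paragraphs come from clustering-rate models of the Reasenberg and Jones type (refs 1-4), in which the rate of events of magnitude M or larger at time t after a shock of magnitude Mm is rate(t, M) = 10^(a + b(Mm - M)) (t + c)^(-p). The sketch below integrates this rate to obtain the probability of one or more qualifying events in a given window; the parameter values are approximately the published generic California values, quoted from memory and used here only to show the mechanics.

```python
import math

# Modified-Omori / Reasenberg-Jones style rate of aftershocks with magnitude
# >= M at time t (days) after a mainshock of magnitude Mm:
#     rate(t, M) = 10**(A + B*(Mm - M)) * (t + C)**(-P)
# The parameter values below are approximately the generic California values
# (quoted from memory; illustrative only).
A, B, C, P = -1.67, 0.91, 0.05, 1.08

def expected_count(mainshock_mag, target_mag, t_start, t_end):
    """Expected number of events with magnitude >= target_mag between
    t_start and t_end days after the mainshock."""
    amplitude = 10.0 ** (A + B * (mainshock_mag - target_mag))
    # integral of (t + C)**(-P) dt from t_start to t_end, valid for P != 1
    integral = ((t_start + C) ** (1.0 - P) - (t_end + C) ** (1.0 - P)) / (P - 1.0)
    return amplitude * integral

def prob_one_or_more(mainshock_mag, target_mag, t_start, t_end):
    """Poissonian probability of at least one qualifying event in the window."""
    return 1.0 - math.exp(-expected_count(mainshock_mag, target_mag, t_start, t_end))

# Probability of an M >= 5 aftershock within a week of an M 6 mainshock,
# and of an event of equal or larger magnitude (the 'foreshock' case).
print(f"P(M>=5 within 7 d of M6) = {prob_one_or_more(6.0, 5.0, 0.0, 7.0):.2f}")
print(f"P(M>=6 within 7 d of M6) = {prob_one_or_more(6.0, 6.0, 0.0, 7.0):.2f}")
```

With these illustrative values the chance that a shock is followed within a week by an event of equal or larger magnitude comes out at roughly the ten per cent level, of the same order as, though not identical to, the few-per-cent figures discussed above; differences of this size reflect exactly the parameter and model uncertainties noted in ref. 4.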
Earthquake clustering is now a routine time-dependent hazard estimation tool in California. Joint foreshock
and aftershock probabilities are automatically released by the United States Geological Survey and the State
of California after earthquakes over magnitude 5. But does level 2a let us predict the behaviour of individual
earthquakes or merely the behaviour of a population? Predictions based on aftershocks can be fulfilled by a
variety of possible events, so they predict the behaviour of a population of earthquakes. In contrast, statistical
models of foreshocks target a specific main shock3,4. But actually writing a definition for a single earthquake
is quite difficult4 and so at best these are predictions for one of a small population. Also, given the long time
between main shocks, it is difficult to test these predictions of individual events or small populations. The
other choice is to do a test over a broad area but then we are really testing the behaviour of the population.
The second part of level 2 continues with the prediction of specific events by using the concept of an
earthquake cycle based on the elastic rebound theory8. The use of this cycle, which I shall refer to as level
2b, led from plate tectonics through seismic gaps and on to time-dependent hazard analysis based on a
probabilistic model of the time between earthquakes on a fault segment. To achieve level 1 we need only
know the average rate of earthquakes in a region. To achieve level 2b we must assign those rates to specific
fault segments, determine the date of the last event on each fault segment, and choose an earthquake
recurrence model. Determining the history of earthquakes on a fault segment is especially difficult in areas
such as California, where the historic record is short compared with the average time between major
earthquakes. It is also difficult in areas in which the historic record is longer. Although we might know when
past earthquakes occurred, we might not know on which fault they occurred. Palaeoseismology, with fault
trenches and tree ring studies, attempts to address these questions, but its success varies depending on local
conditions.
A few consensus reports have been issued that include time-dependent hazard estimates for California9,10; an
update to the estimates for Northern California is currently under way. Although these analyses attempt to
predict individual model events, there is so much uncertainty in the individual predictions that the results are
presented as a probabilistic sum over many models. A further problem with level 2b is that these predictions
might be impossible to test during our lifetimes. Thus, our faith in these predictions relies on our ability to
test the components that went into them and our faith in the 'experts' who must make somewhat arbitrary
choices when assembling these components. Although the quality of these predictions is debatable, their
impact is clearer. Widespread release of earthquake hazard estimates in the San Francisco Bay area has led
businesses and governments to spend hundreds of millions on earthquake preparedness11.
Level 3, the use of precursors, could lead to the prediction of either individual events or the behaviour of the
population depending on how large an area the precursors cover. Given that years of effort have led to no
widely accepted precursors, perhaps there are no valid earthquake precursors. Or have our efforts been too
weak to find them? Although Ian Main asserts that the effort to find precursors has been enormous, it has
used only a few per cent of the US earthquake research budget. This limited effort has allowed a wide variety
of dense instrumentation to be installed in very few areas, and these areas have not yet experienced a large
event12,13. Although the level of effort must be considered against other seismological and societal goals, it is
impossible to rule out the existence of precursors on the basis of a lack of observations.
Another option is to show that there cannot be any valid earthquake precursors because the system is simply
too chaotic. This would also rule out level 4: the deterministic prediction of individual events. For instance,
if when an earthquake begins there is no way of suggesting how large it will become, prediction will be very
difficult. Laboratory faults display a slow nucleation process and some recent work suggests a slow14,
magnitude-proportional15 nucleation process for real faults, but this remains controversial16,17. Other fruitful
topics for further research include understanding the frictional behaviour of faults and why they are so much
weaker than simple laboratory models18,19. The predictability of the weakening mechanism might affect our
view of how predictable the entire system is. For instance, opening-mode vibrations20 might be more
predictable than the collapse of high-pore-fluid compartments21,22. Until we understand better the basic
processes of real faults it is too early to say that we will not improve on our current predictive capability.
And our knowledge might improve with new observations such as those made in deep drill holes24.
In conclusion, scientists are now making societally useful predictions based on both the behaviour of the
population of earthquakes and of individual events, although these predictions are best posed in terms of at
least small populations. Progress in this field might be difficult but we should heed Sir Peter Medawar's
advice25: "No kind of prediction is more obviously mistaken or more dramatically falsified than that which
declares that something which is possible in principle (that is, which does not flout some established
scientific law) will never or can never happen."
Andrew Michael
United States Geological Survey, Menlo Park, California, USA
References
1. Reasenberg, P.A. & Jones, L.M. California aftershock hazard forecasts. Science 247, 345-346 (1990)
2. Reasenberg, P.A. & Jones, L.M. Earthquake hazard after a mainshock in California. Science 243,
1173-1176 (1989)
3. Agnew, D.C. & Jones, L.M. Prediction probabilities from foreshocks. J. Geophys. Res. 96, 11959-11971 (1991)
4. Michael, A.J. & Jones, L.M. Seismicity alert probabilities at Parkfield, California, revisited. Bull.
Seism. Soc. Am. 87, 117-130 (1998).
5. Dieterich, J. A constitutive law for rate of earthquake production and its application to earthquake
clustering. J. Geophys. Res. 99, 2601-2618 (1994).
6. King, G.C.P., Stein, R.S. & Lin, J. Static stress changes and the triggering of earthquakes: The 1992
Landers, California, earthquake sequence. Bull. Seism. Soc. Am. 84, 935-953 (1994)
7. Stein, R.S., King, G.C.P. & Lin, J. Stress triggering of the 1994 M=6.7 Northridge, California,
earthquake by its predecessors. Science 265, 1432-1435 (1994)
8. Reid, H.F. The California earthquake of April 18, 1906; the mechanics of the earthquake, Vol. 2, 192
(Carnegie Inst. Wash. Pub. 87, 1910)
9. Working Group on California Earthquake Probabilities. Probabilities of large earthquakes occurring
in California on the San Andreas fault. (U.S. Geol. Survey Open-File Rep. 88-398, 1988)
10. Working Group on California Earthquake Probabilities. Probabilities of large earthquakes in the San
Francisco Bay region, California. (U.S. Geol. Survey Circular 1053, 1990)
11. Bakun, W.H. Pay a little now, or a lot later. (U.S. Geol. Survey Fact Sheet 169-95, 1995)
12. Bakun, W.H. & Lindh, A.G. The Parkfield, California, earthquake prediction experiment. Science
229, 619-624 (1985)
13. Roeloffs, E.A. & Langbein, J. The earthquake prediction experiment at Parkfield, California. Rev.
Geophys. 32, 315-336 (1994)
14. Iio, Y. Slow initial phase of the P-wave velocity pulse generated by microearthquakes. Geophys. Res.
Lett. 19, 477-480 (1992)
15. Ellsworth, W.L. & Beroza, G.C. Seismic evidence for an earthquake nucleation phase. Science 268,
851-855 (1995)
16. Mori, J.J. & Kanamori, H. Initial rupture of earthquakes in the 1995 Ridgecrest, California sequence.
Geophys. Res. Lett. 23, 2437-2440 (1996)
17. Ellsworth, W.L. & Beroza, G.C. Observation of the seismic nucleation phase in the Ridgecrest,
California, earthquake sequence. Geophys. Res. Lett. 25, 401-404 (1998)
18. Brune, J.N., Henyey, T.L. & Roy, R.F. Heat flow, stress, and rate of slip along the San Andreas fault,
California. J. Geophys. Res. 74, 3821-3827 (1969)
19. Lachenbruch, A.H. & Sass, J.H. Heat flow and energetics of the San Andreas fault zone: Magnitude
of deviatoric stresses in the Earth's crust and uppermost mantle. J. Geophys. Res. 85, 6185-6222
(1980)
20. Anooshehpoor, A. & Brune, J.N. Frictional heat generation and seismic radiation in a foam rubber
model of earthquakes: Faulting, friction, and earthquake mechanics; Part 1. Pure Appl. Geophys. 142,
735-747 (1994)
21. Rice, J.R. in Fault Mechanics and Transport Properties in Rocks (eds. Evans, B. & Wong, T.-F.)
475-503 (Academic, 1992)
22. Byerlee, J. Friction, overpressure and fault normal compression. Geophys. Res. Lett. 17, 2109-2112
(1990)
23. Byerlee, J. Model for episodic flow of high pressure water in fault zones before earthquakes. Geology
21, 303-306 (1993)
24. Hickman, S., Zoback, M., Younker, Y. & Ellsworth, W. Deep scientific drilling in the San Andreas
fault zone. Eos 75, 137-142 (1994)
25. Medawar, P. B. Pluto's Republic (Oxford Univ. Press, London, 1982)
Earthquake prediction: feasible and useful?
CHRISTOPHER SCHOLZ
There has been a recent recrudescence1,2 of the long debate on the feasibility of short-term earthquake
prediction, namely, the prediction, with a lead time of days to weeks, of the time, location and magnitude of a
future event. This type of earthquake prediction is inherently difficult to research and has a chequered past,
with many intriguing but fragmentary observations of possibly precursory phenomena but no scientifically
based and verified successes3.
The current debate has taken the matter further, with the assertion, based on two arguments, that such prediction is
intrinsically impossible. The first argument is that the Earth is in a state of self-organized criticality (SOC),
everywhere near the rupture point, so that earthquakes of any size can occur randomly anywhere at any time.
SOC refers to a global state, such as that of the whole Earth or a large portion of it containing many earthquake
generating faults with uncorrelated states. However, to paraphrase what Tip O'Neill, the late Speaker of the US House
of Representatives, said about politics, earthquake prediction is always local.
This point is illustrated in Fig. 1, which shows the canonical sandpile model of SOC. The pile is built by a rain of
sand and, when its sides reach the critical angle of repose (Fig. 1A), landslides of all sizes begin to occur. If we focus
now on only one sector of the sandpile, there will occasionally occur a system-sized landslide (Fig. 1B), which
brings the local slope well below the angle of repose. No landslides can then occur in this locality until the slope is
built back up to the angle of repose. It is the problem of long-term earthquake prediction to estimate when this will occur.
Figure 1 The sandpile model of self-organized criticality.
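As an illustration of the sandpile picture described above, the following is a minimal numerical sketch of the canonical Bak-Tang-Wiesenfeld sandpile automaton; the lattice size, toppling threshold and run length are arbitrary choices made for this example, not values taken from the text.

```python
# Minimal sketch of the Bak-Tang-Wiesenfeld sandpile automaton (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N = 50                          # lattice size (arbitrary)
ZC = 4                          # a site topples when it holds ZC or more grains
z = np.zeros((N, N), dtype=int)

def drive_and_relax(z):
    """Drop one grain at a random site, topple until stable, return avalanche size."""
    i, j = rng.integers(N, size=2)
    z[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(z >= ZC)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            z[i, j] -= 4
            size += 1
            # pass one grain to each neighbour; grains fall off the open edges
            if i > 0:     z[i - 1, j] += 1
            if i < N - 1: z[i + 1, j] += 1
            if j > 0:     z[i, j - 1] += 1
            if j < N - 1: z[i, j + 1] += 1

sizes = [drive_and_relax(z) for _ in range(20000)]
# After a transient the avalanche sizes have no typical scale (a power-law
# distribution), yet a system-sized avalanche leaves its neighbourhood well
# below the toppling threshold, suppressing further large events there until
# the slow drive reloads that part of the pile.
```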
In earthquake prediction research, this is known as the 'seismic gap' hypothesis. A test of this hypothesis4, which had
negative results, was flawed because it used earthquakes that were smaller than system-sized and took only a bite out
of the side (Fig. 1C), which clearly does not preclude subsequent local earthquakes.
The second argument is based on the conjecture that an earthquake cannot 'know' how big it will become because
that depends entirely on initial conditions (local state of stress and strength of the fault). This will prevent the
earthquake magnitude from being predicted even if one could sense its nucleation (which friction theory predicts
might be detectable days or weeks before the earthquake instability5).
Could this conjecture be false? There are observations that indicate that the size of foreshock zones, and a precursory
slip phase of earthquakes, which might map the nucleation region, scale with the size of the subsequent mainshock6,7.
Thus the detection of the nucleation zone size might allow the prediction of the size of the subsequent earthquake.
If, however, this conjecture is true, can it preclude the prediction of the earthquake's size? No, but the problem would
then change; it would require determining the initial conditions, namely the size of the region around the nucleation
zone that is loaded near the critical state. Other methods, such as those espoused in the 'dilatancy-diffusion' theory of
earthquake prediction8, might make that possible.
Therefore, although we do not have a method for making short-term predictions, I do not believe it is justified to
assert that it is impossible. What, then, can we say about other types of earthquake prediction: their feasibility and
utility?
Long-term prediction, which is the estimate, on a decadal time scale, of the probable failure time of segments of
active faults, is now an established part of seismic hazard analysis9. On the basis of that methodology, several studies
forecast the 1989 Loma Prieta, California, earthquake in the six years before that event10. The utility of this kind of
prediction is that with a decadal lead time, it can guide engineering and emergency planning measures to mitigate the
impact of the earthquake. An intermediate-term prediction is an update of the long-term prediction brought about by
an increase in seismicity (Fig. 1D) or some other indicator that the fault is near its failure point.
In another type of prediction, an Immediate Alert, the detection of seismic waves above a certain threshold triggers an electronic alert,
which, with a lead time of several seconds, can be used for such things as shutting down nuclear reactors, gas and
electricity grids, and the like. A system like this is in use in Japan to stop high-speed trains in the event of an
earthquake.
Finally, the finding that earthquakes often trigger other earthquakes on nearby faults leads to another prediction
model, which might be called a post-earthquake seismic hazard reassessment. In this methodology, shortly after a
large earthquake the resulting stress changes are calculated on all nearby faults and warnings issued about those
faults that have been brought closer to failure by the preceding earthquake11.
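The quantity conventionally computed in such a reassessment is the change in Coulomb failure stress resolved on each nearby 'receiver' fault,

$$\Delta \mathrm{CFS} = \Delta\tau + \mu' \, \Delta\sigma_n ,$$

where $\Delta\tau$ is the change in shear stress in the slip direction of the receiver fault, $\Delta\sigma_n$ the change in normal stress (positive for unclamping) and $\mu'$ an effective friction coefficient; faults with $\Delta \mathrm{CFS} > 0$ are flagged as having been brought closer to failure. The notation here follows common practice rather than any single one of the cited studies.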
What, then, should we do about short-term earthquake prediction? Should we declare it impossible and banish it
from our minds? I think not: there is much yet to be learned about earthquake physics, and rapid progress is being
made, particularly in the applications of the rate/state variable-friction laws to the problem12. Until now we have
been working in the dark, with the only observables being the earthquakes themselves. Dense permanent global
positioning system (GPS) networks are presently being installed in California, Japan and elsewhere that, together
with satellite radar interferometry, will allow us to view for the first time the evolution of strain fields in space and
time. Who knows what might turn up? Then there are the curious 'precursory' phenomena, which continue to be
serendipitously observed from time to time. What could their mechanism be?
Christopher H. Scholz
Lamont-Doherty Earth Observatory, Columbia University, Palisades, New York, USA
References
1. Main, I.G. Long odds on prediction. Nature 385, 19-20 (1997).
2. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science 275, 1616-1617 (1997).
3. Scholz, C.H. Whatever happened to earthquake prediction. Geotimes, pp. 16-19, March (1997).
4. Kagan, Y.Y. & Jackson, D.D. Seismic gap hypothesis: ten years after. J. Geophys. Res. 96, 21419-21431
(1991).
5. Dieterich, J.H. & Kilgore, B. Implications of fault constitutive properties for earthquake prediction. Proc.
Natl Acad. Sci. USA 93, 3787-3794 (1996).
6. Dodge, D.A., Beroza, G.C. & Ellsworth, W.L. Detailed observation of California foreshock sequences:
implications for the earthquake initiation process. J. Geophys. Res. 101, 22371-22392 (1996).
7. Ellsworth, W.L. & Beroza, G.C. Seismic evidence for an earthquake nucleation phase. Science 268, 851-855
(1995).
8. Scholz, C.H., Sykes, L.R. & Aggarwal, Y.P. Earthquake prediction, a physical basis. Science 181, 803-809
(1973).
9. Working group on California Earthquake Probabilities. Probabilities of Large Earthquakes Occurring in
California on the San Andreas Fault (U.S. Geol. Surv. Open-file Rep. 88-398, 1988).
10. Harris, R.A. Forecasts of the 1989 Loma Prieta, California, earthquake. Bull. Seismol. Soc. Am. 88, 898-916
(1998).
11. Toda, S., Stein, R.S., Reasenberg, P.A., Dieterich, J.H. & Yoshida, A. Stress transfer by the 1995 Mw 6.9
Kobe, Japan, shock: effect on aftershocks and future earthquake probabilities. J. Geophys. Res. 103, 24543-24565 (1998).
12. Scholz, C.H. Earthquakes and friction laws. Nature 391, 37-42 (1998).
Earthquake prediction is difficult but not impossible
LEON KNOPOFF
For a prediction to be successful, the probability of occurrence in a time interval and a space domain
must be specified in advance, as must the lower magnitude. There are two important additional
constraints: a utilitarian constraint demands that the lower magnitude bound be appropriate to
societal needs; in other words, we are especially interested in strong destructive earthquakes.
The time intervals for societal needs in the developing countries are of the order of days, but in the
developed countries the windows can be broader, even of the order of years, because the response can be one
of marshalling resources to improve construction, for example. A second constraint is that we must guard
against self-indulgence: if the time or space windows are made too broad, or the magnitude threshold is
made too low, then we can increase the probability of success up to 100% without any serious effort on our
part (a Poisson random process would do equally well). To avoid this problem we must specify how our probability
estimate for the window compares with the poissonian estimate.
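One simple way to carry out this comparison is to state the Poisson probability of at least one qualifying event in the chosen window and quote the prediction's probability as a multiple of it. The sketch below does exactly that; the rate, window and claimed probability are invented for the example, not taken from the text.

```python
# Compare a prediction's claimed probability with the Poisson (random-chance)
# expectation for the same space-time-magnitude window. Numbers are invented.
import math

def poisson_prob_at_least_one(rate_per_year, window_years):
    """Probability that a Poisson process of the given rate yields >= 1 event."""
    return 1.0 - math.exp(-rate_per_year * window_years)

background_rate = 0.05       # assumed M >= 7 rate in the target region (events/yr)
window = 5.0                 # prediction window in years (assumed)
claimed_probability = 0.40   # probability attached to the prediction (assumed)

baseline = poisson_prob_at_least_one(background_rate, window)
gain = claimed_probability / baseline
print(f"Poisson baseline: {baseline:.2f}")           # about 0.22 for these numbers
print(f"Probability gain over chance: {gain:.1f}x")  # about 1.8x
```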
Despite our assertions about the desirability of probabilistic estimates, the problem is not statistical. There have been too few large enough events in any sufficiently small area in the past century to define the probabilities of the largest events sufficiently accurately.
Cyclic inferences
There are two ways in which to proceed. One is to study the time intervals between earthquakes in the region in this magnitude range. If earthquakes are periodic, the problem is solved. Current estimates of interval times, obtained from Global Positioning System (GPS) measurements of slip rates coupled with geological estimates of slip in great earthquakes, give only average values of the interval times. However, from
palaeoseismicity, we find that the interval times for the strongest earthquakes at one site on the San Andreas
fault have large variability1. The statistical distribution of these interval times is poorly identified even in
this, the best of cases. And a long duration since the last occurrence is no guarantee that the next event is
imminent; the next event could be farther in the future2, as Ian Main has also noted. The conclusion depends
on the actual interval time distribution, which is unknown.
The failure of the Parkfield prediction is a case in point: extrapolation from a brief set of interval times was
insufficient to provide adequate information about the distribution of interval times. The variability of
interval times is due to the influence of earthquakes on nearby faults; the earthquakes on a given fault cannot
be taken as occurring as though they were independent of the activity on the other faults in the
neighbourhood. Without information about the distribution of interval times, an earthquake prediction
programme based only on GPS and short runs of palaeoseismicity must fail; the average values of slips and
slip rates alone are not sufficient to solve the problem, but they comprise one of several pieces of information
important to the prediction problem. Indeed, it is only on some faults that we have information about the date
of the most recent sizable earthquake. What is lacking in this version of the programme is a theoretical effort
to understand the distribution of interval times in one subarea due to earthquakes on an inhomogeneous
network of inhomogeneous faults and subfaults, a modelling problem of considerable difficulty.
De novo prediction
The second and more attractive approach is to search for the immediate precursors of strong earthquakes.
Here there have been many culs-de-sac: searches for foreshocks, tilts, radon, electrical precursors and
variations in velocity ratios of P-waves to S-waves have either failed or are at best unproven. In general,
these efforts (a) failed to restrict the problem to the study of large earthquakes and (b) failed to evaluate
seriously the success in units of poissonian behaviour. In many cases the invalid assumption was made that
one could use the prediction of small earthquakes as a proxy for the prediction of large ones.
Part of the blame for the use of this assumption can be laid on a misinterpretation of the Gutenberg-Richter magnitude-frequency distribution. The illusion of the G-R distribution is that there are no characteristic scale
sizes except for the largest-magnitude events that a region can support. We now know that there are at least
three subscales in the Southern California distribution: the central trapped-mode core of the fracture in the
largest earthquakes has a dimension of the order of 100-200 m (ref. 3); the dimension of the zone of
aftershocks astride a large fracture is of the order of 1-3 km; and the thickness of the brittle seismogenic
layer is of the order of 15 km. (Space limitations do not allow me to discuss the cause of the apparent log-linearity of the G-R distribution in the presence of characteristic length scales4.)
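For reference, the Gutenberg-Richter (G-R) relation under discussion is conventionally written as

$$\log_{10} N(\ge M) = a - b\,M ,$$

where $N(\ge M)$ is the number of earthquakes of magnitude $M$ or greater in a given region and time interval, $a$ sets the overall rate and $b$ (typically close to 1) the relative proportion of small to large events; the apparent absence of characteristic scales is read off from the near-constancy of $b$ over a wide magnitude range.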
Because of the wealth of scales, the 'prediction' of earthquakes at a smaller scale to understand larger ones
cannot be valid. The assumption that we can amplify our data set by a study of large earthquakes worldwide
is also not tenable, because of the large variability of the faulting environment for the largest earthquakes
from region to region.
Statistics of rare events
The small number of events means that again we need a physics-based theory of the precursory process to
amplify the meager data. In the area of physics, another blind alley was followed. The beguiling
attractiveness of the illusion of scale-independence of the G-R law suggested that the model of self-organized criticality (SOC), which also yielded scale-independent distributions, might be appropriate. (The
logic is evidently faulty: if mammals have four legs, and tables have four legs, it does not follow that tables
are mammals, or the reverse.) The model of SOC permits a hierarchical development of large events out of
the nonlinear interaction of smaller events, at rates in relation to their sizes, and culminating in the largest
event. However, there are several important arguments against the applicability of SOC to the earthquake
problem.
1. Faults and fault systems are inhomogeneous: we have already noted the presence of several scale
sizes.
2. Seismicity at almost all scales is absent from most faults, before any large earthquake on that fault;
the San Andreas Fault in Southern California is remarkably somnolent at all magnitudes on the
section that tore in the 1857 earthquake.
3. There is no evidence for long-range correlations of the stress field before large earthquakes.
I do not see that the salient properties of SOC that are requisites for its application are reproduced in the
earthquake data.
It is now time to develop a sound physics-based theory of the precursory process that takes us away from
simplistic models. Such a theory should study the organization of seismicity on the complex geometry of
faults and fault systems, and should bring to bear the properties of rocks under high deformational stress and
under realistic loading and unloading rates. It is impossible to anticipate catastrophic failure on a purely
elastic-loading/brittle-fracture model of rupture. As it has been for nearly 60 years5, the detection of non-elastic deformation under high stress before fracture is the most promising avenue for the detection and
identification of precursors. The nucleation of the largest earthquakes on inhomogeneous faults will take
place at sites of greatest compressional strength, which are of geometrical origin6. These localized sites are
those most likely to display precursory accelerated slip. The tasks of identifying these sites in advance and of
measuring the deformation at them are not easy, even for impending large earthquakes. The task of
identifying faults and measuring slip on them before remote small earthquakes, such as the recent Armenia,
Colombia, event, does not seem to be possible at present.
In my opinion, fluctuations in seismicity are not active agents that participate in a process of self-organization toward large events. Rather, they serve as qualitative stress gauges to indicate that regions of
the Earth's crust are in a state of high stress or otherwise. We have used fluctuations in the rates of
occurrence of intermediate-magnitude earthquakes to construct hindsight predictive techniques7 that are
successful at about the 80% level (with large error bars) and represent an improvement over poissonian
estimates of the order of 3:1 for a region the size of Southern California, with time constants of the order of
10 years, and with a magnitude threshold around 6.8. This is not much progress, but it is a step in the right
direction.
Challenges, not insolubles
The recent paper by Geller et al.8 is in error on two counts. First, it states that the model of SOC shows that
earthquakes are unpredictable. In fact, SOC 'predicts' stresses more readily than do chaotic systems. I have
indicated above that the model of SOC is inapplicable to earthquakes on several counts: the data fail to show
scale independence, the data fail to show long-range correlations in the stress field, and individual faults are
remarkably inactive before large earthquakes.
Second, the paper8 states that the problem is too difficult, and we should therefore give up trying. I believe
the opposite. The community has indeed tried the seemingly easy methods, and they have failed. For 25
years the leadership of our national programmes in prediction has been making the assumption that the
problem is simple and will therefore have a simple prescriptive solution.
We have been guilty of jumping on bandwagons without asking the basic questions, "What is an earthquake?
What determines its size, and why is it likely to occur where and when it does?" These are physics questions;
they are not likely to be solved by statistically unsubstantiable means. We have so far been unsuccessful at
prediction because laboratory and theoretical studies of the physics of deformation and fracture have been
largely unsupported. The problem is not simple; however, that does not mean it is insoluble. As I have
indicated, there are weak solutions at present for large space-time windows. The short-term problem is much
more difficult.
Leon Knopoff
Institute of Geophysics and Planetary Physics, University of California, Los Angeles, California, U.S.A.
References
1. Sieh, K., Stuiver, M. & Brillinger, D. A more precise chronology of earthquakes produced by the San
Andreas Fault in Southern California. J. Geophys. Res. 94, 603-623 (1989).
2. Sornette, D. & Knopoff, L. The paradox of the expected time until the next earthquake. Bull. Seismol.
Soc. Am. 87, 789-798 (1997).
3. Li, Y.G., Aki, K., Adams, D., Hasemi, A. & Lee, W.H.K. Seismic guided waves trapped in the fault
zone of the Landers, California, earthquake of 1992. J. Geophys. Res. 99, 11705-11722 (1994).
4. Knopoff, L. b-values for large and small Southern California earthquakes (to be submitted); The
distribution of declustered earthquakes in Southern California (to be submitted).
5. Griggs, D.T. Experimental flow of rocks under conditions favoring recrystallization. Bull. Geol. Soc.
Am. 51, 1001-1022 (1940).
6. Nielsen, S.B. & Knopoff, L. The equivalent strength of geometrical barriers to earthquakes. J.
Geophys. Res. 103, 9953-9965 (1998).
7. Knopoff, L., Levshina, T., Keilis-Borok, V.I. & Mattoni, C. Increased long-range intermediate-magnitude earthquake activity prior to strong earthquakes in California. J. Geophys. Res. 101, 5779-5796 (1996).
8. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science
275, 1616-1617 (1997).
Earthquake Prediction: What should we be debating?
ROBERT J. GELLER
The topic posed by the editors for this debate is whether the reliable prediction of individual earthquakes is
a realistic scientific goal. Translated into everyday language, this becomes: given the present state of the
art, does earthquake prediction research merit a significant investment of public funds? My initial
contribution to this debate stated the negative case. None of the other debaters appears to have made
strong arguments for the affirmative.
Mission: Impossible?
The arguments presented by some of the other debaters are variations on the following theme:
1. Prediction has not yet been shown to be impossible.
2. Some other things that were called impossible later turned out to be possible.
3. So, why shouldn't prediction also turn out to be possible?
However, convincing supporting data, not just the mere fact that impossibility has not yet been proven, should be
required before any proposed research is approved. This is particularly true for fields like prediction1, or cold
fusion2, where previous work has been notoriously unsuccessful. Note that we do not have to decide whether or
not prediction is inherently impossible. We just have to decide whether or not there are compelling grounds at
present for establishing large-scale ("throw money at the problem") programmes for prediction research. The
answer given current knowledge is clearly negative, but the question could, if necessary, be reopened at any time
if new proposals were backed by well documented and convincing results.
The Gambler's Fallacy
Simple mechanical systems used in games of chance provide cautionary examples for prediction researchers. If an
honest die is rolled, the probability of each number's landing face up is 1 in 6, but (due to sensitivity to small
variations in the initial conditions) it is impossible to reliably and accurately predict the outcome of individual
rolls.
Many gamblers nevertheless try in vain to look for patterns in the outcome of previous rolls of a die. Such
gamblers, as a group, lose their money, but for a short time a lucky few will win. By looking only at the winners,
while ignoring the much larger group of losers, it is easy to jump to the mistaken conclusion that the winners have
found a way to beat the odds.
The root cause of the gambler's fallacy is drawing conclusions from a retrospectively chosen and unrepresentative
sample of a much larger data set. This is also the fundamental problem bedevilling 'case studies' of alleged
earthquake precursors, but the fallacy here is a bit less obvious. This is because the probabilities for each roll of a
die are fixed, but the probability of earthquake occurrence is spatially variable, and varies strongly temporally
depending on previous seismicity.
A benchmark for prediction methods
The probability of earthquake occurrence is much larger than usual immediately after an earthquake occurs,
decaying with time as a power law3. This is the basis for the 'automatic alarm' prediction strategy4: issue an alarm
automatically after every earthquake above a certain size, on the chance that it might be a foreshock of a larger
earthquake.
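A minimal sketch of this bookkeeping, applied to a catalogue of (time, magnitude) pairs, is given below; the trigger and target magnitudes and the alarm window are illustrative assumptions, not values given in the text.

```python
# Sketch of the 'automatic alarm' strategy: open a fixed-length alarm window after
# every event at or above a trigger magnitude and score hits, misses and false
# alarms against later 'target' events. All thresholds here are assumptions.

def automatic_alarm(catalog, m_trigger=5.0, m_target=6.5, window_days=10.0):
    """catalog: list of (time_in_days, magnitude) tuples, sorted by time."""
    # For simplicity only events below the target size open alarms here; in
    # practice a target-sized event would also open an alarm for a larger one.
    alarms = [(t, t + window_days) for t, m in catalog if m_trigger <= m < m_target]
    targets = [t for t, m in catalog if m >= m_target]

    hits = sum(any(start < t <= end for start, end in alarms) for t in targets)
    false_alarms = sum(
        not any(start < t <= end for t in targets) for start, end in alarms
    )
    return hits, len(targets) - hits, false_alarms

# Toy catalogue: the magnitude-5.2 shock at day 100 'predicts' the 6.8 at day 103;
# the 5.1 at day 200 is a false alarm; the 6.9 at day 400 is missed.
toy = [(100.0, 5.2), (103.0, 6.8), (200.0, 5.1), (400.0, 6.9)]
print(automatic_alarm(toy))   # (1, 1, 1): one hit, one miss, one false alarm
```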
The exact success and alarm rates of the automatic alarm strategy will depend on the choice of windows, but there
will probably be hundreds of false alarms for every success, and on the order of half the significant earthquakes
will probably be missed. Thus, as emphasized by its proposer4, this strategy is not in general sufficiently reliable
and accurate to justify issuing public alarms. (Probabilistic prediction of aftershocks -- see discussion by Michael
-- may be an exception where public alarms are justifiable.) Note that the 'automatic alarm' strategy is a
scientifically valid method for making forecasts in Main's category 2 (time-dependent hazard), although its actual
utility in hazard mitigation is unclear.
The automatic alarm strategy can be implemented at essentially no cost, as all we need are the hypocentral data
from a seismic network. No measurements of electromagnetic signals, radio-isotope levels in well water, or any of
the other phenomena that are sometimes claimed to be earthquake precursors are required. Although the automatic
alarm strategy falls far short of the accuracy and reliability required for issuing public alarms, it achieves a
significant probability gain over predictions issued completely at random. The automatic alarm strategy should be
adopted as the benchmark for testing other proposed prediction methods. Unless and until a proposed method has
been shown to outperform the automatic alarm strategy (none has ever been shown to do so), it does not warrant
intensive investigation.
Needed: objective testing, not case studies
What is wrong with present prediction research? Wyss cites scientifically weak work and scientifically
unqualified publicity seekers as problems. I agree5, but I do not think these are the main problems. The principal
problem appears to be the use of the anecdotal 'case study' approach by prediction researchers. At an early stage
this approach can be valuable, but there are now literally thousands of published claims of precursors1. The value
of further additions to this list is questionable.
Wyss's contribution to this debate cites both "increased moment release" (more small earthquakes than usual) and
"seismic quiescence" (fewer small earthquakes than usual) as precursors. Thus it appears that any variability
whatsoever in background seismicity can be claimed, after the fact, to have been a precursor. To determine
whether these variations in seismicity levels are random fluctuations or real physical phenomena, objective testing
of unambiguously stated hypotheses is required6.
It is regrettable that the other contributors to the first two weeks of this debate have not sufficiently acknowledged
the importance of objective statistical testing in resolving the prediction debate. Researchers looking for
precursors could greatly benefit from the experience of pharmaceutical research, where new drugs are routinely
evaluated by randomized double-blind testing using placebos7.
Long-term forecasts: where do we stand?
If they were reliable and accurate, long-term forecasts could be useful in engineering and emergency planning
measures to mitigate the impact of earthquakes. Unfortunately, however, there is serious question about the
accuracy and reliability of proposed methods for long-term seismicity forecasts. For example, several long-term
forecasts have been issued on the basis of the 'seismic gap' hypothesis. However, when these forecasts were later
subjected to independent testing, they were shown not to have outperformed random chance8. There has been a
running controversy over the seismic-gap forecasts; for information on both sides see the works cited in the
bibliography of ref. 8.
Scholz claims there were successful long-term forecasts of the 1989 Loma Prieta, California, earthquake.
However, this claim is dubious, as the long-term forecasts were for an earthquake on a different fault, and with a
different focal mechanism, than the actual earthquake (see section 4.4 of ref. 1). Furthermore, even if the claim of
'success' were warranted, this appears to be a classic example of the gambler's fallacy of picking one possibly
atypical sample out of a much larger dataset.
Scholz does not cite ref. 8, but does cite an earlier work by the same authors in his discussion of the seismic-gap
hypothesis. Scholz contends that Kagan & Jackson incorrectly rejected the seismic gap hypothesis because their
study considered some earthquakes that were too small. Scholz's criticism is apparently based on ref. 9, but
Jackson & Kagan replied to this criticism in ref. 10. Unfortunately Scholz discussed only the criticism but not the
reply.
It appears that the real problem is that the original forecasts did not fully specify the 'rules of the game', thus
forcing anyone who evaluates these forecasts to choose some of the rules in retrospect, after the actual seismicity
is already known. (In fairness, it must be added that the forecasts in question are among the best of their kind, as
they were stated with sufficient precision to be objectively tested, albeit with some ambiguity.) The only way to
avoid such problems in the future is for forecasters and independent evaluators to thoroughly thrash out all of the
ground rules at the time a forecast is issued, before the actual seismicity is known.
The downside of long-term forecasts
Until we have well validated methods, we should be reluctant to recommend that government authorities take
strong action on the basis of long-term forecasts, although no great harm and some good is likely to result from
taking sensible precautionary measures on a moderate scale in regions for which long-term forecasts have been
issued.
There is, however, a risk that the authorities in regions for which long-term forecasts have not been issued may
become overly complacent. This is not merely a theoretical possibility. Several hypothetical future earthquakes in
and around the Tokyo area have been the subject of extensive discussion in Japan for the past 25 years (see
section 5 of ref. 1). Partly as a result of these forecasts, local governments in western Japan, including Kobe,
incorrectly assumed that their region was not at significant risk, and failed to take sufficient precautionary
measures against earthquakes. This was one of the causes of the unexpectedly large damage caused by the 1995
Kobe earthquake.
The bottom line
Rather than debating whether or not reliable and accurate earthquake prediction is possible, we should instead be
debating the extent to which earthquake occurrence is stochastic. Since it appears likely that earthquake
occurrence is at least partly stochastic (or effectively stochastic), efforts at achieving deterministic prediction
seem unwarranted.
We should instead be searching for reliable statistical methods for quantifying the probability of earthquake
occurrence as a function of space, time, earthquake size, and previous seismicity. The case study approach to
earthquake prediction research should be abandoned in favour of the objective testing of unambiguously
formulated hypotheses. In view of the lack of proven forecasting methods, scientists should exercise caution in
issuing public warnings regarding future seismic hazards. Finally, prediction proponents should refrain from using
the argument that prediction has not yet been proven to be impossible as justification for prediction research.
Robert J. Geller
Department of Earth and Planetary Physics,
Graduate School of Science,
Tokyo University,
Bunkyo, Tokyo 113-0033,
Japan.
[email protected]
References
1. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
2. Huizenga, J.R. Cold Fusion: The Scientific Fiasco of the Century (University of Rochester Press,
Rochester, NY, 1992).
3. Kagan, Y. & Knopoff, L. Statistical short-term earthquake prediction. Science 236, 1563-1567 (1987).
4. Kagan, Y. VAN earthquake predictions: an attempt at statistical evaluation. Geophys. Res. Lett. 23, 1315-1318 (1996).
5. Geller, R.J. Predictable publicity. Astron. Geophys. Quart. J. R. Astr. Soc. 38(1), 16-18 (1997).
6. Jackson, D.D. Hypothesis testing and earthquake prediction. Proc. Natl Acad. Sci. USA 93, 3772-3775
(1996).
7. Shapiro, A.K. & Shapiro, E. The powerful placebo (Johns Hopkins University Press, Baltimore, MD,
1997).
8. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: Five years after. J. Geophys. Res. 100, 3943-3959 (1995).
9. Nishenko, S.P. & Sykes, L.R. Comment on "Seismic Gap Hypothesis: Ten years after" by Y.Y. Kagan and
D.D. Jackson. J. Geophys. Res. 98, 9909-9916 (1993).
10. Jackson, D.D. & Kagan, Y.Y. Reply. J. Geophys. Res. 98, 9917-9920 (1993).
Without funding no progress
MAX WYSS
The contributions to the debate about earthquake prediction research in Nature so far clearly show
that we have hardly scratched the surface of the problem of how earthquake ruptures initiate and how
to predict them. This arises from the difficulty of the problem and the lack of a vigorous program to
study these questions. As Andrew Michael has said, funding for earthquake prediction research is a
small fraction of the seismology program in the US, and seismology is poorly funded compared to
disciplines like astronomy.
Great efforts over the past 100 years?!
The contributions of Bernard, Michael and Scholz to this debate show that we have only a rudimentary
understanding of the physics of earthquake ruptures, of transients in the Earth's crust and of the possibility of
predicting earthquakes. They also point out that numerous crustal parameters may contain relevant
information, but that no generally accepted, irrefutably hard evidence exists for any of these that would allow
reliable earthquake prediction.
In this debate Geller repeats the exaggeration "Over the past 100 years, and particularly since 1960, great
efforts, all unsuccessful, have been made to find such hypothetical precursors." Such strong wording was not
acceptable in his recent article in the Geophysical Journal International1, because articles in that journal are
reviewed. The facts are that the first blueprint for prediction research was not assembled until the mid-1960s, and that blueprint was not followed. No prediction research program existed before the 1970s, and after the short flurry of activity in the mid-1970s, funding in the US and Europe dried up. Those of us who work in the field of earthquake rupture or prediction know from first-hand experience that, when seeking research
funding, the expression "earthquake prediction" in a research proposal to the NSF or the USGS will
guarantee that it will not be funded.
There is no question in my mind that we will make no serious progress toward learning how to predict
earthquakes, unless we assure high quality control in prediction research and start to fund it at a scale
comparable to the funding of astrophysical research.
The definition of earthquake prediction
The definition of "earthquake prediction" as one leading to "a planned evacuation" by the moderator of this
debate is not likely to be accepted, because social scientists warn that evacuations may do more harm than
good, and because an accepted definition exists. A valid earthquake prediction is any statement that specifies
• Location ± uncertainty
• Size ± uncertainty
• Occurrence time ± uncertainty
• Probability of the prediction being fulfilled2.
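Written out schematically, and purely as an illustration (the field names and example values are mine, not from the text), such a statement amounts to a small, fully specified record:

```python
# Schematic encoding of a 'valid earthquake prediction' in the sense defined above:
# every element carries an explicit uncertainty, plus the probability of fulfilment.
# Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class EarthquakePrediction:
    latitude_deg: float
    longitude_deg: float
    location_uncertainty_km: float
    magnitude: float
    magnitude_uncertainty: float
    occurrence_time: str           # nominal time, e.g. an ISO date
    time_uncertainty_days: float
    probability: float             # probability that the prediction is fulfilled

example = EarthquakePrediction(
    latitude_deg=34.0, longitude_deg=-118.0, location_uncertainty_km=50.0,
    magnitude=7.0, magnitude_uncertainty=0.5,
    occurrence_time="2001-01-01", time_uncertainty_days=365.0,
    probability=0.3,
)
```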
Since there exist a number of different types of consumers (individuals, officials, government agencies,
insurance and other companies, police and fire fighting departments), predictions with vastly different
uncertainties are of interest. The consumer can judge from the uncertainties whether or not a given prediction is useful. Insurance companies and those who make decisions on reinforcing old buildings are more interested in long-term predictions with large uncertainties than in accurate short-term predictions.
The engineering solution is not enough
Everyone, except perhaps some real estate developers and builders of nuclear reactors and high dams, agrees
that we should build according to strict codes assuring earthquake resistance. However, the great majority of
people will live and work for the next 50 years in buildings that exist today and were built when lax
codes were in force. The sad fact is that in most parts of the world there is no money available to reinforce
these buildings. Hence, long- and intermediate-term predictions as a motivating force for precautions1, as
well as short-term prediction, if attainable, are bound to benefit people significantly, if they are based on
sound science and responsibly announced.
If the current knowledge of the earthquake initiation process is so poorly founded that experienced
researchers can maintain the profound differences of opinion present in this debate, we are in desperate need
of the major research effort that is not1 at present being made.
Max Wyss
Geophysical Institute, University of Alaska, Fairbanks, Alaska, USA.
References
1. Geller, R. J. Earthquake prediction: A critical review, Geophys. J. Int. 131, 425-450 (1997).
2. Allen, C.R. Responsibilities in earthquake prediction. Bull. Seism. Soc. Am. 66, 2069-2074 (1976).
PASCAL BERNARD
In Bob Geller's contribution, the logical construction of the sentence "empiricism should not necessarily be
dismissed out of hands" presents the observational part of geophysics as a simple accessory to a better
understanding of geophysical processes, which may on occasion be useless. I totally disagree with such an
opinion. Observation is an absolutely necessary part of the game of science, as is the elaboration of theories with
which it continually interacts, guiding it or being guided by it. Rather one could say that "observation is a
necessary, but not sufficient approach to the problem".
Geller also claims that "a few laboratory failure experiments might seem to" suggest that earthquakes are
predictable, but that they are "conducted on a limited scale and do not replicate the complex and heterogeneous
conditions of the problem in situ". My impression is that the more heterogeneous and complex the medium, the
more we should expect to have detectable processes in preparation for large-scale failure. This is supported by
the fact that the breakage of a very homogeneous material such as glass is said to provide much weaker
warnings than rock samples before failure.
Geller goes on to say that "the only way to prove that something is impossible is by developing a satisfactory
theory of the underlying phenomenon". If one replaces "something" by "earthquake prediction", we should
conclude that Bob agrees with all other contributors to Nature's debate, including myself, who expressed a very
similar view. Indeed, we all agree that there is as yet no satisfactory theory about the nucleation of earthquakes.
This part of the debate should thus be closed here.
I do not see why the research on natural phenomena like geophysical transients (see my contribution) should
necessarily be "integrated in a research programme for the reduction of seismic hazard", as requested by Geller.
It may of course be linked to such a program because of this "prediction" rationale, but it has its own self-consistent ways in terms of physics; observational and experimental tools to be developed; and hypotheses to be
tested. In addition, it may contribute to other important societal needs as in oil exploration (role of faults and
fluids), or engineering (failure of concrete, friction of tires, ...).
I, of course, totally agree with Geller's concluding remark that rigorous research methodology should be
followed. Intriguing crustal transients are observed: let us be curious, and try to understand them.
Pascal Bernard
Institut de Physique du Globe de Paris, France
E-MAIL CONTRIBUTIONS
One of this week's email contributors considers the consequences for earthquake prediction if earthquakes are indeed self-organized critical phenomena.
PER BAK
In order to understand the level at which we can expect to predict earthquakes, it is important to understand
the dynamic nature of the phenomenon. Is it periodic? Is it chaotic? Is it random in space and time? Simple
mathematical modelling and comparison with empirical observations indicate that we are dealing with a self-organized critical phenomenon1-4. Using the notation of Pascal Bernard, the supporting observations include O5, a power-law distribution of earthquake sizes, and O6, a fractal, power-law distribution of fault segments, mimicking the highly inhomogeneous world-wide distribution of faults and fault zones.
More interestingly, the earthquakes in SOC models are clustered in time and space, and therefore also
reproduce the observation O4. This may give the strongest support for the SOC hypothesis, since no
alternative models exhibiting this feature have been proposed. The distribution of waiting times between earthquakes of a given size decays as a power law, T^(-α). It is this feature that allows for prediction of earthquakes at levels 2 and 3,
beyond the level of chance, in Main's notation. Ito5 has analysed a model previously introduced by Bak and
Sneppen6 in a different context. He found that the exponent α for actual earthquakes in California was well
represented by a waiting time exponent α=1.4, which compares well with the value obtained from the model,
α=1.5. This implies that the longer you have waited since the last event of a given size, the longer you still
have to wait, as noted in Main's opening piece, but in sharp contrast to popular belief!
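A short calculation makes this concrete. Suppose, purely as an illustration, that the waiting time $T$ between events of a given size has a power-law density above some short cutoff $t_0$:

$$p(t) \propto t^{-\alpha}, \quad t \ge t_0, \; \alpha > 1
\quad\Longrightarrow\quad
S(t) = \Pr(T > t) \propto t^{1-\alpha},
\qquad
h(t) = \frac{p(t)}{S(t)} = \frac{\alpha - 1}{t} .$$

The hazard rate $h(t)$, the chance per unit time that the next event arrives given that it has not yet arrived, decays as $1/t$: with $\alpha \approx 1.4$-$1.5$ the next event becomes less, not more, likely per unit time as the wait lengthens, the opposite of what a periodic or quasi-periodic recurrence model would give.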
For the smallest time-scales, this represents foreshocks and aftershocks. For the longest time-scales this
implies that in regions where there have been no large earthquakes for thousands or millions of years, we can
expect to wait thousands or millions of years before we are going to see another one. We can 'predict' that it
is relatively safe to stay in a region with little recent historical activity, as everyone knows. There is no
characteristic timescale where the probability starts increasing, as would be the case if we were dealing with
a periodic phenomenon. The phenomenon is fractal in space and time, ranging from minutes and hours to
millions of years in time, and from meters to thousands of kilometers in space. This behaviour could hardly
be more different from Christopher Scholz's description that "SOC refers to a global state...containing many
earthquake generating faults with uncorrelated states" and that in the SOC state "earthquakes of any size can
occur randomly anywhere at any time".
Ironically, some real sandpiles7 exhibit the oscillatory phenomenon depicted by Scholz but this has nothing
to do with self-organized criticality! In fact, one of the independent arguments in favour of earthquakes as
SOC is the relatively small stress drop (3 MPa), independent of earthquake size, compared to the absolute
magnitude of the Earth's stress field at earthquake nucleation depths (300 MPa) (for review see ref. 8). Thus
the stress change is sufficiently small that this type of oscillatory behaviour (for sandpiles with large changes
in angle of repose) may be precluded.
Assuming that we are dealing with an SOC phenomenon, what can this tell us about the prospects of going
on from statistical prediction towards the level 5 of individual prediction? Unfortunately, the size of an
individual earthquake is contingent upon minor variations of the actual configuration of the crust of the
Earth8, as discussed in Main's introduction. Thus, any precursor state of a large event is essentially identical
to a precursor state of a small event. The earthquake does not "know how large it will become", as eloquently
stated by Scholz. Thus, if the crust of the Earth is in an SOC state, there is a bleak future for individual
earthquake prediction. On the other hand, the consequences of the spatio-temporal correlation function for
time-dependent hazard calculations have so far not been fully exploited!
Per Bak
Department of Physics
Niels Bohr Institute
Blegdamsvej 17
DK-2100 Copenhagen
References
1. Bak, P. How Nature Works. The Science of Self-organized Criticality (Copernicus, New York, and
Oxford University Press, Oxford, 1997).
2. Sornette, A. & Sornette, D. Self-organized criticality and earthquakes. Europhys. Lett. 9, 197 (1989).
3. Olami, Z., Feder, H.J. & Christensen, K. Self-organized criticality in a cellular automaton modeling
earthquakes, Phys. Rev. E 48, 3361-3372 (1993).
4. Bak, P. & Tang, C. Earthquakes as an SOC phenomenon, J. Geophys. Res. 94, 15635-15637 (1989).
5. Ito, K. Punctuated equilibrium model of evolution is also an SOC model of earthquakes, Phys. Rev. E
52, 3232-3233 (1995).
6. Bak, P. & Sneppen, K. Punctuated equilibrium and criticality in a simple model of evolution, Phys.
Rev. Lett. 71, 4083-4086 (1993).
7. Jaeger, H.M., Liu, C. & Nagel, S. Relaxation of the angle of repose. Phys. Rev. Lett. 62, 40-43 (1989).
8. Main, I., Statistical physics, seismogenesis and seismic hazard, Rev. Geophys. 34, 433-462 (1996).
9. Bak, P. & Paczuski, M. Complexity, contingency, and criticality. Proc. Natl Acad. Sci. USA 92, 6689-6696 (1995).
The status of earthquake prediction
DAVID D. JACKSON
What is it?
Earthquake prediction invites debate partly because it resists definition. To Ian Main's very helpful definitions I
would add that an earthquake forecast implies substantially elevated probability. For deterministic prediction, that
probability is so high that it justifies exceptional response (although not necessarily evacuation as Ian Main
suggests; evacuation is not generally envisaged as a response to earthquake warnings, and it would probably be
counter-productive even if future earthquakes could be predicted accurately). Thus prediction demands high
certainty.
Forecasting and predicting earthquakes must involve probabilities. We can predict thunder after lightning without
talking of probabilities because the sequence is so repeatable. But earthquakes are more complex: we need
probabilities both to express our degree of confidence and to test that our forecasting is skilful (better than an
extrapolation of past seismicity).
What we can do
We can estimate relative time-independent hazard well (Japan is more hazardous than Germany) but our precision
is limited (Is Japan more hazardous than New Zealand?). Hazard statements are quantitative, but even after 30
years none of the models has been prospectively tested (for agreement with later earthquakes). We can estimate
well the long-term seismic moment rate (a measure of displacement rate integrated over fault area) but to estimate
earthquake rates we need to know their size distribution. There are very different ideas about how to do this1,2 but
none has been tested scientifically.
What we cannot do
We cannot specify time-dependent hazard well at all: in fact, we have two antithetical paradigms. Clustering
models predict that earthquake probability is enhanced immediately after a large event. Aftershocks provide a
familiar example, but large main-shocks also cluster3. The seismic gap theory asserts that large, quasi-periodic
'characteristic earthquakes' deplete stress energy, preventing future earthquakes nearby until the stress is restored4.
How could these antithetical models coexist? It is easy: there are many examples of each behaviour in the
earthquake record. So far, the seismic gap model has failed every prospective test. The 'Parkfield earthquake'5 has
been overdue since 1993, and a 1989 forecast6 for 98 circum-Pacific zones predicted that nine characteristic
earthquakes should have happened by 1994; only two occurred.
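To give a rough sense of the mismatch (treating the forecast total as a Poisson expectation summed over the zones, an assumption made here only for illustration), the probability of observing two or fewer events when nine are expected is

$$\Pr(N \le 2 \mid \lambda = 9) = e^{-9}\Bigl(1 + 9 + \tfrac{9^2}{2}\Bigr) \approx 0.006 ,$$

so the shortfall is very unlikely to be a chance fluctuation if the forecast rates were correct.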
Our attempts at earthquake forecasting, as Ian Main defines it7, have failed. (Note that 'earthquake forecasting' is
often defined differently. Nishenko4 defined it to mean estimation of time-dependent earthquake probability,
possibly on a decade time scale, and not necessarily involving precursors.) Most studies of earthquake forecasting
assumed that precursors would be so obvious that estimates of background (unconditional) and anomalous
(conditional) probabilities were unnecessary. Hundreds of anomalous observations have been identified
retrospectively and nominated as likely precursors, but none has been shown to lead to skill in forecasting7. Given
the bleak record in earthquake forecasting, there is no prospect of deterministic earthquake prediction in the
foreseeable future.
What is the difficulty?
In principle, earthquakes might be predicted by one of two strategies: detecting precursors, or detailed modelling of
earthquake physics. For precursors, confidence would come from empirical observations; understanding
mechanisms would be desirable but not necessary. Earthquake physics involves modelling strain, stress and
strength, for example, in some detail.
The precursor strategy will not work because earthquakes are too complicated and too infrequent. Even if
precursors existed, a few observations would not lead to prediction, because their signature would vary with place
and time. This problem cannot be overcome simply by monitoring more phenomena such as electric, magnetic or
gravity fields, or geochemical concentrations. Each phenomenon has its own non-seismic natural variations.
Monitoring these phenomena without complete understanding is courting trouble. Monitoring them properly is a
huge effort with only a remote connection to earthquakes. Such studies would certainly unearth more examples of
anomalies that might be interpreted as precursors, but establishing a precursory connection would require
observations of many earthquakes in the same place.
Earthquake physics is an interesting and worthwhile study in its own right, but short-term earthquake prediction is
not a reasonable expectation. One idea is that high stresses throughout the impending rupture area might induce
recognizable inelastic processes, such as creep or stress weakening. Even if these phenomena occur they will not
lead to earthquake prediction, for several reasons. Earthquakes start small, becoming big ones by dynamic rupture.
The critically high stress needed to start rupture is not required to keep it going. The telltale signs, if they were to
exist, need affect only the nucleation point (several kilometres deep), not the eventual rupture area. Even very large
earthquakes cluster8, indicating that seismogenic areas are almost always ready. Earthquakes clearly respond to
stress changes from past earthquakes9, but the response is complex. For example, most aftershocks occur on planes
for which the shear stress should have been reduced by the main shock. Monitoring strain accumulation and
deducing the boundary conditions and mechanical properties of the crust will tell a lot about earthquakes and
perhaps allow us to predict some properties. To forecast better than purely statistical approaches would be in itself
a solid accomplishment, which must come long before deterministic prediction.
Part of our difficulty is a lack of rigour in Earth sciences. We examine past data for patterns (as we should) but we
pay very little attention to validating these patterns. Many of the patterns conflict: some contend that seismicity
increases before large earthquakes10, others that it generally decreases11. We generally explain exceptions
retrospectively rather than describe the patterns, rules and limitations precisely enough to test hypotheses.
What is possible?
Some argue that earthquakes possess a property known as self-organized criticality (SOC), so earthquakes cannot
be predicted because seismogenic regions are always in a critical state. But SOC would not preclude precursors.
For example, if lightning were governed by SOC, we could still predict thunder with a short warning time. Nothing
discussed above makes earthquake prediction either possible or impossible.
Others argue that SOC comes and goes and that outward signs of SOC (such as frequent moderate earthquakes)
provide the clue that a big earthquake is due. If SOC comes and goes, it is not clear how to recognize it. To be
useful, it must apply to big events only, and we would need many (rare) examples to learn how big they must be.
SOC would presumably appear gradually, so at any one time it might give at best a modest probability gain.
The important question is not whether earthquake prediction is possible but whether it is easy. If it is not easy, it is not a
realistic goal now, because we must learn earthquake behaviour from large earthquakes themselves, which visit too
infrequently to teach us.
What should be done?
Earthquake hazard estimation is the most effective way for Earth scientists to reduce earthquake losses. Many
outstanding scientific questions need answers: the most important is how to determine the magnitude distribution
for large earthquakes, which is needed to estimate their frequencies. Time-dependent hazard is worth pursuing, but
prospective tests are needed to identify the models that work. These tests should cover large areas of the globe, so
that we need not wait too long for earthquakes. For global tests we need global data, especially on earthquakes,
active faults and geodetic deformation.
Basic earthquake science is a sound investment for many reasons. Progress will lead to advancements in
understanding tectonics, Earth history, materials and complexity, to name just a few. Results will also benefit
hazard estimation. Wholesale measurements of phenomena such as electric fields with no clear relationship to
earthquakes will not help.
For real progress we need a methodical approach and a better strategy for testing hypotheses. We have good reason
to expect wonderful discoveries, but not deterministic prediction.
David D. Jackson
Southern California Earthquake Center,
University of California,
Los Angeles,
CA 90095-1567 USA
References
1. Wells, D.L. & Coppersmith, K.J. New empirical relationships among magnitude, rupture length, rupture
area, and surface displacement. Bull. Seism. Soc. Am. 84, 974-1002 (1994).
2. Kagan, Y.Y. Seismic moment-frequency relation for shallow earthquakes: regional comparison. J.
Geophys. Res. 102, 2835-2852 (1997).
3. Kagan, Y.Y. & Jackson, D.D. Long-term earthquake clustering. Geophys. J. Int. 104, 117-133 (1991).
4. Nishenko, S.P. Circum-Pacific seismic potential: 1989-1999. Pure Appl. Geophys. 135, 169-259 (1991).
5. Roeloffs, E. & Langbein, J. The earthquake prediction experiment at Parkfield, California. Rev. Geophys.
32, 315-336 (1994).
6. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: five years after. J. Geophys. Res. 100, 3943-3959 (1995).
7. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
8. Kagan, Y.Y. & Jackson, D.D. Worldwide doublets of large shallow earthquakes. Bull. Seis. Soc. Am.
(submitted).
9. Deng, J.S. & Sykes, L. Stress evolution in southern California and triggering of moderate-, small- and micro-size earthquakes. J. Geophys. Res. 102, 24411-24435 (1997), and references therein.
10. Ellsworth, W.L., Lindh, A.G., Prescott, W.H. & Herd, D.G. in Earthquake Prediction, An International
Review (eds Simpson, D. & Richards, P.) 126-140 (Am. Geophys. Un., Washington, D.C., 1981).
11. Ohtake, M., Matumoto, T. & Latham, G. in Earthquake Prediction, An International Review (eds Simpson
D. & Richards, P.) 53-62 (Am. Geophys. Un., Washington, D.C., 1981).
Without progress no funding
ROBERT J. GELLER
Wyss asserted that without funding at "a scale comparable to the funding of astrophysical research ...
serious progress toward learning how to predict earthquakes" was impossible. However, extensive
prediction efforts in several countries in several eras have all failed. Further allocation of public funds
appears unwarranted unless there are specific and compelling grounds for thinking that a proposed
new prediction programme will be successful.
In my first article in this debate I said that over the past 100 years, and particularly since 1960, there had
been great efforts, all unsuccessful, to find precursory phenomena that could be used to make reliable and
accurate predictions of earthquakes. Wyss claims that this statement is incorrect, but below I would like to
demonstrate its veracity.
A tale of two countries
In 1891 the Nobi (sometimes called Mino-Owari) earthquake caused significant damage in Japan. In
response, the Japanese government established the Imperial Earthquake Investigation Committee in 1892.
Imamura (Ref. 1, p. 346), a well-known seismologist, wrote as follows in 1937: "[The Committee] attacked
with every resource at their command the various problems bearing on earthquake prediction, such as earth
tiltings and earth pulsations, variation in the elements of terrestrial magnetism, variation in underground
temperatures, variation in latitude, secular variation in topography, etc., but satisfactory results were not
obtained".
J.B. Macelwane2, also a leading seismologist of his day (one of the major medals of the American
Geophysical Union is named in his honour), commented as follows in 1946. "The problem of earthquake
forecasting has been under intensive investigation in California and elsewhere for some forty years, and we
seem to be no nearer a solution of the problem than we were in the beginning. In fact the outlook is much
less hopeful."
Thus the existence of prediction research efforts before 1960 is supported by two leading authorities of the
era. One stated that a government body had attacked the prediction problem "with every resource at their
command" without obtaining satisfactory results, and another that "intensive investigations" "in California
and elsewhere for some forty years" had not led to any progress towards prediction.
Only in America?
Wyss says that "no prediction research program existed before the 1970s". Even if the efforts reported by
Imamura and Macelwane were disregarded, Wyss's statement would still be incorrect unless applied only to
work in the US.
Japan's prediction research program started in 19653, and the Soviet prediction research program started in
the Garm "polygon" (test field area for intensive geophysical observations) shortly after the 1948 Ashkhabad
earthquake4. These substantial efforts by qualified professionals should not be ignored just because they were
not in the US or western Europe.
Japan has spent about 2 x 10^11 yen on earthquake prediction since 1965 (Asahi Shinbun newspaper, 10
January 1998), but this programme has been unsuccessful5,6. Before adopting Wyss's suggestion of funding
prediction research "at a scale comparable to the funding of astrophysical research", US government
authorities should find out what went wrong in Japan. The set of possible explanations lies between two
extremes.
1. Owing to some unknown difference, the seismologists in Japan failed where their counterparts in the
US would have succeeded if only they had had comparable funding.
2. The goals and methods of the programme were completely unrealistic.
Needless to say, I think (2) is correct. Nature's Tokyo correspondent appears to share my views7. As Wyss is
implicitly advocating position (1), he should explain his reasons for taking this view.
The bottom line
All of the debaters, including both Wyss and myself, agree that scientifically sound efforts to improve our
knowledge of the earthquake source process should be made. We can be cautiously optimistic that, in the
long run, such work may indirectly contribute to the mitigation of earthquake hazards. However, proposed
work in this area should be evaluated by the normal peer-review process and should not be labelled as
"earthquake prediction" research.
Robert J. Geller
Department of Earth and Planetary Physics,
Graduate School of Science,
Tokyo University,
Bunkyo,
Tokyo 113-0033,
Japan.
[email protected]
References
1. Imamura, A. Theoretical and Applied Seismology. (Maruzen, Tokyo, 1937).
2. Macelwane, J.B. Forecasting earthquakes. Bull. Seism. Soc. Am. 36, 1-4 (1946). (reprinted in
Geophys. J. Int. 131, 421-422, 1997).
3. Kanamori, H. Recent developments in earthquake prediction research in Japan, Tectonophysics 9,
291-300 (1970).
4. Savarensky, E.F. On the prediction of earthquakes. Tectonophysics 6, 17-27 (1968).
5. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
6. Saegusa, A., Japan to try to understand quakes, not predict them. Nature, 397, 284 (1999).
7. Swinbanks, D. Without a way to measure their success, Japanese projects are very hard to stop.
Nature, 357, 619 (1992).
DAVID JACKSON
Scholz omitted crucial parts of the recent history of the seismic gap forecast and test. He remarked
that our test of the gap theory was 'flawed' because it used earthquakes 'smaller than system-sized'. This
was also asserted in a published comment by Nishenko & Sykes1 and answered by Jackson & Kagan2. But
'system-sized' was never defined in the original seismic gap model3.
The model was widely used to estimate potential for earthquakes of magnitude 7.0 and larger, so we used this
threshold in our original test. The results of our test were essentially unchanged if we used larger events
(magnitude 7.5 and above) as recommended by Sykes & Nishenko. More importantly, a revised version of the
seismic gap model has been published4 that is much more specific and defines the magnitude of earthquake
appropriate to each seismic zone. Nishenko deserves much credit for stating the seismic gap model in testable
form. Unfortunately the new gap model also failed5 because it predicted far more earthquakes than observed in
the following five-year period. Now 10 years have elapsed with the same result.
Defining the 'system-sized' magnitude is a fundamental difficulty, not a semantic issue. Small earthquakes are
clearly clustered, but the seismic gap model posits that large 'system-sized' events have the opposite behaviour.
The definition becomes important because some different physics must take over for large events if the gap
hypothesis is true. The same difficulty exists for the sand-pile analogy, whether or not it describes earthquake
behaviour well. Small areas on the surface of a sand pile can suffer 'sandslides' even if they are not locally at a
critical slope, because slope failures above or below can affect them. Scholz's argument that a local area might
become immune by having recently slipped assumes that it is big enough to preclude upslope or downslope
failures. Identifying that particular size requires a knowledge of the state of the whole system, which is not
available in the earthquake analogy. The seismic gap model has no meaning without a definition of 'system-sized', and the model fails with the only specific definition offered so far.
David D. Jackson
Southern California Earthquake Center,
University of California,
Los Angeles, CA 90095-1567
USA
1. Nishenko, S.P. & Sykes, L.R. Comment on 'Seismic gap hypothesis: ten years after' by Y.Y. Kagan and
D.D. Jackson. J. Geophys. Res. 98, 9909-9916 (1993).
2. Jackson, D.D. & Kagan, Y.Y. Reply to Nishenko and Sykes. J. Geophys. Res. 98, 9917-9920 (1993).
3. McCann, W.R., Nishenko, S.P., Sykes, L.R. & Krause, J. Seismic gaps and plate tectonics: seismic
potential for major boundaries. Pure Appl. Geophys. 117, 1082-1147 (1979).
4. Nishenko, S.P. Circum-Pacific seismic potential: 1989-1999. Pure Appl. Geophys. 135, 169-259 (1991).
5. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: five years after. J. Geophys. Res. 100, 3943-3960 (1995).
E-MAIL CONTRIBUTIONS
A case for intermediate-term earthquake prediction: don't throw
the baby out with the bath water!
DAVID BOWMAN & CHARLES SAMMIS
As anyone who has ever spent any time in California can attest, much public attention is being focused
on the great earthquake-prediction debate. Unfortunately, this attention focuses on deterministic
predictions on the day-to-week timescale. But as some of the participants in this debate have pointed
out1,2, current efforts to identify reliable short-term precursors to large earthquakes have been largely
unsuccessful, suggesting that earthquakes are such a complicated process that reliable (and
observable) precursors might not exist. That is not to say that earthquakes do not have some
'preparatory phase', but rather that this phase might not be consistently observable by
geophysicists on the surface. But does this mean that all efforts to determine the size, timing and
locations of future earthquakes are fruitless? Or are we being misled by human scales of time and
distance?
As Robert Geller said in his earlier comments in this debate, 'the public, media and government regard an
"earthquake prediction" as an alarm of an imminent large earthquake, with enough accuracy and reliability to
take measures such as the evacuation of cities'. As Geller has pointed out on many occasions, this goal might
be too ambitious. However, according to the categories of earthquake prediction defined by Ian Main in the
introduction of this debate, most such efforts fall into category 4 (deterministic prediction). But what about
forecasting earthquakes on the year-to-decade scale? Although 'predictions' over this timescale might not
justify such drastic actions as the evacuation of cities, they would certainly give policy-makers as well as
individual citizens sufficient time to brace themselves for the impending event, in much the same way that
California was able to prepare itself for last winter's El Niño. With this paradigm in mind, forecasting on the
year-to-decade scale would be immensely useful.
In recent years there has been the suggestion that even this goal might be inherently impossible. Central to
this argument is the claim by many authors that the crust is in a continuous state of self-organized
criticality2,6 (see also Per Bak's contribution to this debate). In the context of earthquakes, 'criticality' is defined
as a state in which the stress field is correlated at all scales, meaning that at any time there is an equal
probability that an event will grow to any size. If the system exhibits self-organized criticality, it will
spontaneously evolve to criticality and will remain there through dissipative feedback mechanisms, relying
on a constant driving stress to keep the system at the critical state. The implication of this model is that, at
any time, an earthquake has a finite probability of growing into a large event, suggesting that earthquakes are
inherently unpredictable.
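For readers unfamiliar with the concept, the toy system usually invoked to explain self-organized criticality is the Bak-Tang-Wiesenfeld sandpile. The Python sketch below is purely illustrative (it is not taken from any of the models cited in this debate): grains are dropped at random, any site holding four grains topples onto its neighbours, and the resulting avalanches span a very wide range of sizes.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile (illustrative toy only). Grains are
# dropped at random sites; any site holding four or more grains topples,
# passing one grain to each neighbour (grains falling off the edge are lost).
import random

SIZE = 50                                   # lattice dimension (arbitrary choice)
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Add one grain at a random site, relax the pile, return the avalanche size."""
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    topples = 0
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:                  # may already have been relaxed
            continue
        grid[x][y] -= 4
        topples += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topples

sizes = [drop_grain() for _ in range(50000)]
avalanches = [s for s in sizes if s > 0]
print("largest avalanche:", max(avalanches), " mean:", sum(avalanches) / len(avalanches))
```

After an initial transient the pile hovers near the critical state: most added grains do nothing, yet any single grain can occasionally trigger a lattice-spanning avalanche. This is the sense in which a crust in self-organized criticality is argued to make the size of the next event unpredictable.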
However, this is contradicted by recent observations of the evolution of the static stress field after large
earthquakes. In one of the first studies on this subject7 it was found that the 1906 San Francisco earthquake
produced a 'shadow' in the static stress field that seemed to inhibit earthquakes for many years after the M =
7.9 event. After this work, several other studies observed stress shadows after numerous events including the
1857 Fort Tejon8,9 and 1952 Kern County8 earthquakes. An excellent review of these and other observations
of stress shadows after large earthquakes can be found in a recent issue of the Journal of Geophysical
Research special issue on stress triggers, stress shadows and implications for seismic hazard10.
In an earlier comment during this debate, Christopher Scholz discussed these stress shadows in the
framework of self-organized criticality (Fig. 1B in his comment), and mentioned that this concept is
equivalent to the 'seismic gap' hypothesis. However, it should be noted that recent years have seen the
proliferation of models11-17 that describe how the system emerges from these stress shadows. The hypothesis
for this viewpoint (which has come to be known as intermittent criticality) is that a large regional earthquake
is the end result of a process in which the stress field becomes correlated over increasingly long scale-lengths
(that is, the system approaches a critical state). The scale over which the stress field is correlated sets the size
of the largest earthquake that can be expected at that time. The largest event possible in a given fault network
cannot occur until regional criticality has been achieved. This large event then reduces the correlation length,
moving the system away from the critical state on its associated network, creating a period of relative
quiescence, after which the process repeats by rebuilding correlation lengths towards criticality and the next
large event.
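In practice, the intermittent-criticality picture is confronted with catalogues by fitting an accelerating moment (or cumulative Benioff strain) release curve of the form S(t) = A + B(tf - t)^m before a large event, as in refs 16-18. The sketch below is a minimal illustration of such a fit on synthetic data; the parameter values and the use of scipy's curve_fit are assumptions made for the example, not the procedure of those papers.

```python
# Illustrative fit of the accelerating-moment-release (time-to-failure) relation
#   S(t) = A + B * (tf - t)**m,   with B < 0 and 0 < m < 1,
# to a synthetic cumulative Benioff-strain curve. A sketch only; all numbers assumed.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
tf_true, m_true = 10.0, 0.3                          # assumed 'failure' time (yr) and exponent
t = np.sort(rng.uniform(0.0, 9.5, 60))               # synthetic event times
S = 50.0 - 8.0 * (tf_true - t) ** m_true             # synthetic cumulative Benioff strain
S += rng.normal(0.0, 0.3, t.size)                    # observational scatter

def amr(t, A, B, tf, m):
    return A + B * (tf - t) ** m

p0 = [S[-1], -5.0, t[-1] + 1.0, 0.5]                 # crude starting guess
bounds = ([-np.inf, -np.inf, t[-1] + 1e-3, 0.05],    # keep tf after the last event, 0 < m < 1
          [np.inf, 0.0, t[-1] + 10.0, 1.0])
(A, B, tf_fit, m_fit), _ = curve_fit(amr, t, S, p0=p0, bounds=bounds)
print(f"fitted tf ~ {tf_fit:.2f} yr (true {tf_true}), m ~ {m_fit:.2f} (true {m_true})")
```

In real applications the failure time tf and the exponent m are only weakly constrained unless the acceleration is well developed, which is one reason such fits are best treated as probabilistic forecasts rather than predictions.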
The differences between these models for regional seismicity have important consequences for efforts to
quantify the seismic hazard in a particular region. Self-organized criticality has been used as a justification
for the claim that earthquakes are inherently unpredictable2. Models of intermittent criticality, in contrast, do
not preclude the possibility of discovering reliable precursors of impending great earthquakes. Indeed,
several modern models use this concept to predict observable changes in regional seismicity patterns before
large earthquakes16-18. It can be argued that models of intermittent criticality not only hold the promise of
providing additional criteria for intermediate-term earthquake forecasting methods but also might provide a
theoretical basis for such approaches.
Although models of intermittent criticality might promise improved methods for intermediate-term
earthquake prediction, we must be careful not to overstate their claims. Ideally, the scientific community and
the public at large should treat these methods much as they treat weather prediction. It should be
fully expected that forecasts will change through time, in much the same way that the five-day weather
forecast on the evening news changes. However, this will require a fundamental shift in the way we as Earth
scientists think about earthquakes. We must acknowledge that the Earth is a complicated nonlinear system
and that even the best intermediate-term forecasts cannot hold up to the standards imposed by Geller in his
comments earlier in this debate.
David D. Bowman and Charles G. Sammis
Department of Earth Sciences, University of Southern California, Los Angeles, USA.
References
1. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
2. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science
275, 1616-1617 (1997).
3. Sornette, A. & Sornette, D. Self-organized criticality and earthquakes. Europhys. Lett. 9, 197 (1989).
4. Bak, P. & Tang, C. Earthquakes as a self-organized critical phenomenon. J. Geophys. Res. 94, 15635-15637 (1989).
5. Ito, K. & Matsuzaki, M. Earthquakes as self-organized critical phenomena. J. Geophys. Res. 95,
6853-6860 (1990).
6. Main, I., Statistical physics, seismogenesis, and seismic hazard. Rev. Geophys. 34, 433-462 (1996).
7. Simpson, R.W. & Reasenberg, P.A. in The Loma Prieta, California Earthquake of October 17, 1989-Tectonic Processes and Models (ed. Simpson, R.W.) F55-F89 (U.S. Geol. Surv. Prof. Pap. 1550-F,
1994).
8. Harris, R.A. & Simpson, R.W. In the shadow of 1857-the effect of the great Ft. Tejon earthquake on
subsequent earthquakes in southern California. Geophys. Res. Lett. 23, 229-232 (1996).
9. Deng, J. & Sykes, L.R. Evolution of the stress field in southern California and triggering of
moderate-size earthquakes. J. Geophys. Res. 102, 9859-9886 (1997).
10. Harris, R.A. Introduction to special section: stress triggers, stress shadows, and implications for
seismic hazard. J. Geophys. Res. 103, 24347-24358 (1998).
11. Sornette, D. & Sammis, C.G. Complex critical exponents from renormalization group theory of
earthquakes: implications for earthquake predictions. J. Phys. I 5, 607-619 (1995).
12. Saleur, H., Sammis, C.G. & Sornette, D. Renormalization group theory of earthquakes. Nonlin.
Processes Geophys. 3, 102-109 (1996).
13. Sammis, C.G., Sornette, D. & Saleur, H. in Reduction and Predictability of Natural Disasters (SFI
Studies in the Sciences of Complexity vol. 25) (eds Rundle, J.B. Klein, W. & Turcotte, D.L.) 143-156
(Addison-Wesley, Reading, Massachusetts, 1996).
14. Sammis, C.G. & Smith, S. Seismic cycles and the evolution of stress correlation in cellular
automaton models of finite fault networks. Pure Appl. Geophys. (in the press).
15. Huang, Y., Saleur, H., Sammis, C.G. & Sornette, D. Precursors, aftershocks, criticality and self-organized criticality. Europhys. Lett. 41, 43-48 (1998).
16. Bowman, D.D., Ouillon, G., Sammis, C.G., Sornette, A. & Sornette, D. An observational test of the
critical earthquake concept. J. Geophys. Res. 103, 24359-24372 (1998).
17. Jaumé, S.C. & Sykes, L.R. Evolving towards a critical point: a review of accelerating moment/energy
release prior to large and great earthquakes. Pure Appl. Geophys. (in the press).
18. Brehm, D.J. & Braile, L.W. Intermediate-term earthquake prediction using precursory events in the
New Madrid seismic zone. Bull. Seismol. Soc. Am. 88, 564-580 (1998).
E-MAIL CONTRIBUTIONS
On the existence and complexity of empirical precursors
FRANCESCO BIAGI
Earthquake prediction is closely tied to empirical precursors. Despite the results presented in
recent decades in support of the existence of empirical precursors, there is scepticism in the scientific
community about whether they exist1-3. The widespread argument is that the precursor signals reported
are unrelated to earthquake activity and could have occurred by chance. If this were true,
earthquake prediction would not be possible.
Since 1974 our group has been performing research on empirical precursors. Tilts, hydrogeochemicals,
electromagnetic emissions and radiowave disturbances have been investigated. We have reported results for
the Friuli earthquake4 (1976), the Umbria earthquake5 (1979), the Irpinia earthquake6 (1980), the Spitak
earthquake7,8 (1988) and the largest earthquakes that occurred in southern Kamchatka9,10 during the past
decade. Our field measurements and empirical data led us to conclude that there is only an extremely small
possibility that the precursors detected occurred randomly and are unrelated to the earthquakes. But it seems
that the relationship linking earthquakes and premonitory anomalies is very complex and might differ
from one seismogenetic zone to another. Consequently no general rules can be assumed. The following main
aspects can be emphasized:
• there are earthquakes that will produce no precursors in the geophysical and geochemical parameters
of a network, even if the earthquakes are large enough to be considered as potential sources of
precursors;
• there are network sites in which one type of precursor will appear before some earthquakes and not
before others, although these earthquakes could be potential sources of precursors;
• there are different premonitory anomaly forms both at different sites of a network for the same
earthquake and at the same site for different earthquakes.
These and other features are related to the anisotropy of the natural processes and it might therefore not be
possible to eliminate them.
The main problem in using precursors in earthquake prediction is to discover whether in a seismogenetic
area these features are totally random or whether there are significant recurrences. In the first case the
prediction of earthquakes is a null hypothesis; in the second case the prediction of some earthquakes might
be possible.
On the basis of 25 years of field research I believe that a satisfactory solution to this problem is still lacking.
More data must be collected and more geophysical and geochemical parameters must be tested.
Unfortunately, progress in this research area is connected with the occurrence of earthquakes. Many
earthquakes (considered as sources of precursors) are necessary for defining in a meaningful way the
relationship linking earthquakes and precursors in a seismogenetic area, but the occurrence of earthquakes
cannot be planned. As a result, a timescale for resolving the problem cannot be foreseen; it might be
tens of years in the future.
In this context, very few countries still encourage and fund research on precursors. Generally this
research is discouraged, to the point that in Europe any reference to earthquake precursors in a scientific
proposal virtually guarantees that it will not be funded. Reputable and qualified scientists in this field
are therefore boycotted a priori. Is this the right way to conduct science?
Pier Francesco Biagi
Physics Department, University of Bari, Bari, Italy.
References
1. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
2. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science
275, 1616-1617 (1997).
3. Stark, P.B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, 495-499 (1997).
4. Biagi, P.F., Caloi, P., Migani, M. & Spadea, M.C. Tilt variations and seismicity that preceded the
strong Friuli earthquake of May 6th, 1976. Ann. Geofis. 29, 137 (1976).
5. Alessio, M. et al. Study of some precursory phenomena for the Umbria earthquake of September 19,
1979. Nuovo Cim. C 3, 589 (1980).
6. Allegri, L. et al. Radon and tilt anomalies detected before the Irpinia (South Italy) earthquake of
November 23, 1980 at great distances from the epicenter. Geophys. Res. Lett. 10, 269 (1983).
7. Areshidze, G. et al. Anomalies in geophysical and geochemical parameters revealed in the occasion
of the Paravani (M = 5.6) and Spitak (M = 6.9) earthquakes (Caucasus). Tectonophysics 202, 23-41
(1992).
8. Bella, F. et al. Helium content in thermal waters in the Caucasus from 1985 to 1991 and correlations
with the seismic activity. Tectonophysics 246, 263-278 (1995).
9. Bella, F. et al. Hydrogeochemical anomalies in Kamchatka (Russia). Phys. Chem. Earth 23, 921-925
(1998).
10. Biagi, P.F. et al. Hydrogeochemical anomalies in Kamchatka (Russia) on the occasion of the
strongest (M = 6.9) earthquakes in the last ten years. Nat. Hazards (in the press).
Realistic predictions: are they worthwhile?
ANDREW MICHAEL
It is certainly possible to define the reliable prediction of individual earthquakes so narrowly that
success is impossible. For instance, in Main's level 4 he refers to predictions with such precision and
accuracy that a planned evacuation can take place. None of the contributors has suggested that
this is a possibility, and I agree with Wyss that using this straw man as the standard will not lead to a
useful debate. However, Main's levels 2 and 3 may lead to socially useful tools regardless of whether
we call them predictions or probabilistic forecasts.
As Main's extremely accurate short-term predictions are impossible, the public should neither expect to be
saved from calamity by such predictions nor support research based on this expectation. However, further
research into earthquake prediction may well bring real social benefits even if they are less spectacular than
the vision of huge populations in mass exodus.
As discussed by Richard Andrews1, head of the California Office of Emergency Services, earthquakes are
the one natural disaster that currently allows for no advance warning. Storms approach, fires grow, floods
migrate after large rainfalls, but earthquakes can turn a perfectly normal day into a disaster in seconds. So, if
we can make short-term, low-probability forecasts (such as those currently based on foreshock and aftershock
models), what can society do with them?
Raising awareness
There are a number of low cost actions that can have large payoffs if a damaging earthquake occurs. Often
earthquake preparedness plans are not maintained as real world pressures focus attention onto other
problems. Low probability warnings can serve as reminders, like fire-drills, to update plans and review the
state of preparedness. For instance, childcare facilities might check their first aid supplies and review the
parents' emergency contact numbers. Companies might service their emergency generators.
Such actions can be very valuable, even if the predicted event comes later than expected2. Many hospitals
now store medical supplies offsite in order to make more efficient use of their main buildings. However, this
can create problems if an earthquake simultaneously causes casualties and cuts off transportation to the
storage areas. Under a low probability warning, additional supplies can be moved to the hospital at little cost.
Some unusual industrial processes, such as handling nuclear fuel rods, may be more difficult during an
earthquake and can be put off until a time of lower risk.
Low probability warnings also have an advantage over the high-probability, deterministic predictions that
Main gave as one end result. One frequent concern about earthquake prediction is the cost of false alarms.
Extreme actions like evacuations and shutting down businesses have great economic and social costs and
thus false alarms are very troubling. In contrast, low probability forecasts temporarily focus public attention
on reasonable preparedness activities and have not caused problems when carefully carried out in California.
The warnings issued by the California state government include recommendations for specific actions,
giving the public a method of dealing with the advisories without causing problems.
A need to know
Improvements in these low probability predictions might come from a continued search for precursors.
Geller3 suggests that this search has been vigorous but, at least in the US, the large networks of diverse
instrumentation dreamed of during the optimism of the 1960s4 were never realized. The result is that we have
few records of the strain and electromagnetic fields very close to earthquake ruptures. Current strain data
suggest that such measurements cannot observe the earthquake source preparation process from outside the
rupture zone5, but without data from within the rupture zone it is difficult to say that no precursors will be
found.
Even knowing that earthquake prediction is impossible would be useful. Amateur scientists will continue to
predict earthquakes, and without a swift, knowledgeable response from the scientific community these
predictions will do more harm than good6.
Proving that earthquakes are truly unpredictable will help us deal with the problems posed by less scientific
approaches. However, our current understanding of earthquake physics cannot prove this point. For
instance, the majority of contributions to this debate have discussed self-organized criticality models, but
there is no agreement on what they imply for earthquake prediction or whether they are a good explanation for
earthquakes (see contributions from Per Bak, David Bowman & Charles Sammis, and Chris Scholz).
Testing studies
As highlighted by Geller and Wyss, an important concern is the quality of earthquake prediction research. As
Geller points out, we must be more careful to separate hypothesis development from hypothesis testing.
Earthquake prediction research is dominated by case studies, which are good for hypothesis development, but
we often lack the global studies that are necessary for hypothesis tests. As also noted by Geller, Wyss cites
evidence that some earthquakes are preceded by increased seismicity, some by quiescence, and some by neither.
Viewed as case studies, this has led to the development of both activation and quiescence precursors. But,
viewed as a global hypothesis test, this assortment of differing observations suggests completely random
behaviour7.
Unless we can separate out when to expect each behaviour a priori, such precursors are useless. A similar
problem currently exists with those proposing that earthquakes can be predicted with 'time to failure
analysis', a version of the activation hypothesis with its roots in materials science. While many case studies
have been presented, these are all hypothesis development. We now need a good hypothesis test but such
studies are unfortunately rare. Thus we need some way to encourage more researchers to undertake these
critical tests.
Certainly, earthquake prediction is extremely difficult, but it is possible that we will be able to improve our
ability to make low-probability, short-term forecasts and these may be much better for society than the high
probability ones that are most likely impossible. The trick will be to improve the quality of both the data
collected, particularly in the near-source region, and the work done with it.
Andrew Michael
United States Geological Survey, Menlo Park, California, USA
References
1. Andrews, R. The Parkfield earthquake prediction of October 1992: the emergency services response.
Earthquakes and Volcanoes 23, 170-174 (1992).
2. Michael, A. J., Reasenberg, P., Stauffer, P. H. & Hendley, J. W., II. Quake forecasting - an emerging
capability. USGS Fact Sheet 242-95, 2 (1995).
3. Geller, R. J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
4. Press, F. et al. Earthquake prediction: a proposal for a ten year program of research. Ad Hoc Panel
on Earthquake Prediction. (Office of Science and Technology, Washington, D.C., 1965).
5. Johnston, M. J. S., Linde, A. T., Gladwin, M. T. & Borcherdt, R. D. Fault failure with moderate
earthquakes. Tectonophysics 144, 189-206 (1987).
6. Kerr, R. A. The lessons of Dr. Browning. Science 253, 622-623 (1991).
7. Matthews, M. V. & Reasenberg, P. A. Statistical methods for investigating quiescence and other
temporal seismicity patterns. Pure Appl. Geophys. 126, 357-372 (1988).
Sociological aspects of the prediction debate
ROBERT J. GELLER
The question at the heart of this debate appears to be whether earthquake prediction should be recognised as a
distinct and independent research field, or whether it is just one possible research topic in the general field of
study of the earthquake source process. As there are no known grounds for optimism that reliable and
accurate earthquake prediction (as defined in my first article) can be realized in the foreseeable future, the
case for the latter position appears clear-cut. As a consequence there is no obvious need for specialised
organisations for prediction research. Besides the benefits that always accrue from pruning deadwood,
abolition of such organisations would force prediction proponents and critics to confront each other in
common forums, thereby speeding the resolution of the controversy.
Paradigms Lost?
A specialized community of scientists, which has its own journals, meetings, and paradigms, is the arbiter of what is
acceptable in its own field1. Viewed in sociological terms, such groups strive for recognition of their authority from
the broader scientific community. In the long run this recognition depends on whether a community's methods
and theories can successfully explain experiments or observations.
In the short and intermediate term, however, subjective and sociological factors can lead to recognition being
accorded to scientific communities whose paradigms are lacking in merit, or to the needless prolonging of
controversies. Some revisionist historians of science have recently called attention to these sociological aspects of
scientific research (in discussions commonly referred to as 'science wars').
While physical theories are certainly more than arbitrary social conventions, working scientists must admit that there
may be room for improvement of present methods for resolving scientific disputes. The earthquake prediction debate
provides an example of how sociological factors can impede the resolution of a scientific controversy.
Cold fusion: Case Closed
Cold fusion is a case where current methods for resolving controversies worked reasonably well2. Cold fusion
proponents attempted to set up all the trappings of a genuine research field (specialized research institutes,
conferences, journals, funding programs), but once the underlying experiments were shown to be unreliable, the cold
fusion enterprise quickly collapsed.
This episode was basically a success story for science, although relatively large costs were incurred in the evaluation
process before cold fusion was rejected2. One reason the controversy could be efficiently resolved was that much of
the debate was carried out in the open, for example at meetings of scientific societies or in scientific journals.
Consequently the largely positive conclusions reached by cold fusion 'believers' at their own specialized conferences
were not accorded credence by the scientific community as a whole.
Ten years after the first public cold fusion claims, a small band of cold fusion proponents continues to hold out (New
York Times, 23 March 1999). Until fairly recently international cold fusion conferences were still being held3.
Nevertheless, the cold fusion community has clearly failed to convince the scientific community as a whole of the
legitimacy of its claims and methods.
Cold fusion is typical, rather than unique. In all episodes of 'pathological science' there are some credentialed
scientists who hold out indefinitely in support of generally discredited theories4. Debates are resolved when the
mainstream scientific community decides that one side or the other has nothing new to say and treats the discussion
as effectively closed, barring truly new data. Perhaps it is time to consider whether the prediction debate has reached
this point.
Chronic Problems in Geoscience
Geoscience is an observational field and controversies are harder to resolve than in more experimental disciplines.
For example, Wegener's early 20th century evidence for continental drift was widely disregarded because of
objections to the proposed driving mechanism5. It was not until 1967-68 that evidence from paleomagnetism, marine
geophysics and seismology became so clear-cut that the geoscience community generally embraced plate tectonics, of
which continental drift is one consequence.
The dramatic turnabout that led to the acceptance of continental drift has perhaps made geoscientists wary of
resolving other controversies, lest they later be proven wrong. But all such decisions are inherently made on an
interim basis, and controversies can always be reopened if new data are obtained. Allowing controversies such as the
earthquake prediction debate to remain open indefinitely wastes time and energy, thereby slowing scientific progress.
Ironically, the advent of plate tectonics was viewed in the late 1960s and early 1970s as reason for optimism about
earthquake prediction6. This was not wholly unreasonable, as plate tectonics explains why large earthquakes are
concentrated along plate boundaries, and also the direction of earthquake slip. Unfortunately, we now know, as noted
by Jackson, that plate tectonics does not allow either short-term or long-term prediction with success beyond random
chance (although some controversy still lingers; see Scholz and Jackson).
Deconstructing the debate
On the surface the central question in Nature's current prediction debate has been how much funding should be
allocated to 'prediction research'. At the extremes Wyss says as much as is now given to research in astrophysics
while I say none, except through the normal peer-review process; the other debaters hold positions between these.
Wyss and I reach diametrically opposite conclusions despite our agreement that there are no immediate prospects for
reliable and accurate prediction. The reason appears to be that Wyss's implicit starting point is that earthquake
prediction is a legitimate scientific research field, and should be funded as such. On the other hand, I argue that
prediction research is in principle a perfectly legitimate research topic within the field of study of the earthquake
source process (although much prediction research is of too low a quality to warrant funding), but that it is not a
legitimate research field in its own right. One hallmark of a research field is the existence of a widely recognised
journal. It is interesting to note that the journal Earthquake Prediction Research ceased publication in 1986 after only
4 volumes.
Resolving the debate
My point of view leads to a number of specific conclusions. One is that discussion of 'prediction research' at
scientific meetings should be held together with all other talks on the earthquake source process, rather than off in its
own room, attended only by prediction 'believers'. This might make life unpleasant for everyone in the short run, as it
would force prediction proponents and critics into head-on confrontations, but in the long run such discussions,
although sometimes painful for all concerned, would be invaluable for resolving the prediction controversy. Holding
prediction and earthquake source sessions in the same room at the same time would also encourage the development
of common terminology, and would lead to more rapid dissemination of new research results.
The major international body for seismology is the International Association of Seismology and Physics of the Earth's
Interior (IASPEI). One of the working groups under the IASPEI is the 'Subcommission on Earthquake Prediction'.
This and similar bodies were founded 20 or 30 years ago at a time when there was more optimism about prospects for
prediction than exists at present6. The need for such bodies should be re-examined in light of current knowledge of
the difficulties besetting prediction research. Even if such bodies were not abolished, their terms of reference ought to
be redefined to reflect current scientific knowledge.
I emphasise that I have no intention of criticising the officers or individual members of the IASPEI Subcommission
(although I don't share some of their scientific views). Rather my point is that the very existence of a separate body
for 'prediction research' is an impediment to scientific progress, as it tends to cleave 'prediction research' apart from
work on the seismic source in general.
There are many other prediction organisations whose continued existence might usefully be reviewed. Among these
are the various bodies associated with the earthquake prediction program in Japan (see section 5.3 of ref. 6), and the
US National Earthquake Prediction Evaluation Council, which endorsed the unsuccessful Parkfield prediction (see
section 6 of ref. 6). The European Seismological Commission's Subcommission for Earthquake Prediction Research
is another organisation that might merit review.
Just as war is too important to be left to the military, earthquake prediction should not be left only to prediction
proponents and ignored by the rest of the seismological community. Unfortunately this is a generally accurate, albeit
somewhat oversimplified, description of the present situation.
I feel that if special organisations for earthquake prediction were abolished, thereby forcing the prediction debate into
the open, it would be possible to achieve some resolution relatively soon. However, unless this is done, the
earthquake prediction debate appears doomed to linger in its present form almost indefinitely. Anyone comparing my
articles in this debate to that of Macelwane7 in 1946 will be struck by how little has changed. Let us hope that
seismologists in 2049 will not be making similar comments.
Robert J. Geller
Department of Earth and Planetary Physics, Graduate School of Science, Tokyo University, Bunkyo, Tokyo 113-0033, Japan.
email: [email protected]
References
1. Kuhn, T.S., The Structure of Scientific Revolutions. 2nd ed. (University of Chicago Press, Chicago, 1970).
2. Huizenga, J.R. Cold Fusion: The Scientific Fiasco of the Century. (University of Rochester Press, Rochester,
1992).
3. Morrison, D.R.O. Damning verdict on cold fusion. Nature 382, 572 (1996).
4. Langmuir, I. Pathological Science. Physics Today 42(10), 36-48 (1989).
5. Menard, H.W. The Ocean of Truth. (Princeton University Press, Princeton N.J., 1986).
6. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
7. Macelwane, J.B. Forecasting earthquakes. Bull. Seism. Soc. Am. 36, 1-4 (1946). (reprinted in Geophys. J. Int.
131, 421-422, 1997).
MAX WYSS
In his contribution to this debate of 11th March (week 3), Geller makes a significant error. Contrary to his
statement "increased moment release [more small earthquakes than usual]", increased moment release is
mostly due to large (M7) and medium-magnitude (M6) earthquakes, not small ones1-5.
Max Wyss
Geophysical Institute, University of Alaska, Fairbanks, Alaska, USA.
References
1. Varnes, D.J., Predicting earthquakes by analyzing accelerating precursory seismic activity. Pageoph,
130, 661-686 (1989).
2. Sykes, L.R. & Jaume, S.C. Seismic activity on neighboring faults as a long-term precursor to large
earthquakes in the San Francisco Bay area. Nature, 348, 595-599 (1990).
3. Jaume, S.C. & Sykes, L.R. Evolution of moderate seismicity in the San Francisco Bay region, 1850
to 1993: Seismicity changes related to occurrence of large and great earthquakes. J. Geophys. Res.
101, 765-790 (1996).
4. Bufe, C.G., Nishenko, S.P. & Varnes, D.J. Seismicity trends and potential for large earthquakes in
the Alaska-Aleutian region. Pageoph, 142, 83-99 (1994).
5. Sornette, D. & Sammis, C.G. Complex critical exponents from renormalization group theory of
earthquakes: implications for earthquake prediction. J. Phys. I 5, 607-619 (1995).
E-MAIL CONTRIBUTIONS
Stress-forecasting: an apparently viable third strategy
STUART CRAMPIN
All discussions so far have referred (perhaps not surprisingly) to the properties of earthquakes, their
times, locations, nucleation mechanisms, physics of the source, possible precursors, etc. I think this will
lead nowhere. Earthquakes are extraordinarily varied and impossible to average. Perhaps the only
feature of earthquakes that can be relied on is that they release a large amount of stress which,
because rock is weak, has necessarily accumulated over a large volume of rock. If this build-up of
stress can be monitored, then the time and magnitude of the earthquake that occurs when fracture criticality is
reached can be 'stress-forecast'. I suggest that we already know how to do this. The effects
have been seen with hindsight for eight earthquakes worldwide, and the time and magnitude of an
M=5 earthquake have been successfully stress-forecast.
Let me try to introduce a little realism into the debate. Earthquakes are complex. They vary: with magnitude
and direction of stress-field; shape of the fault planes; orientation of fault plane with respect to stress field;
presence or absence of fluids; nature of fluids; fluid-pressure; asperities on fault plane; debris on fault plane;
presence or absence of fault gouge; presence or absence of water channels, pressure seals; height of water
table; temperature; state of Earth tides; state of ocean tides; air pressure; local geology; other earthquakes;
and so on and so on. Each of these phenomena could in certain circumstances have major effects on time,
place, and magnitude of impending earthquakes. Consequently, no two earthquakes are identical (although
seismic records being dominated by the ray path may be very similar).
To understand, model, and accurately predict the behaviour of such a source requires intimate knowledge of
every grain of the fault gouge and every microcrack in the rockmass. This might be possible in theory but in
practice is totally unknowable by tens of orders of magnitude (and similarly beyond the capacity of any
existing or foreseeable computer to model or manipulate, again by tens of orders of magnitude). Earthquake
prediction is not just a difficult subject where more knowledge or funding is required; it is out of our reach
by astronomical-sized factors.
This is the reason why techniques that depend on any feature of the source, or on any source-induced
precursors, understanding of nucleation processes, etc., are not likely to succeed. There is just far too much
heterogeneity, again by tens of orders of magnitude. It is pretty clear by now that there is no magic
formula waiting to be discovered, as some of the discussions seem to imply. So Bob Geller's first entry in
this debate is correct: on the basis of looking at the earthquake source, prediction of the time, place and
magnitude is practically impossible. There is just far too much possible variety.
Consequently, it is hardly surprising that the classical earthquake prediction of time, magnitude and place of
future earthquakes within narrow limits seems impossible. The claims of success listed by Max Wyss seem
extremely shaky. One may wish for something to turn up, as some in this debate have done, but I suggest that
it is clear from any contemplation of the enormous complexity and variability of the earthquake source that
such hopes are futile and not worth wasting time or spending money on.
Can we do anything? I believe we can, but not by examining the source. Rock is weak to shear stress, which
means that the stress released by earthquakes has to accumulate over enormous volumes of rock. Perhaps
hundreds of millions of cubic kilometres before an M=8 earthquake. There is mounting direct and indirect
evidence1-4 that changes in seismic shear-wave splitting (seismic birefringence) can monitor the necessary
build up of stress almost anywhere in the vast stressed rockmass before the earthquake can occur.
Most rocks in the crust contain stress-aligned fluid-saturated grain-boundary cracks and pores1. These are the
most compliant elements of the rockmass and their geometry is modified by the build up of stress2,3,5.
Variations in seismic shear-wave splitting reflect changes of crack geometry, and hence can monitor the
build-up of stress before earthquakes2 and the release of stress at the time of (or in one case shortly before)
the earthquake. Such changes have been identified with hindsight before three earthquakes in the USA, one in
China3,5, and now routinely before four earthquakes in SW Iceland6 (please see these references for further
details of these studies).
The interpretation of these changes in shear-wave splitting is that stress builds up until the rock reaches
fracture criticality when the cracking is so extensive that there are through-going fractures (at the percolation
threshold) and the earthquake occurs2,6. The rate of increase of stress can be estimated by the changes in
shear-wave splitting, and the level of fracture criticality from previous earthquakes. When the increasing
stress reaches fracture criticality the earthquake occurs. Magnitude can be estimated from the inverse of the
rate of stress increase6: for a given rate of stress input, if stress accumulates over a small volume the rate is
faster but the final earthquake smaller, whereas if stress accumulates over a larger volume the rate is slower
but the earthquake larger.
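A purely schematic Python sketch of this logic follows: fit the rate of increase of normalized shear-wave time delays, extrapolate to an assumed fracture-criticality level to estimate the time, and read an event size off an assumed inverse relation between rate and magnitude. Every number in it (the synthetic data, the criticality threshold, the rate-magnitude calibration) is an illustrative placeholder rather than a value from refs 2 or 6.

```python
# Schematic stress-forecast bookkeeping only; the data, the fracture-criticality
# level and the rate-magnitude calibration are placeholders, not values from
# Crampin et al.
import numpy as np

days = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])       # observation times
delays = np.array([4.0, 4.6, 5.1, 5.8, 6.3, 6.9])            # synthetic normalized time delays (ms/km)

rate, intercept = np.polyfit(days, delays, 1)                 # rate of increase of splitting delay
CRITICAL_DELAY = 9.0                                          # assumed fracture-criticality level
t_forecast = (CRITICAL_DELAY - intercept) / rate              # day on which criticality would be reached

# Assumed, purely illustrative calibration: slower build-up implies a larger
# stressed volume and hence a larger final event.
magnitude = 5.0 + np.log10(0.03 / rate)

print(f"rate ~ {rate:.3f} (ms/km)/day; criticality near day {t_forecast:.0f}; M ~ {magnitude:.1f}")
```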
As of 17th March, 1999, one earthquake has been successfully stress-forecast in real time, giving the time and
magnitude of an M=5 earthquake in SW Iceland6. Non-specific stress-forecasts were issued to the Icelandic
National Civil Defence Committee on the 27th and 29th October, 1998. The final time-magnitude window (a
window is necessary because of uncertainties in estimates) on 10th November, 1998, was an M>=5 soon or, if
stress continued to increase, an M>=6 before the end of February 1999. Three days later (13th November,
1998), there was an M=5 earthquake within 2 km of the centre of the three stations where changes in shear-wave splitting were observed. We claim this is a successful real-time stress-forecast, as anticipated from the
behaviour noted with hindsight elsewhere. Shear-wave splitting does not indicate potential earthquake
locations, but analysis of local seismicity by Ragnar Stefánsson correctly predicted the small fault on which
the stress-forecast earthquake occurred. It appears that monitoring the build up of stress before earthquakes
can forecast the time and magnitude of impending earthquakes.
Three comments about stress-forecasting:
1. Stress-forecasting seems to give reasonable estimates of time and magnitude but gives little or no
information about location, where perhaps Bob Geller's stochasticism takes over. However, as Chris
Scholz says, "earthquake prediction is always local". If it is known that a large earthquake is going to
occur (that is when there has been a stress-forecast), local information may be able to indicate the
fault that will break, as happened in Iceland.
2. Stress-forecasting was possible in SW Iceland only because of the unique seismicity of the onshore
transform-zone of the Mid-Atlantic Ridge where nearly-continuous swarm activity provided
sufficient shear-waves to illuminate the rockmass. Routine stress-forecasting elsewhere, without such
swarm activity, would require controlled-source seismology.
3. The phenomena we are observing are not precursors. Apart from the decrease in stress at the time of
the earthquake, the effects are independent of the earthquake source parameters. Shear-wave splitting
monitors a more fundamental process, the effects of the build up of stress on the rockmass, which
allows the estimation of the rate of increase and the time when fracture criticality is reached.
For reasons not fully understood, but probably to do with the underlying critical nature of the non-linear
fluid-rock interactions4,7, the effect of the stressed fluid-saturated microcracks on shear-waves is remarkably
stable1-3,5.
We see exactly the same behaviour before the 1996 Vatnajökull eruption in Iceland as we see before
earthquakes. About 1 cubic kilometre of magma was injected into the crust over a five-month period. The
major difference from an earthquake is that the stress was not abruptly released by the eruption, as it
would have been by an earthquake; instead, following the eruption the stress relaxed over a period of several years as
it was accommodated by a spreading cycle of the Mid-Atlantic Ridge.
I suggest that monitoring the build-up of stress is a third strategy for predicting earthquakes, beyond the two - detecting precursors, and detailed modelling of earthquake physics - suggested by Dave Jackson. Like many
features of shear-wave splitting, it appears to be comparatively stable and to have considerable accuracy
in forecasting time and magnitude.
Stuart Crampin
Centre for Reservoir Geoscience, Department of Geology & Geophysics, University of Edinburgh, Grant
Institute, West Mains Road, Edinburgh EH9 3JW, SCOTLAND
email: [email protected].
References
1. Crampin, S. The fracture criticality of crustal rocks. Geophys. J. Int. 118, 428-438 (1994).
2. Crampin, S. & Zatsepin, S. V. Modelling the compliance of crustal rock, II - response to temporal
changes before earthquakes. Geophys. J. Int. 129, 495-506 (1997).
3. Crampin, S. Calculable fluid-rock interactions. J. Geol. Soc. (in the press) (1999).
4. Crampin, S. Shear-wave splitting in a critical crust: the next step. Rev. Inst. Fran. Pet. 53, 749-763
(1998).
5. Crampin, S. Stress-forecasting: a viable alternative to earthquake prediction in a dynamic Earth.
Trans. R. Soc. Edin., Earth Sci. 89, 121-133 (1998).
6. Crampin, S., Volti, T. & Stefansson, R. A successfully stress-forecast earthquake. Geophys. J. Int. (in
the press) (1999).
7. Crampin, S. Going APE: II - The physical manifestation of self-organized criticality. 67th Ann. Int.
Mtg. SEG, Dallas, Expanded Abstracts, 1, 956-959 (1997).
E-MAIL CONTRIBUTIONS
ZHONGLIANG WU
Testing hypotheses is an essential part of earthquake prediction. The 'game rule' associated with this
test is especially important because it leads to the criteria for accepting or rejecting a statistical
hypothesis related to earthquake prediction.
Various studies have been carried out to measure prediction efficiency, to formulate hypothesis tests, and to
verify prediction schemes (for example refs 1-5). Up to now, however, the 'game rules' have not paid enough
attention to an important problem, specifically that earthquakes are different from one another. In the
statistical test, it often happens that all earthquakes within a magnitude-space-time range are treated as the
same, which has no sound geophysical basis.
The reason for making this argument is that earthquakes differ in their source process and tectonic
environment, and can be divided into classes. A given precursor will be valid only for a certain class
of earthquakes. For example, if tidal triggering is regarded as a potential earthquake precursor, then it
must be borne in mind that such a triggering mechanism is significant only for earthquakes of normal-fault type.
In contrast, for thrust and strike-slip earthquakes, such a triggering effect is not significant6.
There are three general cases associated with earthquake prediction: successful predictions, false-alarms, and
failures-to-predict. A 'game rule' is mainly a comparison of the performance of a prediction approach with
that of random prediction, according to the normal rate of seismicity3,4. If earthquakes can be classified into
different categories, then false-alarms and failures-to-predict have a different physical significance. For any
specific precursor, failure-to-predict some earthquakes is inevitable because the precursor under
consideration is not valid for all classes of earthquakes. In the study of potential precursors, therefore, an
appropriate research strategy is to suppress the false-alarms and to tolerate the failures-to-predict*.
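To make the 'game rule' concrete, here is a minimal sketch of the kind of comparison formalized in refs 3 and 4 (all numbers are invented for illustration): if declared alarms cover a fraction of the study period and some of the target earthquakes fall inside them, the chance of doing at least as well with random, seismicity-blind alarms can be scored with a binomial tail probability.

```python
# Minimal 'game rule': score a prediction record against a random-alarm null
# (in the spirit of refs 3 and 4). All numbers are invented for illustration.
from math import comb

def chance_probability(n_hits, n_events, alarm_fraction):
    """P(at least n_hits of n_events fall inside alarms covering alarm_fraction of the period, by chance)."""
    return sum(
        comb(n_events, k) * alarm_fraction**k * (1 - alarm_fraction) ** (n_events - k)
        for k in range(n_hits, n_events + 1)
    )

N_EVENTS = 12          # target earthquakes in the test region and period (invented)
N_HITS = 7             # how many fell inside declared alarms (invented)
ALARM_FRACTION = 0.25  # fraction of the period covered by alarms (invented)

p = chance_probability(N_HITS, N_EVENTS, ALARM_FRACTION)
print(f"hits {N_HITS}/{N_EVENTS}, alarm coverage {ALARM_FRACTION:.0%}, chance probability {p:.4f}")
# The N_EVENTS - N_HITS failures-to-predict would be weighted differently if,
# as argued here, only some classes of earthquake can carry the precursor.
```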
The classification of earthquakes by the physics of their seismic source is far from complete, and more
detailed studies on the source process of earthquakes and seismogenesis are needed. Furthermore, we do not
know the exact role of slow earthquakes7,8 and aseismic slip9 in the dynamics of earthquakes. In the future,
seismologists may provide an earthquake catalogue with a classification of earthquake sources for the statistical testing of
earthquake prediction schemes.
Although at present we do not have such a catalogue, it is clear that the assumption that all earthquakes are
the same will lead to a harsh 'game rule' in evaluating the performance of an earthquake prediction scheme.
An extreme example is Geller's claim10 that time and effort need not be wasted on evaluating prediction
schemes that cannot outperform Kagan's 'automatic alarm' strategy11. If earthquakes were all the same, then
this claim would be absolutely reasonable. However, from the perspective of the classification of
earthquakes, such a stance might lead to the loss of some useful information.
Geller et al.12 proposed that the question of precursor testing can be addressed using a Bayesian approach in which
each failed attempt at prediction lowers the a priori probability for the next attempt. In this case, if all
earthquakes are treated as the same and no difference is made between failures-to-predict and false-alarms,
the 'game rule' will be extremely harsh, and many potential precursors will be rejected by this 'razor'.
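A minimal numerical illustration of this 'razor' (the prior and likelihood values below are assumed; this is not code from ref. 12): under Bayesian updating, a handful of consecutive failures rapidly drives down the probability that a method has real skill, and the outcome depends entirely on what is counted as a failure.

```python
# Illustrative Bayesian updating after successive failed predictions.
# The prior and the two likelihoods are assumed values, chosen only to show the effect.
P_WORKS = 0.5           # prior probability that the method has real skill
P_FAIL_IF_WORKS = 0.3   # chance that a genuinely skilful method still misses a target
P_FAIL_IF_NOT = 0.9     # chance of a miss if the method has no skill

p = P_WORKS
for failure in range(1, 6):                       # five consecutive failed predictions
    numerator = P_FAIL_IF_WORKS * p
    p = numerator / (numerator + P_FAIL_IF_NOT * (1 - p))
    print(f"after failure {failure}: P(method works) = {p:.3f}")
# If some of these 'failures' involve earthquake classes the precursor was never
# claimed to cover, counting them as evidence against the method overstates the razor.
```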
From this point of view, it is too early to accept the conclusions that the search for earthquake precursors has
proved fruitless and earthquakes cannot be predicted11,12.
At the other extreme, the disregard by some proponents of earthquake prediction of the differences between
earthquakes, and their attempts to 'improve' the performance of the proposed precursors (that is, to decrease
both the rate of false-alarms and the rate of failures-to-predict), have led in recent years to too many
declarations of prediction, which in turn lead to too many meaningless false-alarms.
It is interesting that one of the pioneering works in seismology is the classification of earthquakes by R. Hoernes
in 1878. 120 years later, we have almost forgotten this 'classical' problem and treat all earthquakes alike in
our statistical tests. On the other hand, a constructive contribution of the present 'game rule' is that it moves
toward an objective test, which is important in earthquake prediction studies. Even when we have a catalogue with
an appropriate classification, some of these basic principles will still be valid.
Zhongliang Wu
Institute of Geophysics, China Seismological Bureau, Beijing, China.
References
1. Wyss, M. Evaluation of Proposed Earthquake Precursors. (American Geophysical Union,
Washington, D. C., 1991).
2. Wyss, M. Second round of evaluation of proposed earthquake precursors. Pure Appl. Geophys. 149, 3-16 (1997).
3. Stark, P. B. A few statistical considerations for ascribing statistical significance to earthquake
predictions. Geophys. Res. Lett. 23, 1399-1402 (1996).
4. Stark, P. B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, 495-499 (1997).
5. Kagan, Y. Y. Are earthquakes predictable? Geophys. J. Int. 131, 505-525 (1997).
6. Tsuruoka, H., Ohtake, M. & Sato, H. Statistical test of the tidal triggering of earthquakes:
contribution of the ocean tide loading effect. Geophys. J. Int. 122, 183-194 (1995).
7. Kanamori, H. & Hauksson, E. A slow earthquake in the Santa Maria Basin, California. Bull. Seism.
Soc. Am. 85, 2087-2096 (1992).
8. Kanamori, H. & Kikuchi, M. The 1992 Nicaragua earthquake: a slow tsunami earthquake associated
with subducted sediments. Nature 361, 714-716 (1993).
9. Dragoni, M. & Tallarico, A. Interaction between seismic and aseismic slip along a transcurrent plate
boundary: a model for seismic sequences. Phys. Earth Planet. Interiors 72, 49-57 (1992).
10. Geller, R. J. Earthquake prediction: a critical review. Geophys. J. Int. 131, 425-450 (1997).
11. Kagan, Y. Y. VAN earthquake predictions - an attempt at statistical evaluation. Geophys. Res. Lett.
23, 1315-1318 (1996).
12. Geller, R. J., Jackson, D. D., Kagan, Y. Y. & Mulargia, F. Earthquakes cannot be predicted. Science
275, 1616-1617 (1997).
* Such a research strategy does not conflict with the ethics of seismological study. To tolerate the failures-to-predict does not mean that seismologists are irresponsible. Comparing the study of earthquake prediction to
the study of medicine, it is unreasonable to require a technique or instrument to be able to diagnose all
diseases. Similarly, it is not rational to require that an earthquake precursor is valid for all kinds of
earthquakes. A decrease in failures-to-predict can be achieved by discovering new precursors which are valid
for other kinds of earthquake.
CHRISTOPHER SCHOLZ
Geller and Jackson have both reproached me for not citing all of the Jackson and Kagan papers in my
earlier statement. Space requirements did not allow for a fuller discussion at that time.
The 'seismic gap' hypothesis is nothing more than a restatement of Reid's elastic rebound theory. Is it incorrect?
This theory applies only to system-size events, which, rather than being undefined, as suggested by Jackson, are
defined in the case of subduction zones by the seismically coupled down-dip width, which can be determined from
the areal extent of large earthquakes in the region.
The problem is that this is geographically quite variable, ranging from 50 km (M 7.3) to 200 km (M 8.4). So
arbitrarily assuming a constant magnitude cut-off of 7.0 (ref. 1) or 7.5 (ref. 2) will always include some events too small to
qualify, this being doubly so because the Gutenberg-Richter relation ensures that the catalogue will be
dominated by events near the lower size cut-off. Hence with that procedure one can expect too many events in
'safe' zones, which was the result of refs 1 and 2, although, as expected, there were fewer discrepancies when the
higher magnitude cut-off was used. This was the flaw I pointed out in my first contribution to these debates.
Thus the elastic rebound theory was not properly tested.
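This point is easy to illustrate numerically. The following sketch is purely illustrative (it assumes a Gutenberg-Richter b-value of 1 and an arbitrary cut-off of M 7.0, neither of which is taken from the studies discussed here): it draws magnitudes from the corresponding exponential distribution above the cut-off and shows that roughly two thirds of the sampled events fall within half a magnitude unit of the cut-off.

import numpy as np

# Illustrative sketch only: with a Gutenberg-Richter b-value of ~1,
# magnitudes above a cut-off Mc are exponentially distributed, so a
# catalogue truncated at Mc is dominated by events only slightly
# larger than Mc (the point made in the paragraph above).
rng = np.random.default_rng(0)
b = 1.0                     # assumed b-value
Mc = 7.0                    # assumed lower magnitude cut-off
beta = b * np.log(10.0)
mags = Mc + rng.exponential(1.0 / beta, size=100_000)

frac_near_cutoff = (mags < Mc + 0.5).mean()
print(f"fraction within 0.5 units of the cut-off: {frac_near_cutoff:.2f}")
# expected value: 1 - 10**(-b*0.5), roughly 0.68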
In their more recent study3, they found, in contrast, fewer events than predicted by Nishenko4. But here the
failure was in a different part of the physics: the assumptions of recurrence times made by Nishenko. These
recurrence times are based on very little data, no theory, and are unquestionably suspect. But this failure needs
to be separated from a failure of the elastic rebound theory, which would lead us to contemplate weird physics.
When conducting such statistical tests, it is important to keep aware of what, in the physics, one is testing.
Christopher H. Scholz
Lamont-Doherty Earth Observatory
Columbia University
Palisades, NY 10964
References
1. Kagan, Y.Y. & Jackson, D.D. Seismic gap hypothesis: ten years after. J. Geophys. Res. 96, 21419-21431 (1991).
2. Jackson, D.D. & Kagan, Y.Y. Reply to Nishenko and Sykes. J. Geophys. Res. 98, 9917-9920 (1993).
3. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: five years after. J. Geophys. Res. 100, 3943-3960 (1995).
4. Nishenko, S.P. Circum-Pacific seismic potential: 1989-1999. Pure Appl. Geophys. 135, 169-259 (1991).
The need for objective testing
ROBERT J. GELLER
Wyss's letter in week 5 of this debate claims that I made a 'significant error' in my week 3 article. I
explain here why this claim should not be accepted, placing my rebuttal in a more general context.
Testable algorithms required
All readers have undoubtedly studied Newton's law of gravitational attraction:
F = Gm₁m₂ / r²
It took a great scientist to discover this law, but any competent scientist can use it. Because this law is
quantitative, it can easily be checked against observed data, thus permitting it to be verified or disproved. This
precision is what led to the discovery that it is only approximately correct, as significant discrepancies were
found for cases where relativistic effects are important.
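As a trivial numerical illustration of what 'easily checked against observed data' means in practice (the example and constants below are standard physical values, not drawn from this debate), the law can be evaluated directly and compared with the measured acceleration of gravity at the Earth's surface.

# Newton's law evaluated with standard constants and compared with the
# measured surface gravity of ~9.81 m/s^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
r_earth = 6.371e6    # mean radius of the Earth, m

g_predicted = G * M_earth / r_earth**2
print(f"predicted g = {g_predicted:.2f} m/s^2 (observed ~9.81 m/s^2)")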
In contrast, prediction proponents provide retrospective 'case studies' that are represented as showing the
existence of some precursory phenomenon before some particular earthquake. Unlike Newton's law, there is no
formula or algorithm that can be objectively tested using other data. This shortcoming is crucial.
Prediction proponents should, but do not, provide a 'predictor in the box' software package, with all parameters
fixed. We could then input any data of the type considered by the method being tested (for example, seismicity, geoelectrical or geodetic data), using either real-time data or recorded data from regions other than that used to
develop the algorithm. The software package would then generate predictions that could be tested against an
intelligent null hypothesis such as the automatic alarm strategy1.
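To make this concrete, here is a minimal sketch of how such a fixed-parameter test could be organized. Everything below (the catalogue format, the alarm window, the magnitude thresholds, and the particular form chosen for the automatic alarm strategy) is an illustrative assumption, not an algorithm proposed anywhere in this debate; a real test would plug a candidate 'predictor in the box' in place of the null strategy and compare the two on the same hit-rate versus alarm-time trade-off.

import numpy as np

def automatic_alarm(times, mags, now, m_trigger=5.0, window=30.0):
    """Null strategy (one simple version): an alarm is on for `window`
    days after any event of magnitude >= m_trigger, exploiting the
    clustering of earthquakes.  `times` must be sorted."""
    recent = times[(mags >= m_trigger) & (times <= now)]
    return recent.size > 0 and (now - recent[-1]) <= window

def score(alarm_func, times, mags, m_target=6.0, step=1.0):
    """Return (fraction of target events caught by an alarm,
    fraction of time the alarm was on)."""
    t_grid = np.arange(times.min(), times.max(), step)
    alarm_on = np.array([alarm_func(times, mags, t) for t in t_grid])
    targets = times[mags >= m_target]
    hits = 0
    for t in targets:
        idx = min(max(np.searchsorted(t_grid, t, side='right') - 1, 0),
                  len(t_grid) - 1)
        hits += bool(alarm_on[idx])
    return hits / max(targets.size, 1), alarm_on.mean()

# Synthetic catalogue purely for demonstration.
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 3650, size=500))   # ten years of events
mags = 4.0 + rng.exponential(0.5, size=500)
print(score(automatic_alarm, times, mags))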
Wyss's criticism rebutted
In his week 1 article, Wyss said that "some [earthquakes] are preceded by increased moment release during the
years before them, and some are preceded by seismic quiescence."
In my week 3 article I commented as follows. "Wyss's contribution to this debate cites both 'increased moment
release' (more small earthquakes than usual) and 'seismic quiescence' (fewer small earthquakes than usual) as
precursors. Thus it appears that any variability whatsoever in background seismicity can be claimed, after the
fact, to have been a precursor. To determine whether these variations in seismicity levels are random
fluctuations or real physical phenomena, objective testing of unambiguously stated hypotheses is required."
Wyss claims that this is "a significant error", because "increased moment release is mostly due to large (M7)
and medium magnitude (M6) earthquakes, not small ones." Here he has defined "large", "medium," and "small"
in a specific way. The "significant error", if any, that I made was perhaps to be insufficiently precise in defining
the size of earthquakes. Similar criticisms could be levelled at most of the articles in this debate, which are
written in a quasi-journalistic style to make them easily accessible to readers.
Needed: more statistics, less rhetoric
Individual case studies, rather than objective tests of a hypothesis, are generally presented in support of the
'increased moment release' model. One recent study2 attempts to make progress towards objective testing, but
this study's null hypothesis does not appear realistic (see ref. 3), and the statistical testing does not appear to
account for the effect of retroactively adjustable parameters (see ref. 4).
Another recent study5, which appears to have been carefully conducted, found that a random (Poisson) model
outperformed the increased moment release model (sometimes also known as 'time-to-failure' analysis) by a
factor of four for the particular dataset that was analyzed. (Note that Wyss cited ref. 2 in his Week 1 article, but
did not cite ref. 5 in any of his three articles (week 1, 3, 5) to date.)
In my week 3 article I said that many prediction proponents appeared to have fallen into 'The Gambler's
Fallacy' of drawing conclusions from a retrospectively chosen unrepresentative subset of a much larger dataset.
The hypotheses of 'seismic quiescence' and 'increased moment release' both appear to be examples of this
pitfall. If their proponents disagree with me, they should attempt to refute me, not by quibbling over the
definitions of 'small', 'medium', and 'large', but rather by objective statistical testing.
Robert J. Geller
Department of Earth and Planetary Physics,
Graduate School of Science,
Tokyo University,
Bunkyo, Tokyo 113-0033,
Japan.
[email protected]
References
1. Kagan, Y. VAN earthquake predictions: an attempt at statistical evaluation. Geophys. Res. Lett. 23,
1315-1318 (1996).
2. Bowman, D.D., Ouillon, G., Sammis, C.G., Sornette, A. & Sornette, D. An observational test of the
critical earthquake concept. J. Geophys. Res. 103, 24,359-24,372 (1998).
3. Stark, P.B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, 495-499 (1997).
4. Mulargia, F. Retrospective validation of the time association of precursors. Geophys. J. Int. 131, 500-504 (1997).
5. Gross, S. & Rundle, J. A systematic test of time-to-failure analysis. Geophys. J. Int. 133, 57-64 (1998).
More agreement than division
MAX WYSS
As we all agree that we know little about how earthquakes initiate and how to predict them, it follows
that we will study the problem and eventually reach a relatively satisfactory solution. The question is,
will we do it with the significant financial support and expedience this humanitarian effort deserves, or
will we force individual scientists to do it in their non-existent spare time?
Definition of earthquake prediction
The attempt to define earthquake prediction so narrowly (a time resolution of a few days) that it is bound to fail, and can thus be declared generally impossible, is a red herring. If I'm not mistaken, nobody in the debate has disagreed with the point, made by many, that there are significant benefits to be derived from intermediate- and long-term predictions, even if they are made with relatively low probabilities. In that case, let us see if we cannot make progress in formulating rigorous, yet consumer-friendly, statements describing the time-dependent earthquake hazard in some locations. That is, predict earthquakes.
Quality of research
There are two types of influence that degrade the quality of research into earthquake prediction. The first is emotional: limelight seekers are attracted to this field, and others are electrified into hasty work by the idea of coming to the rescue of the populace. The second is lack of financial support. A researcher who explores a hypothesis exclusively after regular working hours is likely to present only the 'best' results, instead of exploring all the possible, but marginal, data sets that may exist.
Thus, our weapons in the struggle for high quality work are threefold:
1. Funding the research at an adequate level such that the most capable scientists are attracted to this
field; such that a researcher has the time to penetrate to the greatest depth allowed by a data set; and
such that the necessary high quality data are available.
2. Rigorous peer review of research proposals.
3. Stringent reviews of journal articles.
We should use all of these tools to foster high quality earthquake prediction research.
Case histories are often all we have
Very large earthquakes occur too infrequently to test hypotheses on how to predict them with the statistical
rigor one would like (for example, L. Knopoff's contribution to this debate), and potential data sets for testing are further reduced by the need to separate different tectonic settings (see Z. Wu's contribution to this debate). In addition, most earthquakes occur far from existing dense instrumentation networks, making it
impossible to gather data pertinent to most hypotheses.
Thus sets of case histories in which the individual cases may number about a dozen will be all we will have
for years to come, whether we like it or not. However, I do not see this as a reason to give up prediction
research or rupture initiation studies altogether, as long as we have not exhausted the data available. As it is,
we have hardly scratched the surface, because of lack of funding.
Here we go again
"extensive prediction efforts in several countries in several eras have all failed" (Geller, this debate).
Such exaggerations know no bounds. What "several eras" has the fledgling discipline of seismology seen?
I'm utterly unimpressed by the quotes Geller used to support his assertion in week four of this debate,
because these quotes stem from the years 1937 and 1945. No wonder satisfactory results concerning the problem of earthquake prediction were not achieved, even though the problem was "attacked with every resource at their command": researchers of that era had essentially no resources, had not even classified earthquakes as to type and size, and had not yet understood the reason for earthquake ruptures on the planet.
The most basic tool of seismologists is the seismograph network. Rudimentary networks first came into
existence in a few locations in the 1930s. A worldwide network was installed in the mid-1960s, and anyone
who wishes to analyze high resolution earthquake catalogues produced by dense networks cannot start their
data set before the 1980s, because up to that time the data were so poor. Thus the researchers around 1940,
whom Geller quotes, had hardly an opportunity to catch a glimpse of the distribution of earthquakes in space,
time and as a function of size. They were in no position to conduct serious prediction research. They did not
have even the most basic, let alone sophisticated tools.
In addition, the reason for fault ruptures (earthquakes) on this planet was only discovered in the late 1960s.
Only then did it become clear that the planet cools by moving heat, generated in its interior by radioactive
decay, by convection to the surface, where brittle plates are pushed past one another, generating earthquakes.
Preoccupied with consolidating this discovery for about a decade, seismologists spent no time on prediction research, and plans drawn up for such a program remained largely unimplemented (see A. Michael's contribution to this debate).
It is clear that in the US there was never a serious research program for earthquake prediction. There was, however, a thorough seismology program to detect and discriminate nuclear explosions in the USSR and China, which was very successful because, being well funded, it attracted the best workers in the field.
Separation of sub-disciplines
Geller seems preoccupied with the separation of sub-disciplines. Most researchers and educators try to combat the barriers that constantly appear between sub-disciplines, but it is difficult to keep the channels of
communication open. Of course the problem of earthquake prediction is intimately intertwined with those of
the seismic source processes, of tectonics and self organized criticality. In addition, laboratory experiments
on rock fracture, computer simulation of faulting, crustal deformation measurements and modelling, as well
as studies of ground water properties are all important for, and applicable to, problems in prediction research.
It would be beneficial if, as Geller suggests, an audience of such wide expertise were in one room at a conference. For one thing, Geller himself would then not make such elementary mistakes as confusing increased seismic moment release with "more small earthquakes." However, as we all know, wide participation in specialist lectures at conferences is unrealistic. People cannot be forced to attend. To foster interactions between sub-disciplines, one must make the special effort of organizing interdisciplinary retreat-meetings.
Thus, I disagree with Geller when he sees a need for organizational changes. The boundaries of sub-disciplines establish themselves and there is no particular evil associated with them. However, I agree that more frequent retreat-meetings, attended by experts from a wide range of fields, are needed to advance earthquake prediction research.
Conclusion
I conclude that the criticism of earthquake prediction research has some worthy targets: the low quality work
and the exaggerated claims that exist in this field. I hope we can reduce the originators of these problems to a
small minority (they will never completely disappear). However, when the criticism takes on the character of
a crusade, which tries to outlaw earthquake prediction research, many of us grow a bit tired of the "debate".
Max Wyss
Geophysical Institute, University of Alaska, Fairbanks, Alaska, USA.
E-MAIL CONTRIBUTIONS
Although this debate is now closed, the following contribution makes an interesting counterpoint to
Ian Main's concluding remarks and so was held over for posting in this final week.
DIDIER SORNETTE
Predicting earthquakes requires an understanding of the underlying physics, which calls for novel
multidisciplinary approaches at a level never yet undertaken. Notwithstanding past efforts in several
countries over the last few decades, I fail to see that the scientific community has used the full potential of artificial/computational intelligence, statistical physics, supercomputer modelling and large-scale monitoring of a full spectrum of physical measurements, coupled with more traditional seismological and geological approaches, to make a dent in the earthquake problem. What we have
learned is that past failures in earthquake prediction reflect the biased view that it was a simple
problem.
The alchemy of earthquakes
Paradoxes, misunderstandings and controversies often appear when we are restricted to the 'narrow' window of our present knowledge. Consider the importance that Sir Isaac Newton attributed to alchemy as his primary research field, which led to the provocative statement by Geller in the first week of this debate that "Earthquake prediction seems to be the alchemy of our times".
The lesson I personally take from this example is that Newton was fundamentally right to expect that physical processes could lead to the transmutation of one element into another. However, physics and technology were not sufficiently advanced in his time, and science had to wait for Becquerel and the Curies to open the modern 'alchemy' (nuclear science) era. The question then boils down to the fact that Newton wasted his time pursuing a (valid) goal which was, however, out of his reach.
Similarly, we need fundamentally new approaches to understanding what earthquakes are, but hopefully less time will be needed to understand the 'alchemy of earthquakes', simply because we are so much better armed and science is progressing so much faster than ever before. I consider the understanding of earthquakes to be a prerequisite for assessing the potential for prediction, for two reasons. Simple 'black box' pattern recognition techniques have been tried repeatedly and have shown limited success, probably in part owing to the poor quality and scarcity of the data. A fundamental understanding of earthquakes, not only of the source problem but of the full seismic cycle, is thus called for.
Only such an understanding could lead us to a quantitative assessment of the potentials and limitations of
earthquake prediction, as chaos and dynamical system theory have helped in understanding (some of) the
limits of weather forecasting. We are very far behind meteorology for two reasons:
1. We still have very limited precise quantitative measurements of the many parameters involved.
2. The physical phenomena underlying earthquakes are much more intricate and interwoven, and we do not have a fundamental Navier-Stokes equation for the organization of the crust.
It is thus too early to state anything conclusive about the fundamental limitation of earthquake prediction.
Mechano-chemistry
Earthquakes are indeed very poorly understood. The standard theory is based on the rebound theory of earthquakes formulated by Reid in 1910, which was later elaborated as a friction phenomenon by Brace and Byerlee in 1966, with many recent developments using Ruina-Dieterich-type laws. This textbook picture still
poses many fundamental paradoxes, such as the strain paradox1, the stress paradox2, the heat flow paradox3
and so on4. Resolutions of these paradoxes usually call for additional assumptions on the nature of the
rupture process (such as novel modes of deformations and ruptures) prior to and/or during an earthquake, on
the nature of the fault and on the effect of trapped fluids within the crust at seismogenic depths (see ref. 4
and references therein). There is no unifying understanding of these paradoxes.
As recalled by Crampin in this debate, earthquakes depend on many geological and physical conditions. In particular, there is a lot of direct and indirect evidence for the prominent role of water, both mechanical (pore pressure) and chemical (recrystallization, particle effects, texture), and for their probable interplay4,5.
There is growing recognition that mineral structures can form and deform at much milder pressures and
temperatures than their pure equilibrium phase diagram would suggest, when in contact with water or in the
presence of anisotropic strain and stress (ref. 5 and references therein).
As an example, I have recently proposed5 that water in the presence of finite localized strain within fault
gouges may lead to the modification of mineral textures, involving dynamic recrystallization and maybe
phase transformations of stable minerals into metastable polymorphs of higher free energy density. The
interplay between mechanical deformation, activated chemical transformation and rupture opens new
windows to look at earthquakes, beyond the (reductionist) mechanical paradigm.
Self-Organized Criticality
As mentioned by Bak in this debate, the SOC hypothesis has been suggested, on the one hand, on the basis of
the observation of power law distributions, such as the Gutenberg-Richter law for earthquakes and the fault
length distribution, and of the fractal geometry of sets of earthquake epicenters and of fault patterns, and on
the other hand on the study of highly simplified models with somewhat similar scale-invariant properties.
The most interesting aspect of SOC is its prediction that the stress field should exhibit long-range spatial
correlations as well as important amplitude fluctuations. The exact solution of simple SOC models6 has
shown that the spatial correlation of the stress-stress fluctuations around the average stress is long range and
decays as a power law with distance.
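As a rough numerical illustration of what such a correlation measurement looks like, the sketch below drives the classic Bak-Tang-Wiesenfeld sandpile to its stationary state and estimates the two-point correlation of the 'stress' (height) fluctuations. The grid size and run lengths are illustrative only; on such a small grid one sees only the qualitative decay of the correlations with distance, not the exact power law derived analytically in ref. 6.

import numpy as np

rng = np.random.default_rng(1)
L = 32
z = rng.integers(0, 4, size=(L, L))          # grain heights ("stress")

def relax(z):
    """Topple every site with height >= 4 until the pile is stable;
    open boundaries, so grains toppled off the edge are lost."""
    while True:
        unstable = np.argwhere(z >= 4)
        if unstable.size == 0:
            return
        for i, j in unstable:
            z[i, j] -= 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    z[ni, nj] += 1

corr = np.zeros(L // 2)
samples = 0
for step in range(5000):
    i, j = rng.integers(0, L, size=2)
    z[i, j] += 1                             # slow driving: add one grain
    relax(z)
    if step > 2000 and step % 25 == 0:       # sample the stationary state
        h = z - z.mean()                     # height fluctuations
        for r in range(1, L // 2):
            corr[r] += np.mean(h[:, :-r] * h[:, r:])
        samples += 1

print(corr[1:8] / samples)                   # decays with distance r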
Such models suggest that the stress fluctuations not only reflect but also constitute an active and essential
component of the organizing principle leading to SOC. It is an intriguing possibility that the observed increase of long-range intermediate-magnitude earthquake activity prior to a strong earthquake7,8 may be a signature of increasing long-range correlations. This theoretical framework supports the view, developed by Crampin in this debate, that stress monitoring on a large scale may be a good strategy.
Two important consequences can be drawn from the SOC hypothesis. First, at any time, a (small) fraction of the crust is close to the rupture instability. Together with the localization of seismicity on faults, this leads to the conclusion that a fraction of the crust is susceptible to rupture while presently being quiescent. The quantitative determination of the susceptible fraction depends on the specifics of the model and cannot thus be ascertained with precision for the crust. What is important, however, is that the susceptible part of the crust can be activated by relatively small perturbations or by modification of the overall driving conditions. This remark leads to a natural interpretation, within the SOC framework10, of triggered9 seismicity and of seismicity induced by human activity.
The second important but often ignored point is that, in the SOC picture, the crust is NOT almost everywhere
on the verge of rupture and is not maintaining itself perpetually near the critical point. For instance,
numerical simulations show that in discrete models made of interacting blocks carrying a continuous scalar
stress variable, the average stress is about two thirds of the stress threshold for rupture. In these models, the
crust is, on average, far from rupture. However, it exhibits strong fluctuations such that a subset of space is
very close to rupture at any time.
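A minimal sketch of this kind of block model is given below. The Olami-Feder-Christensen-style automaton used here is an illustrative choice (conservative redistribution fraction of 0.25, open boundaries), not necessarily one of the specific models referred to above, and the exact stationary mean stress depends on such details; the sketch simply measures the mean stress and the fraction of blocks close to the failure threshold.

import numpy as np

rng = np.random.default_rng(2)
L, alpha, threshold = 32, 0.25, 1.0
stress = rng.uniform(0.0, threshold, size=(L, L))

def avalanche(stress):
    """Redistribute stress from every block at or above the threshold
    to its neighbours (a fraction alpha each); open boundaries."""
    while True:
        over = np.argwhere(stress >= threshold)
        if over.size == 0:
            return
        for i, j in over:
            s = stress[i, j]
            stress[i, j] = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    stress[ni, nj] += alpha * s

mean_stress, near_failure = [], []
for step in range(3000):
    stress += threshold - stress.max()       # uniform loading to failure
    avalanche(stress)
    if step > 500:                           # discard the transient
        mean_stress.append(stress.mean())
        near_failure.append((stress > 0.9 * threshold).mean())

print(f"mean stress / threshold      ~ {np.mean(mean_stress):.2f}")
print(f"fraction of blocks above 0.9 ~ {np.mean(near_failure):.3f}")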
The average is thus a poor representation of the large variability of the stress amplitudes in the crust. This
leads to the prediction that not all perturbations will lead to triggered or induced seismicity and that some
regions will be very stable. SOC models suggest that local stress measurements may not be representative of
the global organization.
Criticality and predictability
In the present context, criticality and self-organized criticality, used in the sense of statistical physics, refer to two very different concepts, which leads to a lot of confusion, as seen in this debate. First, SOC is self-organized (thus there is no apparent 'tuning'; see, however, ref. 11) while criticality is not. Second, the hallmarks of criticality are the existence of specific precursory patterns (increasing 'susceptibility' and correlation length) in space and time.
The idea that a large earthquake could be a critical phenomenon has been put forward by different groups,
starting almost two decades ago12-14. Attempts to link earthquakes and critical phenomena find support in the
demonstration that rupture in heterogeneous media is a critical phenomenon. Also indicative is the often
reported observation of increased intermediate-magnitude seismicity before large events (see Bowman and Sammis's contribution to this debate and references therein).
Criticality carries with it the concepts of coarse-graining and universality, and suggests a robustness of its
signatures when observed at sufficiently large scale. This is in contrast with the conclusion that one needs a
detailed knowledge of the huge complexity of the geology and mechanics of fault systems (fault geometry, strength variations in the fault zone material, rheological properties, state of stress, etc.) to perform a prediction (see Crampin's contribution to this debate).
Criticality and SOC can coexist
While rupture of a laboratory sample is the well-defined conclusion of its loading history, the same cannot be said for the crust, where 'there is life' after large earthquakes. An illustration of the coexistence of criticality
and of SOC is found in a simple sandpile model of earthquakes on a hierarchical fault structure15. Here, the
important ingredient is to take into account both the nonlinear dynamics and the complex geometry.
While the system self-organizes at large time scales according to the expected statistical characteristics, such as the Gutenberg-Richter law for earthquake magnitude frequency, most of the large earthquakes have precursors occurring over time scales of decades and over distances of hundreds of kilometers. Within the
critical view point, these intermediate earthquakes are both 'witnesses' and 'actors' of the building-up of
correlations. These precursors produce an energy release, which when measured as a time-to-failure process,
is quite consistent with an accelerating power law behaviour. In addition, the statistical average (over many
large earthquakes) of the correlation length, measured as the maximum size of the precursors, also increases
as a power law of the time to the large earthquake.
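For readers unfamiliar with the time-to-failure description, the sketch below fits an accelerating power law of the commonly used form eps(t) = A - B(tc - t)^m to a synthetic, noisy cumulative strain curve and recovers the failure time tc. The data and parameter values are purely illustrative, not output of the hierarchical model of ref. 15.

import numpy as np
from scipy.optimize import curve_fit

def time_to_failure(t, A, B, tc, m):
    """Accelerating release of cumulative (Benioff) strain before tc."""
    return A - B * (tc - t)**m

rng = np.random.default_rng(3)
tc_true = 10.0                                  # time of the large event
t = np.sort(rng.uniform(0.0, 9.5, size=60))     # precursory period
eps = time_to_failure(t, 5.0, 1.5, tc_true, 0.3) + rng.normal(0.0, 0.05, t.size)

# Fit with tc constrained to lie after the last observation.
popt, _ = curve_fit(time_to_failure, t, eps,
                    p0=(5.0, 1.0, 10.5, 0.5),
                    bounds=([0.0, 0.0, 9.6, 0.01], [10.0, 10.0, 20.0, 1.0]))
print(f"fitted failure time tc ~ {popt[2]:.2f}, exponent m ~ {popt[3]:.2f}")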
From the point of view of self-organized criticality, this is surprising news: large earthquakes do not lose
their 'identity'. In this model15, a large earthquake is different from a small one, a very different story from that told by common SOC wisdom, in which 'any precursor state of a large event is essentially identical to a precursor state of a small event and an earthquake does not know how large it will become', as stated by Scholz and Bak in this debate.
The difference comes from the absence of geometry in standard SOC models. Reintroducing geometry is
essential. In models with hierarchical fault structures15, we find a degree of predictability of large events.
Most of the large earthquakes whose typical recurrence time is of the order of a century or so can be
predicted from about four years in advance with a precision better than a year.
An important ingredient is the existence of log-periodic corrections to the power law increase of the seismic activity prior to large events, reflecting the hierarchical geometry, which help to 'synchronize' a better fit to the data. The associated discrete scale invariance and complex exponents are expected to occur in such out-of-equilibrium hierarchical systems with threshold dynamics16.
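One commonly used functional form for such a log-periodic correction (given here only for illustration; the papers cited parameterize it in their own way) simply modulates the accelerating power law of the previous sketch with a cosine that is periodic in the logarithm of the time to failure.

import numpy as np

def log_periodic(t, A, B, tc, m, C, omega, phi):
    """Power-law acceleration decorated by oscillations that are
    periodic in log(tc - t); omega = 2*pi / log(lambda), where lambda
    is the discrete scaling ratio of the hierarchy."""
    dt = tc - t
    return A - B * dt**m * (1.0 + C * np.cos(omega * np.log(dt) + phi))

# Example use, with the same illustrative precursory period as above
# and a scaling ratio lambda of about 2:
# t = np.linspace(0.0, 9.5, 200)
# s = log_periodic(t, 5.0, 1.5, 10.0, 0.3, 0.2, 2*np.pi/np.log(2.0), 0.0)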
Of course, extreme caution should be exercised, but the theory is beautiful in its self-consistency and, even if probably largely inaccurate, it may provide a useful guideline. Hierarchical geometry need not be introduced by hand, as it emerges spontaneously from the self-consistent organization of the fault-earthquake process17.
Didier Sornette
Institute of Geophysics and Planetary Physics and Department of Earth and Space Sciences
UCLA, Box 951567, 2810 Geology Bl, 595 E Circle Drive, Los Angeles, CA 90095-1567
Director of Research
National Center for Scientific Research
LPMC, CNRS UMR6622 and Universite de Nice-Sophia Antipolis, B.P. 71
06108 NICE Cedex 2, France
References
1. Jackson, D.D. et al. Southern California deformation. Science 277, 1621-1622 (1997).
2. Zoback, M.L. et al. New evidence on the state of stress of the San Andreas fault zone. Science 238,
1105-1111 (1987).
3. Lachenbruch, A.H., & Sass, J.H. Heat flow and energetics of the San Andreas fault zone. J. Geophys.
Res. 85, 6185-6222 (1980).
4. Sornette, D. Earthquakes: from chemical alteration to mechanical rupture. Phys. Rep. (in the press, 1999). (http://xxx.lanl.gov/abs/cond-mat/9807305)
5. Sornette, D. Mechanochemistry: an hypothesis for shallow earthquakes. In Earthquake Thermodynamics and Phase Transformations in the Earth's Interior (eds Teisseyre, R. & Majewski, E.) (Cambridge University Press, 1999). (http://xxx.lanl.gov/abs/cond-mat/9807400)
6. Dhar, D., Self-organized critical state of sandpile automaton models, Phys. Rev. Lett. 64, 1613-1616
(1990).
7. Knopoff, L. et al. Increased long-range intermediate-magnitude earthquake activity prior to strong
earthquakes in California. J. Geophys. Res. 101, 5779-5796 (1996).
8. Bowman, D.D. et al. An Observational test of the critical earthquake concept. J. Geophys. Res. 103,
24359-24372 (1998).
9. King, G.C.P., Stein, R.S., & Lin, J. Static stress changes and the triggering of earthquakes. Bull.
Seism. Soc. Am. 84, 935-953 (1994).
10. Grasso, J.R. & Sornette, D. Testing self-organized criticality by induced seismicity. J. Geophys. Res.
103, 29965-29987 (1998).
11. Sornette, D., Johansen A. & Dornic, I. Mapping self-organized criticality onto criticality. J. Phys. I.
France 5, 325-335 (1995).
12. Allegre, C.J., Le Mouel, J.L. & Provost, A. Scaling rules in rock fracture and possible implications
for earthquake predictions. Nature 297, 47-49 (1982).
13. Keilis-Borok, V. The lithosphere of the Earth as a large nonlinear system. Geophys. Monogr. Ser. 60,
81-84 (1990).
14. Sornette, A. & Sornette, D. Earthquake rupture as a critical point: Consequences for telluric
precursors. Tectonophysics 179, 327-334 (1990).
15. Huang, Y. et al. Precursors, aftershocks, criticality and self-organized criticality. Europhys. Lett. 41,
43-48 (1998).
16. Sornette, D. Discrete scale invariance and complex dimensions. Phys. Rep. 297, 239-270 (1998).
17. Sornette, D., Miltenberger, P. & Vanneste, C. Statistical physics of fault patterns self-organized by
repeated earthquakes. Pure Appl. Geophys. 142, 491-527 (1994).
This debate is now closed.