by Levent Sevgi
Electronics and Communication Department, Dogus University - Istanbul
Having read the article 'Earthquake Alarm' by Tom Bleier and Friedemann Freund in the December 2005 issue of Spectrum (pages 16-21), I am concerned about the way in which this topic is presented—in terms of public awareness, interest and expectations.
The problem lies in presenting artificial optimism about short-term earthquake prediction (EP) based on precursors, ranging from seismic observations to ELF/ULF and HF electric and magnetic fields of the Earth, from disturbances in ionospheric electron content to infrared signatures of mechanical stresses, as if these studies were mature and already scientifically proven, and as if we were only a few years (at most a decade) away from building earthquake early warning systems.
This impression is extremely dangerous in countries with poor scientific literacy, where municipalities fearlessly extend city plans right across fault lines, people construct weak buildings, and so on. It should be noted that it is not earthquakes themselves that kill people; it is the collapse of man-made structures that does most of the damage. The best way to prepare society for strong, devastating earthquakes and to mitigate their worst effects is therefore to develop better urban land-use plans and to construct stronger buildings.
If respected journals convey a highly artificial optimism that earthquake early warning systems will be ready in a few years, people may simply rely on 'experts' to protect them by predicting the time of an earthquake, when they should instead be taking measures to improve building safety.
In general, EP methods can be divided into two groups: statistical methods based on seismicity, and observations of precursors to large earthquakes. The necessary and sufficient conditions for precursor-based EP are to observe and discriminate the quantity in question, to demonstrate a causal correlation, and finally to build a model. It is not scientific to set about building an earthquake early warning system—such as a network of hundreds of sensors (magnetometers, electroscopes, etc)—before first demonstrating the causal correlation and understanding the physical phenomena in detail. EP methods are still far from maturity and remain controversial: see R.J. Geller, 'Earthquake prediction: A critical review', Geophys. J. Int. 131, pp. 425-450, 1997, for an excellent review.
All that can be done at present is to try to show that recorded anomalies are related to earthquakes that have already occurred, and this may only become apparent hours, days or even months after the event: a case of retrospective correlation. What is missing from these studies is Karl Popper's falsification principle: the systematic ruling out of known natural and artificial signal sources before claiming a precursor of an earthquake. Those who continuously record ionospheric noise, electron content, virtual layer heights, changes in the Earth's surface magnetic field, static and/or quasi-static electric and magnetic fields, electromagnetic and infrared radiation, etc, agree on one thing: there is some relation between the observed anomalies and earthquakes, but the mechanism and parameters of this relation are as yet unknown.
So all predictions are vague at best, as the authors of the Spectrum article acknowledge. Long-term projection of an earthquake in a certain area, with high probability within some decades, is possible by studying historical earthquake records, monitoring the motion of the Earth's crust by satellite, and taking measurements with strain monitors below the Earth's surface. This is important for policy makers. But short-term EP involves stating precisely where (hypocenter latitude and longitude), when, at what depth, how strong, and with what probability an earthquake will occur, within stated error/uncertainty bounds.
Experts who use statistical models prefer to talk about 'earthquake forecasting', reserving the term 'earthquake prediction' for specific cases in which a forecast has a temporary and exceptionally high probability and imminence. More importantly, they regard their studies and statistical test results as a tiny step towards a physical understanding of earthquake occurrence.
Earthquake prediction efforts date back to the late 19th century and have attracted attention in highly prestigious journals. Although highly optimistic reports have been presented from time to time, none has withstood detailed scientific examination.
The February 1999 Nature debate (www.nature.com/nature/debates/earthquake) includes many papers discussing the possible signals of different phenomena, including seismic, electrical, electromagnetic and luminous signals, that either accompany or precede earthquakes. Although views on the topic differ widely, it is generally accepted that the EP studies based on a variety of precursors are low-quality, pseudo-scientific works, and that exaggerated claims tend to be made by scientifically unqualified publicity seekers.
Moreover, all of the debate contributors agree that the deterministic prediction of an individual earthquake, within sufficiently narrow limits to allow a planned evacuation programme, is an unrealistic goal.
The International Association of Seismology and Physics of the Earth's Interior (IASPEI) has outlined guidelines for precursor-based EP. According to these guidelines, an observed anomaly should have a relation to stress, strain or some mechanism leading to earthquakes; it should be observed simultaneously on more than one instrument, or at more than one site; and its amplitude should bear a sensible relation to distance. There should be a persuasive demonstration that the calibration of the instrument is known, and that the instrument is measuring a tectonic signal. Anomaly definitions should be stated precisely, so that any other suitable data can be searched for such an anomaly.
The difference between anomalous and normal values should be expressed quantitatively, with an explicit discussion of noise sources and the signal-to-noise ratio. The rules and reasons for associating a given anomaly with a given earthquake should be stated precisely. The probability that the 'predicted' earthquake would occur by chance and happen to match the precursory anomaly should be evaluated. The frequency of false alarms (similar anomalies not followed by an earthquake) and of surprises (mainshocks of similar size not preceded by an anomaly) should also be discussed.
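To make the chance-coincidence criterion concrete, here is a minimal sketch (not part of the IASPEI guidelines, and not from the Spectrum article) of the baseline any precursor claim must beat: assuming qualifying earthquakes follow a simple Poisson process, the probability that one strikes an alarm window purely by chance can be computed directly. The rate and window length below are entirely hypothetical.

```python
import math

def chance_match_probability(quakes_per_year: float, window_days: float) -> float:
    """Probability that at least one qualifying earthquake falls inside an
    alarm window of `window_days`, assuming a Poisson process with the
    given annual rate. A claimed precursor must do much better than this."""
    rate_per_day = quakes_per_year / 365.0
    return 1.0 - math.exp(-rate_per_day * window_days)

# Hypothetical example: a seismically active region with ~5 qualifying
# quakes per year and a 30-day alarm window after each anomaly.
p = chance_match_probability(5.0, 30.0)
print(f"chance of a coincidental match: {p:.2f}")  # about 0.34
```

In such a region, roughly one anomaly in three would be "confirmed" by an earthquake even if the anomaly carried no information at all, which is why retrospective matches alone prove nothing.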
There may be a variety of earthquake precursors, from acoustic and electromagnetic signals to infrared emissions, observable on the ground as well as in the ionosphere across a broad frequency range from mHz up to MHz. On the other hand, any phenomenon that occurs before an earthquake can be called a precursor, whether or not it has a causal relation to the earthquake; observations of these signals, and studies of their correlation with earthquakes, are therefore certainly worthwhile if pursued scientifically. Efforts to gather data on earthquakes occurring with and without preceding precursors are extremely important. But jumping from these very early-stage studies to the conclusion that accurate earthquake early warning is within reach within a decade is not scientific.
In conclusion, the scientific goal should be an understanding of the fundamental physics of earthquakes and a physics-based theory of the precursors (their causal correlation), not the reliable prediction of individual earthquakes. In view of the lack of proven forecasting/prediction methods, everybody should exercise caution in issuing public earthquake warnings. Scientifically low-quality works should be kept out of scientific journals, and works that contain errors and absurd statements made by scientifically unqualified publicity seekers should be exposed.