
Selective processing and inversion of long offset data
Hassan Masoomzadeh
Bullard Labs, Madingley Road, Cambridge, CB3 0EZ, UK
Supervisors: Penny Barton, Satish Singh
December 2002
1 Introduction
"During the past five years many new ideas and techniques have emerged for tackling
the challenge of creating a good seismic image of structure beneath high impedance
layers such as basalt. The modelling and inversion of long offset and low frequency
data have proven particularly promising, as have various processing-based
approaches, but the fix-all solution has yet to emerge. ...
Many questions remain. Are we getting sufficient energy to and from our targets to be
able to image anything? How can we make the best use of energy we do receive?
What can we learn from inversion and modelling? It has been known for some time
that only low frequencies will transmit successfully through the basalt, and that the
use of longer offset data captures the refracted wavefield, post-critical reflections and
converted waves, all of which travel into and beneath the basalt and can potentially
be used to image the sub-basalt structure." (Adapted from Sub-basalt Imaging
Conference Journal, April 2002, Cambridge)
1.1 Sub-basalt imaging problem
Geoscientists' interest in seeing below opaque basaltic layers has grown in recent
years as hydrocarbon exploration has extended into deep-water frontier regions,
especially in areas where sedimentary layers are buried beneath high-velocity,
highly reflective basalt flows and sills. Basalts are highly heterogeneous, with
varied geometrical characteristics and physical properties, so it is very difficult
for seismic waves to see through the basalt using conventional acquisition and
processing methods. One approach to the problem is to extend the recording
distance to the wide-angle, or long-offset, range. Theoretically, long-offset seismic
data carry valuable acoustic information from deep interfaces such as sub-basalt and
sub-salt reflectors. At very far offsets multiples are less problematic, and an
increase in reflectivity can also be expected for wide-angle reflections. However,
the absorption of high frequencies, the high reflectivity of the top and base basalt
interfaces, ray-path complexity in the vicinity of basalt intrusions, the generation
of strong multiples and the ambient noise are among the factors that suppress the
weak signal reflected from very deep interfaces. Hence researchers are trying to
image sub-basalt layers using a variety of seismic and non-seismic methods. Despite
noticeable improvements in sub-basalt imaging techniques, it remains difficult to
obtain a clear picture of the sought-after reflectors.
Recent developments in ultra-long streamers and the use of two-ship geometry
enable the recording of densely sampled data sets with a simulated streamer length
of several tens of kilometres. Several long-offset data sets have recently been
collected to address sub-basalt imaging, but unfortunately the longer offsets are
not fully exploited in stacks produced by conventional methods. The traditional
processing methods are evidently not suitable for very long offset data, especially
where the targets are very deep and hidden beneath basalt sills. There were at
least two major problems with the basic processing approach. Firstly, severe
multiples affected most of the primary events at near offsets. Secondly, the large
stretch caused by normal moveout correction prevented the long offsets from
contributing to the final stack. These two factors were enough to keep the basement
interface out of the final stack section.
1.2 About the data set
The first data set that I am using in my research is from the Rockall Trough. It
was collected with streamers towed by two ships, providing an offset range from
200 m to 30 km. The trace length is about 18 s, the water depth varies from 300 m
to 1800 m, the receiver interval is 25 m and the shot interval is 100 m, so the
original CDP interval is 12.5 m and the fold of coverage is about 150. Previous
attempts to image the basement interface of the Rockall Trough were not very
successful in fully exploiting the energy reflected from sub-basalt interfaces. One
approach combined reflection and refraction information in a tomographic inversion
in order to locate the basement interface and to extract the velocity model as well
(Bosch, 2001). Full waveform inversion has also been tried recently on this data
set, but owing to the weak mode-converted signal it was not as successful as
expected (Freudenreich, 2002). I started my PhD in January 2002 with an attempt to
reprocess this data set. During my first year of study I have applied some special
processing approaches in order to exploit the long-offset data as much as possible.
I am trying to tackle the shortcomings of conventional processing, especially
moveout stretch, to allow the inclusion of long-offset reflection arrivals in the
stack image.
2 Pre-processing
In order to simplify data handling, every four adjacent CDP gathers were merged,
so the new CDP interval was 50 m, with 600 traces contributing to each combined
gather. Since the high frequencies were of little interest, the data were resampled
to a 16 ms sampling interval. As in any other marine data set, there were several
kinds of multiples, such as source and receiver ghosts, interbed multiples and
water-bottom multiples. I compared several deconvolution and demultiple methods
and strategies, examined the results both in prestack gathers and in post-stack
sections, and finally applied the strategic pre-processing workflow described
below.
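As a minimal sketch of this step, assuming the gathers sit in NumPy arrays with an
original 4 ms sample interval (that value and the function names are illustrative;
the original sampling rate is not stated here):

```python
import numpy as np
from scipy.signal import decimate

def merge_cdp_gathers(gathers):
    """Merge adjacent CDP gathers into one combined supergather.

    gathers : list of (n_traces, n_samples) arrays for adjacent CDPs.
    Sorting the merged traces by offset would normally follow.
    """
    return np.vstack(gathers)

def resample_traces(traces, dt_old=0.004, dt_new=0.016):
    """Anti-alias filter and decimate from dt_old to dt_new seconds."""
    factor = int(round(dt_new / dt_old))  # e.g. 4 ms -> 16 ms gives factor 4
    return decimate(traces, factor, axis=-1, zerophase=True)

# Four adjacent 150-fold gathers -> one 600-trace supergather at 16 ms
gathers = [np.random.randn(150, 4500) for _ in range(4)]
supergather = resample_traces(merge_cdp_gathers(gathers))
```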
2.1 Deconvolution
After some tests in the time and frequency domains, I finally decided to apply a
cascaded deconvolution that is in effect intermediate between trace-by-trace
deconvolution and surface-consistent deconvolution. It consists of two passes of
ensemble deconvolution, the first in common-shot gathers and the second in
common-offset gathers (renewing the operator every 200 m in offset and every 30
CDPs). The result was better than that of the alternative strategies. That may be
because at very far offsets the quality of a single trace is not good enough for
statistical operator calculation, whereas if we gather an ensemble of traces and
apply a suitable design window, we may obtain a more reliable operator.
Surface-consistent deconvolution, which can be simulated by multiple passes of
ensemble deconvolution over four different common gathers (source, receiver,
offset and CDP), is in principle more reliable because of its stronger statistical
basis, but it is not easily applicable to marine data where, unlike on land, the
receivers are not tied to a fixed location. In the approach described above, I
apply the source-location-related compensations in the first pass, and a hybrid
compensation of the other effects, i.e. (offset/receiver/channel) and
(CDP/structure/noise), in the second pass.
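The key idea, designing a single operator from the ensemble rather than from each
noisy far-offset trace, can be sketched as follows. This is an illustrative Wiener
spiking deconvolution, not the actual ProMAX module; the operator length and
pre-whitening level are assumed values.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ensemble_spiking_decon(gather, n_op=80, prewhite=0.01):
    """One pass of ensemble deconvolution over a (n_traces, n_samples) gather.

    The spiking operator is designed from the autocorrelation averaged
    over all traces in the ensemble, then applied trace by trace.
    """
    n = gather.shape[1]
    acorr = np.zeros(n_op)
    for tr in gather:                        # average autocorrelation: single
        full = np.correlate(tr, tr, 'full')  # far-offset traces are too noisy
        acorr += full[n - 1 : n - 1 + n_op]  # for a stable operator on their own
    acorr /= len(gather)
    acorr[0] *= 1.0 + prewhite               # pre-whitening stabilizes the solve
    d = np.zeros(n_op)
    d[0] = acorr[0]                          # desired output: a spike at zero lag
    f = solve_toeplitz(acorr, d)             # Wiener normal equations R f = d
    return np.array([np.convolve(tr, f)[:n] for tr in gather])

# Cascade: pass 1 on common-shot gathers, pass 2 on common-offset gathers,
# renewing the operator every 200 m in offset and every 30 CDPs.
```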
2.2 Radon filter
For longer-period multiples, I applied a Radon filter in a special way. After
normal moveout correction, primaries are corrected to almost flat events while
multiples are under-corrected to parabolic ones. Accordingly, the most common style
of Radon demultiple is to apply normal moveout correction first, mute the primaries
in the parabolic Radon domain, subtract the modelled multiples from the original
data, and remove the moveout correction at the end. Owing to the large moveout
stretch, however, this method does not work properly for very long offset data, so
I decided to perform the filtering without any moveout correction.
Uncorrected events, which are almost hyperbolic, should appear as points in the
hyperbolic Radon domain. The problem was that common commercial software does not
provide a precise hyperbolic Radon transform; what it actually computes is a
parabolic approximation of the hyperbolic transform. That may be adequate for
near-offset data but not necessarily for very long offset data: converting
hyperbolic events with the parabolic transform smears each event into a curve when
it should focus to something close to a point. Since the software I use for data
processing (ProMAX) is not easily accessible for algorithm modification, I recast
the problem as one of transforming hyperbolic events into parabolic ones and then
applying the parabolic transform. I did this by altering the time axis to time
squared, using the time-to-depth conversion module with the conversion velocity
function set equal to the time itself. In this way I obtained a very focused image
of events in the Radon domain, easily distinguishable even for very closely spaced
interbed multiples.
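The substitution itself is simple: with t' = t², a hyperbola t² = t0² + x²/v²
becomes the exact parabola t' = t0² + x²/v². A minimal sketch of the axis change,
using plain linear interpolation as a stand-in for the time-to-depth module trick:

```python
import numpy as np

def to_squared_time(trace, dt, n_out=None):
    """Resample a trace from a linear time axis onto a t**2 axis.

    After this mapping, hyperbolic events plot as exact parabolas in
    offset, so the parabolic Radon transform focuses them to points.
    """
    n = trace.size
    t = np.arange(n) * dt                       # original time axis
    n_out = n_out or n
    t_sq = np.linspace(0.0, t[-1] ** 2, n_out)  # uniform samples in t**2
    return np.interp(np.sqrt(t_sq), t, trace)   # read the trace at t = sqrt(t')
```

After muting the multiples in the parabolic Radon domain, the inverse axis change
(interpolating back from the square-root axis) restores the original time axis.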
I applied the Radon filter to temporary combined local super-CDP gathers. This
improved the quality of the filtering by reducing the effects of spatial aliasing
during multiple subtraction.
2.3 Offset optimization
Offset optimization can be defined as giving a higher stacking weight to those
offsets that show a higher signal-to-noise ratio in the vicinity of the target. It
can be done by functional weighting or simply via top and bottom mutes. Even after
the precise Radon filtering described above, some severe multiples, plus some noise
introduced by the filter, still remained at near offsets. One simple solution was
to cut a small portion of the data with an inner mute. When the target appears
mainly at very far offsets and the fold of coverage is high enough, a massive inner
mute alone can reduce the multiples to a large extent.
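A minimal sketch of such an inner mute with a linear taper; the mute offset and
taper length here are illustrative, not the values used on the Rockall data:

```python
import numpy as np

def inner_mute(gather, offsets, x_min=8000.0, taper=2000.0):
    """Zero the near offsets and ramp up to full weight beyond x_min + taper.

    gather : (n_traces, n_samples); offsets : (n_traces,) in metres.
    """
    w = np.clip((np.abs(offsets) - x_min) / taper, 0.0, 1.0)
    return gather * w[:, None]
```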
3 Stacking without stretch
The second main problem at very far offsets is the large stretch produced by the
traditional processing approach. The common solution is to mute automatically those
parts of the traces that are stretched beyond some specified percentage (e.g. 40%).
But when we deal with very long offset data, this means losing the most important
part of the valuable information from very deep layers. That is why I decided to
tackle this problem, examining a variety of methods in different domains.
3.1 Hyperbolic Radon transform
Since I was already using the hyperbolic Radon transform for the demultiple task in
the unconventional way explained earlier, I started to think about using this
transform for stacking purposes as well. The transform is in fact a kind of slant
stack that, ideally, maps each hyperbolic event to a single point in the
moveout-versus-intercept-time domain. To reach the stack image we have to perform a
kind of moveout analysis in the Radon domain instead of the ordinary velocity
analysis in the time-velocity domain. I later found that similar work had already
been done by Hicks (2001) on synthetic data for AVO analysis purposes. I still had
to solve the problem of the transition between neighbouring time windows in real
data. I tried picking a corridor bounded by two moveout traces and then stacking
the amplitudes inside that corridor; this in effect simulates the overlapping time
windows of the following approach.
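A bare-bones version of the forward transform, parameterized here by stacking
velocity (equivalently, by the moveout at a reference offset), might look like the
following. It is a sketch with nearest-sample picking, not Hicks' implementation:

```python
import numpy as np

def hyperbolic_radon(gather, offsets, dt, velocities):
    """Slant stack along hyperbolas t = sqrt(t0**2 + (x/v)**2).

    Each hyperbolic event ideally maps to a point in the (v, t0)
    panel, with no moveout stretch, because samples are summed along
    the trajectory rather than shifted individually.
    """
    n_tr, n_s = gather.shape
    t0 = np.arange(n_s) * dt
    panel = np.zeros((len(velocities), n_s))
    for iv, v in enumerate(velocities):
        for ix, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v) ** 2)  # arrival time along the hyperbola
            idx = np.round(t / dt).astype(int)   # nearest sample
            ok = idx < n_s
            panel[iv, ok] += gather[ix, idx[ok]]
    return panel
```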
3.2 Constant moveout stack
The second approach is to apply a constant hyperbolic moveout correction directly
in the time-offset domain. It requires no transformation and is therefore faster
and more practical. It causes no stretch, although it introduces other
disadvantages; these matter less for very deep targets, which are low frequency,
several cycles long and well enough separated to allow a window-based moveout
correction instead of the traditional sample-based one. I later found that this
approach had been partly demonstrated by Shatilo and Aminzadeh (2000) on
near-offset data. I applied the strategy in ProMAX, which has no specific tool for
this kind of processing, so I had to combine many different modules in a
complicated way. Different constant-moveout-correction window sizes are compared in
Figure 1, which shows how the stretch disappears as the number of samples
contributing to each window increases. The window size should be optimized to limit
the disadvantages of large windows.
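The window-shift idea can be sketched for a single trace as follows; this
illustrates the principle rather than the chain of ProMAX modules actually used,
and the velocities may be a scalar or one value per window:

```python
import numpy as np

def constant_moveout_correct(trace, offset, dt, v, n_win=20):
    """Window-based constant moveout correction: rigid shifts, no stretch.

    Every sample in a window is delayed by the same amount, computed at
    the window-centre time, so the wavelet is moved whole instead of
    being stretched sample by sample as in standard NMO.
    """
    n = trace.size
    n_windows = (n + n_win - 1) // n_win
    v = np.broadcast_to(np.atleast_1d(v), (n_windows,))
    out = np.zeros_like(trace)
    for iw, start in enumerate(range(0, n, n_win)):
        stop = min(start + n_win, n)
        t0 = 0.5 * (start + stop) * dt                          # window-centre time
        shift = np.sqrt(t0 ** 2 + (offset / v[iw]) ** 2) - t0   # one delay per window
        k = int(round(shift / dt))
        lo, hi = start + k, min(stop + k, n)
        if lo < n:
            out[start : start + hi - lo] = trace[lo:hi]
    return out
```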
Figure 1: A synthetic long-offset gather (a) before moveout correction, (b) after
standard normal moveout correction (sample by sample), (c) after constant moveout
correction with 10 samples per window (160 ms), (d) with 20 samples (320 ms), and
(e) with 100 samples (1600 ms). Maximum offset is 30 km.
3.3 Iso-moveout curves
The third approach is the application of iso-moveout curves in the time-velocity
domain. These can be regarded as projections of the vertical traces of the
hyperbolic Radon domain onto the time-velocity panel. Applying an iso-moveout curve
as a stacking velocity function gives a stretch-free correction of the event in
question. The equation of these curves can be derived as follows: for a hyperbolic
event with zero-offset time t0 and stacking velocity v, the moveout at a reference
offset x is

Δt = √(t0² + x²/v²) − t0,

and holding Δt constant while solving for the velocity gives the iso-moveout curve

v(t0) = x / √(Δt² + 2 t0 Δt).
The graphs of some iso-moveout curves overlaid on a semblance velocity analysis
panel are shown in Figure 2.
Figure 2: Iso-moveout curves in the time-velocity domain, overlaid on a semblance
analysis panel.
Figure 3 contrasts traditional RMS stacking-velocity picking followed by standard
normal moveout correction with constant-velocity and iso-moveout-curve picking and
correction. To make the difference clearer, the resolution of the semblance panel
has been increased.
Figure 3: Comparison of normal moveout, constant-velocity and iso-moveout
corrections in the time-offset domain (top) and in the time-velocity domain
(bottom).
3.4 Horizon consistency
Each of the methods mentioned above can provide a stretch-free image of a
particular reflector within a specific time window. Since the validity of the
stacking velocity decreases with distance from the centre of the window, combining
several adjacent time windows gives a discontinuous image of dipping reflectors.
One solution is to lock the window to the target reflector; this can be called
either horizon consistency or targeted processing. What I do in this part is in
effect a kind of mental inversion: I start with an initial stacking velocity model
and a rough interpretation of the target, and after a sequence of iterative
reconsiderations of the velocities and interfaces I gradually converge on the final
velocity model and interface. Practical application of this technique has imaged
the Rockall basement with previously unseen quality (Figure 4).
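The window-locking step can be sketched as an extraction that follows the
interpreted horizon time at each CDP; a simplified illustration of the idea, not
the actual workflow:

```python
import numpy as np

def horizon_window(trace, t_horizon, dt, half_win=0.125):
    """Cut a correction window centred on the picked horizon time (s).

    Because the window follows the horizon laterally, a dipping
    reflector stays near the window centre, where the constant or
    iso-moveout correction is most valid. In practice this sits inside
    a loop: correct, stack, re-interpret the horizon, update the
    velocity, and repeat until the image converges.
    """
    i0 = max(int(round((t_horizon - half_win) / dt)), 0)
    i1 = min(int(round((t_horizon + half_win) / dt)), trace.size)
    return trace[i0:i1]
```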
Figure 4: Comparison of different stacks of part of the Rockall dataset: after
standard normal moveout correction (top), after multi-window (250 ms, no overlap)
constant moveout correction (middle), and after iso-moveout correction in a single
horizon-consistent window (bottom).
4 Future work
After completing the above work, the next task is to apply the same strategy to
obtain the best possible image of the sediments between the basalt sills and the
basement, and also to examine whether the reflection from the Mohorovičić
discontinuity can be imaged.
4.1 Tomographic inversion
The best available velocity model of the area was derived by Bosch (2001) using
refracted arrivals. Since the most recent stretch-free stack image does not
entirely confirm the predicted interfaces, some reconsideration of the tomographic
inversion could be very helpful. I am going to re-attempt arrival-time inversion
based on the latest image of the area obtained through the processing strategy
described above. The software I intend to use for this purpose is JIVE3D. I will
need to review the literature on inversion concepts and to practise with the
software before running the final job.
4.2 Waveform inversion
With a new version of the velocity model of the area, I could apply full waveform
inversion to improve both the resolution and the validity of the velocity model.
The software I intend to use for this task is TWIST or TWISTER. This task is
obviously very demanding in time and effort. Previous attempts on the Rockall
dataset show that the converted wavefield is so weak that it is not worthwhile
trying to invert both the P and S wavefields; what I plan instead is an inversion
in which the S-wave model is tied to the P-wave model.
4.3 Pre-stack depth migration
Given the best possible velocity model from any of the above inversions, pre-stack
depth migration, as a good tool for final depth imaging, should be successful to
some extent. The output of this process needs no moveout correction, but the
destructive effect of the particular kind of stretch that occurs during PSDM
remains a question I need to work on.
4.4 Second case study
If time permits, I am going to run the waveform inversion on a dataset from my
country, provisionally from the Persian Gulf, where I detected a positive AVO
response in my previous research. Although it is not a long-offset data set, the
target is shallow enough to cover a good range of incidence angles. Despite the
presence of shallow water-bottom multiples, amplitude inversion could be run
successfully, and a quantitative characterization of the target reservoir might,
in the absence of well-log information, be feasible to some extent.
Acknowledgments
The BGS Rockall consortium is acknowledged for permission to use the long-offset
data from the Rockall Trough for this study. The National Iranian Oil Company
(NIOC) is also acknowledged for sponsoring this PhD programme.