Feature-Rich Software, Open by Design
Richard Bennett
Benjamin Cornelisse
Robert Merkle
Shell International Exploration
and Production B.V.
Rijswijk, The Netherlands
Jan Egil Fivelstad
Paul Hovdenak
Blueback Reservoir AS
Stavanger, Norway
Trygve Randen
Stavanger, Norway
Oilfield Review Autumn 2009: 21, no. 3.
Copyright © 2009 Schlumberger.
For help in preparation of this article, thanks to Najib
Abusalbi, Marcus Ganz, Susan Lundgren, Andrew Muddimer,
Jeff Rubenstein and Eric Schoen, Houston; and David
McCormick, Cambridge, Massachusetts, USA.
Ocean and Petrel are marks of Schlumberger.
BRIDGE EM is a mark of Blueback Reservoir.
.NET, Visual C# and Windows are marks of Microsoft Inc.
Rock3D and Rock3D Synthetics are marks of Shell.
1. For more: Application Programming Interface, Free On-Line Dictionary of Computing, http://foldoc.org/Application+Program+Interface (accessed September 22, 2009).
2. An original developer is the owner of the software. An
independent developer, in this case, adds new features
to the software or uses components of the software to
create another program. Typically, a licensing agreement
is made between the software owner and the independent
developer; however, it may be a free license under
written terms of agreement.
3. Software developers write lines of text called source code.
These instructions, once converted to machine code, are
carried out by the processing unit of a computer.
4. A high-level programming language has a level of
abstraction higher than that of an assembly language,
which itself is a symbolic representation of the machine
language of a specific CPU. Typically, higher level
languages comprise spoken language statements,
usually English, such as for, loop or return.
5. A plug-in is a common computer term that describes a
small software program created to provide additional
functionality to a larger software program. This term
symbolizes the addition of modules to a central program.
Generally, these modules cannot operate without the
core software installed.
Providing stable software on schedule requires strict adherence to a development
plan. Development teams decide upon a finite number of features for their products
based on customer requirements; inevitably, some enhancements are not included.
Opening the software development process to external programmers introduces
another avenue for the creation of additional features, one that doesn’t impact
production time or the quality of the primary product.
In an ideal world a piece of software would contain only the features a user needed, consume
only the resources required for the task at hand
and have an interface ergonomically focused on
those features. In reality this is rarely the case
and features are developed to meet the needs of
a diverse audience. As a result, software may be
overly complex for some people and lacking key
components for others; however, a solution exists
that is growing in popularity for both software
developers and users.
Developers can create an application programming interface (API) providing access to
the state and functions of a software program or
operating system.1 Software developers may choose to create an API to unlock specific elements, or all, of the software to enable users or other developers to extend its capabilities with new features. This is beneficial to independent developers for many reasons: they can add the features they choose, do so according to their own schedule and work independently of the original developer.2
Traditionally, adding new capabilities to a
software program required modification of that
program’s source code.3 Changing the source
code has two major implications. First, the original developer loses the ability to control the
changes made to the primary software. Second,
the proprietary intellectual property (IP) associated with the primary software is publicly
available to anyone with access to the source
code. When working with an API, developers
create new capabilities using a high-level programming language.4 Algorithms written by an
independent developer interact with data and
utilities from the primary program via the API.
A software program initially created by one
developer and later extended by an independent
programmer can be illustrated by the set of
functions on a calculator. A developer creates a
simple program comprising addition, subtraction, multiplication and division procedures.
These mathematic functions are supplemented
by sine, cosine and tangent trigonometric
functions later added by an independent programmer. If the original developer provides
access to necessary elements of the calculator
software via an open API, the new functions can
be plugged in to the simple calculator without
changing the source code.5
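The calculator example can be sketched in code. The following Python sketch is illustrative only; the class and method names (Calculator, register, enable) are invented for this article and are not taken from any real product. The core program ships four arithmetic operations, and an independent developer plugs in trigonometry through the registry without touching the core source code:

```python
# Hypothetical plug-in pattern for the calculator example. All names are
# invented for illustration; this is not code from any real product.
import math


class Calculator:
    """Core program: four arithmetic operations plus a plug-in registry."""

    def __init__(self):
        # Built-in features shipped by the original developer.
        self._functions = {
            "add": lambda a, b: a + b,
            "subtract": lambda a, b: a - b,
            "multiply": lambda a, b: a * b,
            "divide": lambda a, b: a / b,
        }
        self._plugins = {}    # name -> function supplied by a plug-in
        self._enabled = set() # plug-ins the user has switched on

    # --- the open API used by independent developers ---
    def register(self, name, func):
        """Add a new function without changing the core source code."""
        self._plugins[name] = func

    def enable(self, name):
        self._enabled.add(name)

    def disable(self, name):
        self._enabled.discard(name)

    def call(self, name, *args):
        if name in self._functions:
            return self._functions[name](*args)
        if name in self._enabled:
            return self._plugins[name](*args)
        raise KeyError("unknown or disabled function: " + name)


# An independent developer supplies trigonometric functions as a plug-in.
calc = Calculator()
for trig in ("sin", "cos", "tan"):
    calc.register(trig, getattr(math, trig))

calc.enable("sin")
print(calc.call("add", 2, 3))  # core feature -> 5
print(calc.call("sin", 0.0))   # plug-in feature -> 0.0
```

Because the trigonometric functions live behind the registry, they can be switched on and off per user, which is exactly the benefit discussed below: users who want a simple calculator never see them.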
A benefit of being able to add to software
without changing its original makeup is that
extra features, such as the trigonometric functions in the example, are not essential and can
therefore be turned on and off. Users who require
a simple calculator can enjoy a refined interface
with fewer commands and concepts to comprehend. When needed, the additional features can
be accessed in a variety of ways, such as from the
menu bar.
The concept of dually developed software goes
even further. For example, an application with a
large user base, such as a Web browser, can be
combined with an extensibility API to create a
software ecosystem made up of the original
developer and a community of independent developers who work to extend the core software
program. The ecosystem components deliver more
functional value when combined than they do individually, providing mutual benefit to the original
developer, independent developers and users.
In the oil and gas industry, many complex
software packages address the multifaceted
challenges of hydrocarbon recovery. For example,
Petrel seismic-to-simulation software contains
tools for many elements of geology and geophysics
(G&G) workflows. With each new version the
development team adds features to satisfy the
technical demands of the industry and to
address software efficiency, reliability and user-friendliness. Project managers make difficult
decisions in determining which of the many
proposed features will be developed and which
will not.
To provide users with more G&G workflow
capabilities, Schlumberger recently created an
API to open the Petrel software to third-party
software vendors. This enables the company’s
developers to concentrate on primary functionality while independent developers provide additional
components in the form of plug-ins. New modules
vary in complexity. A simple time-saving algorithm that automates a manual data-manipulation
process can be created in minutes by anyone with
basic programming skills. However, a plug-in providing more sophisticated capabilities such as
electromagnetic modeling requires a greater
commitment by a team of programmers and oilfield experts.
Opening the software benefits both independent developers and Schlumberger because it
disconnects the plug-in development process
from the Petrel release schedule. Therefore, new
features can be created and used when they are
available, and the IP of new features always
remains in the hands of the owners. The freedom
to build upon this software is provided through
the Ocean application development framework.
The Ocean framework, based on industry-standard programming tools such as the
Microsoft .NET framework and Visual C# language, provides a programming interface to the
inner workings of the Petrel suite. Independent
programmers can write their own algorithms to
interact with existing components—such as
property modeling or volume calculation—and
then display the results of the interaction within
the software environment.
This article describes the concept of open
software and how it is being used to enhance the
capabilities of complex software. The first case
study demonstrates a client’s use of the Ocean
framework to develop new rock physics capabilities. The second study focuses on its use by an independent software vendor to create an electromagnetic modeling module. Also discussed is the adoption of the Ocean framework by the academic community.

> Example G&G workflow from screenshots, with panels for prestack seismic data processing, seismic inversion, rock physics and synthetic seismic modeling, poststack seismic interpretation, reservoir modeling, electromagnetic data interpretation, well planning, reservoir simulation and real-time geosteering. The process begins by importing and interpreting information including seismic data (top left) and electromagnetic data (bottom left). Seismic inversion may then be performed before the reservoir model is constructed. Some steps in the workflow affect others; for example, synthetic seismic data (center) are generated to confirm the accuracy of reservoir model properties. If a significant misalignment exists, the model can be updated and the checking process repeated. It is important to identify such problems early in the workflow. Reservoir simulation (right) is an expensive and time-consuming process; this step and several before it must be repeated if mistakes were made in building the reservoir model. With information on the planned wells (bottom right) available during the drilling phase, it is now possible to react to real-time LWD data (Real-time geosteering, top). Communication among experts working in different domains on a shared data model is an effective means of avoiding, identifying and correcting errors. A centralized data model and unified software package, which can manage all workflow steps, help domain experts in tackling such problems. A familiar shared system also improves user efficiency when resolving any issues.
G&G Software Options
A typical geology and geophysics workflow
involves gathering data from a variety of sources,
processing the data and then combining the
results for interpretation. This workflow is not
unidirectional or one-dimensional; discoveries
late in one branch of the workflow can require
past processes to be revisited or input data to be
changed (above).
From start to finish, the work undertaken in
a G&G project may take several months to complete. Many geologists, geophysicists, engineers
and stakeholders work on a project to plan one
or more wells. The process relies heavily on software to handle intensive tasks such as inverting
for rock properties from borehole readings
or picking horizons through seismic stacks.
Because of the complex nature of each task, an
E&P company may choose several software packages to meet all its workflow requirements. In
addition, some companies develop their own
software or algorithms to deal specifically with
challenges relating to conditions in a particular
geographic area.
Using several different software programs to
complete a project increases the risk of data
migration–related errors, such as those that may
occur when saving results from one program and
importing them into the next. Training scientists
to use several different programs is also not ideal.
A simple solution is to have one software package
that handles all elements of the G&G workflow.
But it is unlikely that a single software package
can meet the needs of every user; E&P companies
have their own specific requirements based on
their asset portfolios.
One way to make using several applications less complicated is to create real-time links between the individual software programs. In this symbiotic approach intermediate data are shared across programs, making it easier to identify data migration issues. This method also reduces time lost during import and export routines, since data transfer is handled automatically by the applications. An extension of this concept allows each program to control some features in the next. This is especially helpful for algorithms that may have no graphical user interface (GUI). In this case one software package acts as the host, and its GUI can be used to control an algorithm with no interface, which is easier for some users than writing text-based commands.6 In addition, such algorithms can be created more rapidly without having to develop and debug a GUI.

A potential problem with the symbiotic approach is that proper functioning of links across programs may depend on their release versions. Most software changes when a new version is released; consequently, links across applications might be completely broken or at least become problematic. Although some linked software is made by just one company, many packages are developed by more than one. It is difficult to align the development paths of software projects. If a planned update is postponed or altered, this will likely affect the interactions among all connected applications.

Goal: Add two numbers together and return the answer

Version 1
Independent programmer’s code:
• Create integer variables A, B and C
• Make A = 2 and B = 3
• Call API code: C = Add (A and B)
API code:
• Function Add (integer X and Y)
• Create integer variable Answer
• Answer = Addition (X and Y)
• Return Answer
Target software code:
• Function Addition (integer i1 and i2)
• Create integer variable i3
• i3 = i1 + i2
• Return i3

Version 2
Independent programmer’s code (unchanged):
• Create integer variables A, B and C
• Make A = 2 and B = 3
• Call API code: C = Add (A and B)
API code (public code unchanged; private code changed to reflect the change in the target software code):
• Function Add (integer X and Y)
• Create integer variable Answer
• AddTwoIntegers (X, Y and Answer)
• Return Answer
Target software code (private code changed):
• Function AddTwoIntegers (integer i1, i2 and i3)
• i3 = i1 + i2

> The job of an API. An independent programmer creating code (left) needs to use a function in a target software program (right) but does not have permission. The owner of the target program creates an API (center) providing access to the private code. The API shares data, software event states and program functionality without compromising the target program. In this example the target software is updated from Version 1 to Version 2. The independent program is unaware of the changes and remains the same. Without the API this connection would break. With due diligence the target software developer updates the API to respect the change in function name from Addition to AddTwoIntegers and the data variable parsing from X and Y to X, Y and Answer. However, these functions and data variables are masked to behave in the same way as in Version 1 and therefore are compatible with the independent program. As long as the independent program and the target software continue to conform to the API language, any changes made to these programs will not break the link between them. This is an important feature because software is often upgraded without synchronization among the developer companies.
A response to this problem is to create an
interface language that does not change very often
and that enables communication with a software
program. Application programming interfaces
provide access to the functionality of software
packages, and they are also good communication
languages. Essentially they can be thought of as a
translator: A publicly declared input language is
converted into a private language, which is then
used in a software program. The public elements of
an API make up the communication interface,
which changes infrequently, while the private elements of an API can change as often as necessary.
Private elements are frequently changed and new
ones created to add functionality and improve
software stability (above).
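The translator behavior can be sketched in a few lines of Python. The public function add keeps its name and signature across releases, while the private routine it delegates to changes from one version to the next; the names mirror the Add/Addition/AddTwoIntegers pseudocode used in the figure and are purely illustrative, not real Petrel or Ocean code:

```python
# Sketch of an API as a translator: a stable public function masks changes
# in the private implementation. Names follow the article's figure; nothing
# here is taken from a real product.

# --- Version 2 of the target software (private code, changed) ---
def _add_two_integers(i1, i2, result):
    """New private routine: returns its answer through an output argument."""
    result.append(i1 + i2)


# --- Public API (unchanged for callers since Version 1) ---
def add(x, y):
    """Stable entry point: same name and signature as in Version 1."""
    answer = []
    _add_two_integers(x, y, answer)  # translate to the new private call
    return answer[0]


# --- Independent programmer's code (identical across both versions) ---
a, b = 2, 3
c = add(a, b)
print(c)  # -> 5
```

As long as callers conform to the public signature, the private code behind it can be renamed, restructured or rewritten without breaking the link.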
For workflows that require several software
programs to work together, sharing data and
controlling features, an API is well suited to
maintaining a stable relationship among them
all. Development guidelines for GUI design, data
types and event states are provided with frameworks and can help to create and maintain
relationships between plug-ins.
While an API goes a long way toward providing a development environment that maintains
stable links between software updates, it is still
not a perfect solution. Not only do software programs change over time but programming
technologies and computer hardware do as well.
For example, multithreading has become a viable
programming technology for workstation software because mainstream CPUs now contain
several cores. Multithreading is a significant
change to single-threaded applications. These
changes may require an API to be rewritten, thus
breaking any links with current software.
To protect the interests of independent developers, software owners can restrict significant API changes to longer development cycles, for example scheduling such changes over a two- or five-year period.

6. Text-based interfaces require users to memorize the name of each function or spend time looking up the name. A single mistyped command can cause an entire algorithm to either stop working or return an inaccurate result. Well-made GUIs eliminate or constrain syntax errors made by users; however, most GUIs lack features to stop users from making logic-based errors.

Reservoir Model Verification Using Synthetic Seismic Data

The process of building reservoir models can be lengthy. Data from multiple sources are input to shape an electronic model through expert interpretation and a series of algorithmic transformations. Model parameters and their uncertainties may be subjective, and because of the range of uncertainties, many geologic model realizations may not align closely with the original input data.

To ensure the quality of its geologic models, Shell has implemented a new model verification workflow. Modelers generate synthetic seismic data from geologic models using a forward-modeling process. The synthetic data can then be compared with the original seismic data to verify alignment and identify mismatches relating to reservoir geometry, reservoir thickness and property distribution.

> Closed loop process. The acquired seismic volume (left) is used to create the framework of the reservoir model (center). A synthetic seismic volume (right) is generated from the reservoir model. It is then compared with the acquired version to calibrate assumptions made about model properties, and the process is repeated.

> Case studies. The Rock3D Synthetics module provided the synthetic seismic volumes for both case studies, Case Study A and Case Study B. In each case geophysicists display acquired seismic data alongside synthetic data by modeling the properties on vertical slices. The highlighted regions (white circles) show a lack of alignment; in both cases the reservoir model properties were recalibrated and the process was repeated until they aligned.
Shell uses the Petrel software suite as its
primary platform for geologic reservoir modeling
and has incorporated this new proprietary workflow as a module of its own modeling workflow.
This approach has enabled its software developers to take advantage of existing modeling tools
and concentrate on delivering new rock physics
functionality. By making use of existing capabilities, Shell was able to reduce development time
compared with that required to create a standalone application. Two new plug-ins were created:
the Rock3D and Rock3D Synthetics modules.
The user interface was an important design
criterion for the Rock3D plug-ins; conforming to
existing interface behavior helps to reduce training time and also improves user efficiency. The
Ocean framework provided a set of design tools
and guidelines to help Shell designers build an
interface with the same appearance and response
as the Petrel software, which their geologic
modelers were already trained to use.
Generating synthetic seismic data is a two-step
process using the new workflow. In the first step, a
modeler inputs rock and fluid properties from the
existing reservoir model into the Rock3D module.
Acoustic properties and impedances are then
generated using relationships between the model
properties and the acoustic properties, such as
velocity and bulk density. In the second step, the
modeler uses the Rock3D Synthetics module to
generate a synthetic seismic cube from these
acoustic properties (above).
Executing this workflow directly within the
Petrel modeling environment has many advantages. The reservoir geologist and geophysicist
discuss the set of rock properties to be applied.
This facilitates greater understanding between
earth science disciplines on how the geologic
model was built and on the uncertainties surrounding the seismic data used to constrain the
model. Working within the same modeling environment also facilitates discussion among the
reservoir modeler, petrophysicist and seismic
interpreter to identify where changes to the
input interpretation may improve the model
quality. Finally, a historical audit trail of each
modeling step created by the Petrel software
provides details of interpretation and modeling
decisions taken at each stage of the project.
Typically, interpreters perform a seismically
constrained quality assessment early in the process of building the reservoir model. This ensures
that models built from measurements made at the
high-resolution well log scale are consistent with
the lower resolution seismic response prior to
advancing to more detailed modeling within the
G&G workflow. The quality assessment also
informs decisions on the level of seismic inversion
needed for any modeling project, optimizing
application of Shell proprietary seismic inversion technology.
The new verification workflow demonstrated
a mismatch between synthetic seismic and processed data in a number of models (previous
page, bottom). By resolving these large-scale
modeling issues earlier in the process than possible using traditional approaches, Shell
potentially saved a significant amount of time in
project delivery.
Electromagnetic Modeling
Independent software developers can make
use of the Ocean framework to deliver software
products and take advantage of the large Petrel
user base. Blueback Reservoir, a reservoir
modeling consulting company, formed a software development team in 2007 to capitalize on this market. The team’s first project was a collaboration with Electromagnetic GeoServices, a company that supplies electromagnetic services. The new
software product adds electromagnetic modeling
(EM) capabilities to the Petrel suite and is commercially available as the BRIDGE EM Data
Integrator plug-in. The new capabilities demonstrate the power of combining a technology-specific application with a broad-use modeling package, as applied by a third-party developer.
When the project began, the Blueback
Reservoir team had no EM experience; therefore,
all domain knowledge was provided by the service
company. Although the framework delivered the
capabilities to create the new module without
Schlumberger collaboration, the software development team used the Ocean support Web site
throughout the project for technical questions.7
One of the project’s major challenges was that
it needed an entirely new data type to represent
the electromagnetic data. The two main requirements were the visualization of the data in all
supported displays such as the Petrel 2D and 3D
canvases, and data handling between connected
functions such as 3D grid property modeling and
volume and orthographic slice probe creation.8
Electromagnetic data provide resistivity properties throughout survey areas by detecting waves
that have propagated through the subsurface of
the Earth.9 Two basic methods for EM surveys are
available: magnetotellurics, which uses naturally
occurring EM waves caused by the interaction of
the solar wind and the Earth’s magnetosphere, and
a newer technique, controlled-source electromagnetics (CSEM), which incorporates an artificial
source of EM waves. Using the new BRIDGE EM
plug-in, modelers can combine CSEM inversion
data efficiently with seismic and gravity surveys to
enhance model calibration (right).
There are several main steps involved in executing a CSEM project using the new plug-in.
First, a geoscientist undertakes a feasibility study
to evaluate whether a CSEM survey is likely to
provide data signals that are of sufficient quality
to be interpreted. Considerations for the study
include the presence of salt, large topographic
variation of the seabed and highly faulted structures, all of which interfere with the CSEM
signal. The investigation is performed in the
Petrel modeling environment: The geoscientist
creates a resistivity model based on existing
knowledge of subsurface structures and rock
types that is obtained from seismic and geologic
surveys and well logs. The model is then evaluated by a team of experts to determine whether
to proceed with a CSEM survey.
The next phase is to plan the field operation.
CSEM receivers are placed directly on the
seabed. The BRIDGE EM module in the Petrel
environment allows a survey planner to identify
good locations for receivers: aligning with areas of geologic interest, avoiding subsurface structures that attenuate the CSEM signal and avoiding seabed locations that are unfit for receiver deployment.
The planner also uses the plug-in to plot the
optimal course for the source vessel, which will
tow a CSEM transmitter over the survey area in a
specific configuration.
Once a survey is completed, the data undergo
QC and the CSEM attributes such as electric
magnitude and phase are interpreted. It is during
this stage that data are integrated with the reservoir model. The QC and interpretation processes
are BRIDGE EM plug-in capabilities, designed to streamline the time-consuming processes as much as possible. CSEM results are typically calibrated with existing information, such as seismic and well log data, within the Petrel workflow.

> Usefulness of EM data. When calibrated with seismic survey data (top right) and wellbore resistivity measurements (top left), surface-acquired EM data are used to generate a model of resistivity zones over a large survey area. When EM inversion is performed, a resistivity volume is created. It can then be visualized using several modeling tools. These resistive formations (purple 3D object and red, orange and yellow colors overlaid on seismic slice, bottom) may be salt, basalt or hydrocarbon-bearing zones.
7. The BRIDGE EM product is independent of the EM plug-in
bundled with WesternGeco EM services.
8. Canvas, in computer graphics terms, describes a region
that can be painted into with electronic content such as
2D well logs, charts, graphs or 3D objects. A 3D grid is a
representation of the reservoir divided into an irregular
grid of 3D cells. Each cell can contain multiple reservoir
properties, such as porosity, permeability and resistivity.
This discrete grid is designed to simplify computational
algorithms and to work within current computing
capabilities. An orthographic slice is a 2D flat plane that
is locked to an axis; for example, vertical slices are
parallel to the z-axis. A volume extends a plane in three
dimensions visualized as stacked slices or a cloud of
voxels—3D pixels. Slice and volume probes are textured
with property values that represent the intersection of
the probes with the reservoir structure.
9. Brady J, Campbell T, Fenwick A, Ganz M, Sandberg SK,
Buonora MPP, Rodrigues LF, Campbell C, Combee L,
Ferster A, Umbach KE, Labruzzo T, Zerilli A, Nichols EA,
Patmore S and Stilling J: “Electromagnetic Sounding
for Hydrocarbons,” Oilfield Review 21, no. 1
(Spring 2009): 4–19.
> BRIDGE EM workflow from screenshots, with panels for CSEM survey planning, quality control and interpretation, and calibration. The module provides planning and QC tools specific to CSEM operations within the Petrel suite. Surface resistivity measurements are normalized and used to identify the locations that provide good signal for CSEM receivers (bottom left). Chosen receiver locations are then used to plot towing paths for CSEM surveys (top left). After a survey, the application provides data formats for import of EM data, such as field strength and phase, allowing a QC check through a user interface that has been optimized to improve the efficiency of the process (top right). The BRIDGE EM plug-in provides a new EM data type for the Petrel data model, enabling the user to view the CSEM survey alongside well logs and seismic data (bottom right).
A volumetric cube of resistivity data can be
created from inverted 3D CSEM data. Using
Petrel modeling tools, a geoscientist can identify
3D resistivity compartments. When these are
compared with seismic data, the results can provide an oil and gas company with the information
it needs to move on to well planning (above).
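The compartment-identification step can be illustrated with a simple sketch: flood-filling connected cells of an inverted resistivity slice that exceed a cutoff. The grid values, units and cutoff below are invented for illustration, and real interpretation tools work on full 3D grids with far richer criteria:

```python
# Hedged sketch of identifying resistivity compartments: group adjacent
# cells above a cutoff on a 2D slice via flood fill. Values are invented.

def compartments(grid, cutoff):
    """Return a list of sets; each set holds the (row, col) indices of one
    connected group of cells whose resistivity meets or exceeds the cutoff."""
    rows, cols = len(grid), len(grid[0])
    seen, groups = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or grid[r][c] < cutoff:
                continue
            stack, group = [(r, c)], set()
            while stack:
                i, j = stack.pop()
                if (i, j) in seen:
                    continue
                seen.add((i, j))
                # Bounds check first; short-circuit guards the grid access.
                if 0 <= i < rows and 0 <= j < cols and grid[i][j] >= cutoff:
                    group.add((i, j))
                    stack += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            groups.append(group)
    return groups

# Toy resistivity slice in ohm.m: two separate resistive bodies stand out
# against a conductive background.
slice_ohm_m = [
    [1.0,  1.2, 50.0, 55.0],
    [1.1,  1.0,  1.3, 60.0],
    [40.0, 1.2,  1.1,  1.0],
]
bodies = compartments(slice_ohm_m, cutoff=10.0)
print(len(bodies))  # -> 2
```

Each returned group is a candidate compartment that an interpreter would then compare with seismic geometry before drawing any conclusion about salt, basalt or hydrocarbons.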
The initial team working on the BRIDGE EM Data Integrator module comprised two developers, who created a prototype. Two more developers joined the team to complete the commercialization process, which included software testing, document creation, user training and user support. The prototype development took four months; this reflects the length of time needed to create and properly integrate a new complex data type. Blueback Reservoir continues to provide new tools using the Ocean framework.
The Role of Academia in R&D
Many industries share the view that academic
institutions can play an increasing role in their
R&D programs. Especially during periods of economic downturn, research faculties typically have more freedom than industry to perform fundamental studies and investigations into creative, new and abstract areas. Academic research has several entry points for companies to engage. Enlisting the skills of a college student is a low-level entry point, and involving a professor, a postdoctoral researcher and a research assistant is a much higher-level one. This flexibility
enables sponsors to identify the level of research
they require and narrow the level of investment
needed for a project. This workflow might also be
beneficial for companies exploring blue-sky
research, which is becoming much harder to justify outside of academia.
There are other ways this connection between a company and academia benefits both collaborators. Universities become more aware of the needs of employers and may choose to adjust their academic programs accordingly. Furthermore, students who are involved in company research projects become strong candidates for future recruitment.
It is sometimes difficult for companies and
academic institutions to reach acceptable terms
of agreement for sharing information. For joint
development of E&P software, safeguards must be in place to protect the sponsor’s IP while providing universities access to the information they need to develop new ideas. The Ocean framework, which is based on API technology, enables universities to develop G&G software in the form of plug-ins that can interact with other plug-ins provided by the sponsor company. With this model, companies can protect their IP within the framework and extend their G&G workflows through investment in university research programs. By using the framework tools and the functionality provided by existing plug-ins and the Petrel suite, universities can build on top of them. This building-block approach can save a great deal of development time compared with rewriting all of this functionality each time.
In August 2009, Schlumberger launched an
initiative to strengthen industry relations with
academia. The Ocean for Academia Program,
with the support of oil and gas companies, engages
universities in select R&D topics. These
categories include information sciences and
technologies, petroleum engineering, earth sciences and cognitive sciences. The program helps the industry establish collaboration with universities for the development of specific Petrel plug-ins to enhance G&G workflows.
Looking Ahead
Open frameworks based on API technology are
building a development culture that is mutually
beneficial to the original software owners,
independent software vendors and users of the
technologies produced. For example, frameworks
can be used by companies to develop software
functionality that may be unique to their needs.
If those needs are not unique, the developer may choose to make the functionality commercially available, thereby entering competition with other developers. In such an environment competition can drive innovation, improve quality and lower cost. Since the Ocean framework was introduced, it has been used to create more than 100 new modules. These modules have been created by Schlumberger, oil and gas companies, academic institutions and independent software vendors.
—MJM