Output

MINISTRY FOR EDUCATION AND SCIENCE, RUSSIAN FEDERATION
FEDERAL STATE AUTONOMOUS ORGANIZATION OF HIGHER EDUCATION
«NOVOSIBIRSK NATIONAL RESEARCH STATE UNIVERSITY»
(NOVOSIBIRSK STATE UNIVERSITY, NSU)
Faculty: Economics
Chair: Political Economy
Department: 38.04.02 Management
Master's Educational Program: Oil and Gas Management
GRADUATE QUALIFICATION PAPER
MASTER'S DISSERTATION
Kvasova Alena Sergeevna
Paper title: Application of artificial intelligence for predicting CO2 emissions from energy consumption
«Admitted to defense»
The head of the chair: Dr. of Econ. Sciences, Professor Filimonova I.V./…………..
Scientific Supervisor: Ph.D., Assoc. Professor Provornaya I.V./………...
«……»………………20…
«……»………………20…
Date of defense: «……»………………20…
Novosibirsk, 2017
INTRODUCTION
CHAPTER 1. ANALYSIS OF APPROACHES AND METHODS OF AI DEVELOPMENT
1.1 TERMINOLOGY AND HISTORY OF ARTIFICIAL INTELLIGENCE DEVELOPMENT
1.2 USSR AND RUSSIAN EXPERIENCE
1.3 FOREIGN EXPERIENCE
CHAPTER 2. METHODOLOGICAL APPROACH TO BUILDING AND TRAINING NEURAL NETWORKS AND ITS APPLICATION IN THE OIL AND GAS INDUSTRY
2.1 GENERAL PRINCIPLES OF BUILDING NEURAL NETWORKS
2.1.1 Processing units
2.1.2 Connections between units
2.1.3 Activation and output rules
2.2 TRAINING OF ARTIFICIAL NEURAL NETWORKS
2.2.1 Modifying patterns of connectivity
2.3 APPLICATION OF NEURAL NETWORKS IN THE OIL AND GAS INDUSTRY
2.4 AI APPLICATIONS IN DRILLING SYSTEM DESIGN AND OPERATIONS
2.5 AI IN WELL PLANNING OPERATIONS
CHAPTER 3. THE APPLICATION OF NEURAL NETWORKS FOR PREDICTING CO2 EMISSIONS FROM ENERGY CONSUMPTION
3.1 CHARACTERISTICS OF FACTORS THAT INFLUENCE CO2 EMISSIONS
3.2 THE PROPOSED MODEL FOR THE CO2 EMISSION ESTIMATION PROBLEM
CONCLUSION
REFERENCES
Introduction
The global energy market structure has changed dramatically. The sharp decline in oil prices over the past two years is not the result of anyone's conspiracy; it is a new market equilibrium that has emerged from innovative breakthroughs in oil and gas production. Accordingly, the advantage will belong to those who can quickly adapt to the new realities by reducing costs and improving production efficiency. Until now, the most significant change has been the shale revolution. Technology will become a new engine of growth. From "smart fields" to price forecasting, the methods of artificial intelligence are often cited as one of the driving forces of the technological breakthrough in the oil and gas industry.
Nowadays, the main vector of development is directed towards what can be called the "digitization" of the oil industry: automation, reducing the direct involvement of people in a growing number of processes, and, most importantly, reducing the "human factor" and the probability of errors in management decisions. Technologies based on artificial intelligence make it possible to address these tasks.
The main fields of application of artificial intelligence in the oil and gas industry can be divided into three areas: exploration, production, and strategic planning. In exploration, artificial intelligence allows seismic data and exploration drilling results to be interpreted more effectively. As a consequence, it reduces the number of wells drilled and tests conducted to determine the characteristics of deposits, resulting in savings of time and money.
The relevance of this research is driven by the fact that climate pollution due to carbon emissions has become a serious problem that affects countries in many respects: health, climate, agriculture, economics, and tourism. Since 1965 the amount of CO2 has been growing rapidly. Many scientists consider global warming caused by CO2 emissions to be more dangerous and threatening to the world than terrorism. There is a direct link between the growth of carbon dioxide emissions and the increase in the average global air temperature. Adjusting energy policies is necessary to avoid the pollution problem and to keep the atmosphere clean.
The degree of elaboration of this theme is increasing every year. Artificial intelligence began its development in the 1950s, and since then many talented scientists from different countries have made great progress in creating intelligent machines and deep learning. The result of their work was the creation of such systems as neural networks, expert systems, fuzzy logic, and natural language systems. Application in the E&P industry goes back to 1989, with the first uses in well log interpretation, drill bit diagnosis using neural networks, and an intelligent reservoir simulator interface. AI has been proposed for solving many problems in the oil and gas industry, including seismic pattern recognition, reservoir characterisation, permeability and porosity prediction, drill bit diagnosis, estimating pressure drop in pipes and wells, optimization of well production, well performance, portfolio management, general decision-making operations, and many more.
The aim of this work is to investigate the features of applying artificial intelligence in the oil and gas industry and to examine the methodological approach to building and training neural networks.
To achieve this goal, a number of tasks must be solved in the course of the study:
1. Investigate the terminology of AI and summarize various papers and articles in order to assess the impact of different Russian and foreign scientists on the development of AI.
2. Study the theoretical foundations of building and training neural networks and highlight the benefits of artificial intelligence applications in the oil and gas industry.
3. Analyze modern methods and computer software for the application of neural networks.
4. Provide a solution for forecasting CO2 emissions from energy consumption, draw a conclusion about the effectiveness and prospects of this model for the industry, and develop proposals for oil companies.
The subject of the study is the methodological foundation of constructing and training neural networks.
The object of the study is the amount of carbon dioxide emissions caused by energy consumption.
The structure of the paper reflects the unity, content, logic, and results of the study on the problem of the application of artificial intelligence. The main structural elements of the paper are the introduction, three chapters, a conclusion, and a list of references.
The novelty of the work stems from the fact that in Russia the application of artificial intelligence began its development relatively recently, due to changes in the oil and gas industry and the sharp fall in oil prices. These new technologies will become vital for every country and for oil and gas operators; they will not only help to control and decrease the environmental damage from energy consumption but also bring a technological competitive advantage.
Chapter 1. Analysis of approaches and methods of AI development
1.1 Terminology and history of artificial intelligence development
Artificial intelligence (AI) attracted researchers' interest long before the present century. However, only in the middle of the 20th century, when computer technologies were actively developing, did fundamental and applied work on artificial intelligence become possible.
The study of artificial intelligence is a scientific field located at the crossroads of a number of disciplines: computer science, philosophy, cybernetics, psychology, mathematics, physics, chemistry, etc. The term "artificial intelligence" is usually used to refer to the ability of a computer system to perform tasks inherent to human intelligence (for example, problems of inference and learning). Any task for which the solution algorithm is not known in advance (or whose data is incomplete) can be attributed to the field of AI: for example, playing chess, reading text, translating text into another language, and so on. [10]
The term of "artificial intelligence" - was proposed by John McCarthy in 1956 at the workshop with
the same name in Dartmouth College (USA). [7] The seminar was devoted to the development of
methods for solving logic rather than computing tasks. In English, this phrase does not have the
slightly fantastic anthropomorphic coloring which it has acquired a rather unsuccessful Russian
translation. The word intelligence means "the ability to talk intelligently", rather than "intelligence",
for which there is intellect term.
Explaining his definition, John McCarthy points out: "The problem is that we cannot yet characterize in general which computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and do not understand the rest. Therefore, within this science, intelligence refers only to the computational component of the ability to achieve goals in the world." [12]
Let us consider one possible definition of artificial intelligence. Artificial intelligence is a software system that simulates human thinking on a computer. To create such a system, it is necessary to study the process of human thinking when solving particular tasks or making decisions in a particular area, to identify the main steps of this process, and to develop software tools that reproduce them on a computer.
A member of the Russian Association for Artificial Intelligence gives the following definitions of artificial intelligence [11]:
1) A scientific direction in which the problems of hardware or software simulation of human activities that have traditionally been considered intelligent are formulated and solved.
2) The property of intelligent systems to perform (creative) functions that are traditionally considered the prerogative of humans. An intelligent system is a technical or software system that can solve tasks traditionally considered creative, belonging to a particular domain whose knowledge is stored in the memory of the system. The structure of an intelligent system consists of three main blocks: a knowledge base, a solver, and an intelligent interface that allows communication with a computer without special data-entry software.
There are several types of artificial intelligence, among which three main categories stand out:
1. ANI, Artificial Narrow Intelligence: an AI specializing in one particular area. For example, it can beat the world chess champion in a chess game, but that is all it can do.
2. AGI, Artificial General Intelligence: an AI whose intelligence resembles a human's, that is, it can perform all the same tasks as a person. Professor Linda Gottfredson describes this phenomenon as follows: "General AI embodies generalized thinking skills, among which are the abilities to reason, plan, solve problems, think abstractly, compare complex ideas, learn quickly, and learn from experience."
3. ASI, Artificial Superintelligence. The Swedish philosopher and Oxford University professor Nick Bostrom gives the following definition of superintelligence: "This is intelligence that surpasses the human in almost all areas, including scientific inventions, general knowledge and social skills." [13]
The following directions of development are distinguished in AI:
• expert systems (ES), sometimes referred to as knowledge-based systems (KBS);
• natural-language systems (NL systems);
• neural networks (NN);
• fuzzy systems (fuzzy logic);
• evolutionary methods and genetic algorithms;
• knowledge extraction systems.
Some of the background work for the field of neural networks occurred in the late 19th and early 20th
centuries. This consisted primarily of interdisciplinary work in physics, psychology and
neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This
early work emphasized general theories of learning, vision, conditioning, etc., and did not include
specific mathematical models of neuron operation. [1]
McCulloch and Pitts were followed by Donald Hebb, who proposed that classical conditioning (as
discovered by Pavlov) is present because of the properties of individual neurons. He proposed a
mechanism for learning in biological neurons.
After the first computers were created, their capabilities in terms of calculation speed turned out to exceed human ones, so the scientific community faced the question: what are the limits of computers' capabilities, and will machines reach the level of human development? In 1950, one of the pioneers of computing, the English scientist Alan Turing, wrote the article "Can Machines Think?" [3], which describes a procedure by which it will be possible to determine the moment when a machine equals a human in terms of intelligence, called the Turing test.
The first practical application of artificial neural networks came in the late 1950s when Frank
Rosenblatt and his colleagues demonstrated their ability to perform pattern recognition.
Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and
powerful computers with which to experiment. During the 1980s both of these impediments were
overcome, and research in neural networks increased dramatically. New personal computers and
workstations, which rapidly grew in capability, became widely available. In addition, important new
concepts were introduced. Since then, artificial neural networks have been improved and applied in
aerospace, automotive, defense, transportation, telecommunications, electronics, entertainment,
manufacturing, financial, medical and the oil and gas industry to name a few. [1]
Information technology actively and successfully penetrates all spheres of human activity. Artificial intelligence is an integral part of computer science, but it significantly expands its capabilities, making it possible to solve poorly formalized problems.
Soon after its recognition as a separate branch of science, artificial intelligence was divided into two areas: neurocybernetics and "black box" cybernetics. These areas developed almost independently, with significant differences in methodology and technology. Only now has a tendency to reunite these parts become apparent.
1.2 USSR and Russian experience
The Russian nobleman and inventor Semyon Nikolayevich Korsakov (1787-1853), a pioneer of an early form of information technology and artificial intelligence, set the task of strengthening intellectual capabilities through the development of scientific methods and devices, in line with the modern concept of artificial intelligence as a natural intelligence amplifier. In 1832 S.N. Korsakov published a description of five mechanical devices he had invented, so-called "intelligent machines", for the partial mechanization of mental activity in search, comparison, and classification tasks. In the design of his machines, Korsakov was the first in the history of computer science to use punched cards, which played the role of a kind of database, and the machines themselves were essentially forerunners of expert systems [4-5].
In 1965-1980 a new direction was born: situational control (corresponding to knowledge representation in Western terminology). The founder of this scientific school was Prof. D.A. Pospelov. Special models of knowledge representation were created [Pospelov 1986].
Although the attitude toward new sciences in Soviet Russia was always wary, a science with such a "defiant" name did not avoid this fate either and was given a hostile reception in the Academy of Sciences [Pospelov, 1997]. Fortunately, even among the members of the Academy of Sciences of the USSR there were people who were not frightened by such an unusual phrase as the name of a scientific direction. Two of them played a huge role in the fight for the recognition of AI in our country: academicians A.I. Berg and G.S. Pospelov. [7]
From the 1960s onward, a number of pioneering studies were carried out at Moscow University and the Academy of Sciences, headed by Veniamin Pushkin and D.A. Pospelov. From the beginning of the 1960s, M.L. Tsetlin and colleagues developed questions related to the training of finite automata.
In 1964, Sergey Maslov's "An inverse method for the classical predicate calculus" was published in Leningrad; it was the first to propose a method for automatically searching for proofs of theorems in the predicate calculus.
Only in 1974 was the Scientific Council on the problem of "Artificial Intelligence" created at the Committee on System Analysis of the Presidium of the Academy of Sciences of the USSR. It was headed by G.S. Pospelov, with D.A. Pospelov and L.I. Mikulich elected as his deputies. M.G. Gaaze-Rapoport, Yu.I. Zhuravlev, L.T. Kuzin, A.S. Narinyani, D.E. Okhotsimsky, A.I. Polovinkin, O.K. Tikhomirov, and V.V. Chavchanidze were members of the council at different stages. [7]
In the late 1970s, the Dictionary of Artificial Intelligence, a three-volume reference book on artificial intelligence, and the Encyclopedic Dictionary of Computer Science were created, in which "Artificial Intelligence" was included among the other sections of computer science. The term "computer science" became widespread in the 1980s, while the term "cybernetics" gradually disappeared from circulation, remaining only in the names of institutions that emerged in the era of the "cybernetic boom" of the late 1950s and early 1960s [6]. This view of artificial intelligence was not supported in the West, since the boundaries between sciences are drawn differently there.
In 1980-1990, active research in the field of knowledge representation was conducted; knowledge representation languages and expert systems (more than 300) were developed. At Moscow University, V.F. Turchin created REFAL, a language of recursive functions.
In 1988 the AIA, the Artificial Intelligence Association, was created. Its members include more than 300 researchers. D.A. Pospelov, an outstanding scientist whose contribution to the development of artificial intelligence in Russia can hardly be overestimated, was unanimously elected president of the Association. The largest centers are in Moscow, St. Petersburg, Pereslavl, and Novosibirsk. The Scientific Council of the Association consists of leading researchers in the field of AI: V.P. Gladun, V.I. Gorodetsky, G.S. Osipov, E. Popov, V.L. Stefanyuk, V.F. Khoroshevsky, V.K. Finn, G.S. Tseitin, A. Ehrlich, and other scientists. Within the framework of the Association, a great deal of research is conducted; schools for young professionals, seminars, and workshops are organized; joint conferences are held every two years; and a scientific journal is published. [7]
The level of theoretical research on artificial intelligence in Russia was in no way below the world level. Unfortunately, since the 1980s applied research has been affected by a gradual technological lag. At the moment, the lag in the intelligent systems industry is about 3-5 years.
Table 1.1. The periodization of the achievements of Russian scientists in the study of AI

Period | Authors | Achievement
1956 | Scientists from Dartmouth College | The term "artificial intelligence" (AI) was proposed at the workshop of the same name.
After 1960 | Veniamin Pushkin and D.A. Pospelov; M.L. Tsetlin and colleagues | A number of pioneering studies were carried out at Moscow University and the Academy of Sciences. Tsetlin developed questions related to the training of finite automata.
1964 | Sergey Maslov | Maslov's "An inverse method for the classical predicate calculus", published in Leningrad, first proposed a method for automatically searching for proofs of theorems in the predicate calculus.
1965-1980 | Prof. D.A. Pospelov | A new direction, situational control, was born (corresponding to knowledge representation in Western terminology). Special models of situation and knowledge representation were created.
1974 | G.S. Pospelov; council members: D.A. Pospelov, L.I. Mikulich, M.G. Gaaze-Rapoport, Yu.I. Zhuravlev, L.T. Kuzin, A.S. Narinyani, D.E. Okhotsimsky, A.I. Polovinkin, O.K. Tikhomirov, V.V. Chavchanidze | The Scientific Council on the problem of "Artificial Intelligence" was created at the Committee on System Analysis of the Presidium of the Academy of Sciences of the USSR, headed by G.S. Pospelov.
1980-1990 | V.F. Turchin | Active research in the field of knowledge representation was conducted; knowledge representation languages and expert systems (more than 300) were developed. The recursive functions language REFAL was created at Moscow University.
1988 - present | President D.A. Pospelov; members V.P. Gladun, V.I. Gorodetsky, G.S. Osipov, E. Popov, V.L. Stefanyuk, V.F. Khoroshevsky, V.K. Finn, G.S. Tseitin, A. Ehrlich | The Artificial Intelligence Association was created. Its members include more than 300 researchers; the largest centers are in Moscow, St. Petersburg, Pereslavl, and Novosibirsk.
2014 | Vladimir Veselov, Yevgeny Demchenko | A program was developed that could pass the Turing test for the first time: during the Turing Test 2014 competition, the chat bot Eugene Goostman was able to convince 33% of the jury that they were communicating with a person (30% was necessary to pass the test).
Table 1.1 shows the time periods of artificial intelligence research by different authors, whose great impact on this topic is hard to overestimate.
Nowadays, besides the AIA, the Skolkovo innovation center is also an active developer in the field of artificial intelligence. At present, most of the center's attention is paid to robotics. Skolkovo Robotics conferences are held regularly, at which visitors have the opportunity to communicate with robots and their designers. Research in the field of artificial intelligence is also conducted by the company Yandex, which applies its innovations in its search engine. [8]
1.3 Foreign experience
The first work in which the main results in this direction were obtained was by McCulloch and Pitts. In 1943, they developed a computer model of a neural network based on mathematical algorithms and a theory of brain activity. They hypothesized that neurons can be viewed in a simplified way as devices operating on binary numbers and termed this model "threshold logic". Like their biological prototype, McCulloch-Pitts neurons were capable of learning by adjusting parameters describing synaptic conductance. The researchers proposed the construction of a network of electronic neurons and showed that such a network can perform almost any imaginable numeric or logical operation. McCulloch and Pitts also suggested that such a network is able to learn, recognize patterns, and generalize, i.e., that it has all the features of intelligence. [9]
Below we group the decades of research into stages to summarize the most important developments in this scientific field. Conditionally, seven stages can be distinguished in the development of artificial intelligence, each associated with a certain level of development and the paradigm implemented in the systems of the time.
Stage 1 (1950s): neurons and neural networks
This stage is associated with the first sequential-action machines, with very small (by today's standards) memory capacity, speed, and class of tasks. These were purely computational problems for which solution schemes were known and which could be described in some formal language. Adaptation tasks belong to the same class.
Stage 2 (1960s): heuristics
Search engines, sorting, and simple operations for compiling information, independent of the meaning of the data being processed, were added to the "intelligence" of the machine. This became a new starting point in the development and understanding of the automation of human activity.
Stage 3 (1970s): knowledge representation
Scientists recognized the importance of knowledge (in scope and content) for the synthesis of interesting algorithms for solving problems. This meant knowledge that mathematics could not yet work with, i.e., expert knowledge of a not strictly formal nature, usually described in declarative form: the knowledge of experts in various fields of activity, such as doctors, chemists, and researchers. Such knowledge became known as expertise, and systems operating on the basis of expertise became known as consultant systems and expert systems.
Stage 4 (1980s): machine learning
The fourth stage was a breakthrough. With the advent of expert systems, a new stage in the development of smart technologies began: the era of intelligent consultant systems that offered solutions, justified them, were able to learn and develop, and communicated with a person in his usual, although limited, natural language.
Stage 5 (1990s): automated processing complexes
The increasing complexity of communications and tasks required a new level of "intelligence" in software systems: protection against unauthorized access, information resource security, protection against attacks, semantic analysis and search for information in networks, etc. Intelligent systems became the new paradigm for the development of advanced protection systems of all types. They make it possible to create flexible environments in which all the required tasks are solved.
Stage 6 (2000s): robotics
The scope of robots is quite wide, extending from autonomous lawn mowers and vacuum cleaners to modern models of military and space technology. Models are equipped with navigation systems and all kinds of peripheral sensors.
Stage 7 (2008 onward): singularity
The creation of artificial intelligence and self-replicating machines, the integration of humans with computers, and a significant stepwise increase in the capabilities of the human brain through biotechnology.
Table 1.2. Periodization of foreign achievements in Artificial Intelligence research

Period | Authors | Achievement
1941 | Konrad Zuse | Built the first working program-controlled computer.
1943 | Warren McCulloch and Walter Pitts | Published "A Logical Calculus of the Ideas Immanent in Nervous Activity"; developed a computer neural network model based on mathematical algorithms and a theory of brain activity, introducing the term "artificial neuron". Like their biological prototype, McCulloch-Pitts neurons were capable of learning by adjusting the parameters describing synaptic conductance.
1949 | Donald Hebb | In his book "The Organization of Behavior" (1949), described the main principles of neuron learning.
1950s | Frank Rosenblatt | Invention of the perceptron network and associated learning rule, which demonstrated the ability to perform pattern recognition.
1950 | Alan Turing | Wrote the article "Can Machines Think?", which describes a procedure to determine the moment when a machine becomes equal to a human in terms of intelligence, called the Turing test.
1956 | John McCarthy and scientists from Dartmouth College | The term "artificial intelligence" (AI) was proposed at the workshop of the same name at Dartmouth College (USA).
1957-1958 | Frank Rosenblatt | Proposed a model of an electronic device that was supposed to simulate the processes of human thinking; the first operating machine, demonstrated two years later, could learn to recognize some of the letters written on cards brought to its "eyes", which resembled movie cameras.
1970s | Edward Shortliffe, Bruce Buchanan, Stanley N. Cohen | An expert system based on medical data that could identify the disease and calculate the required dose of antibiotics.
1980s | John Hopfield, David Rumelhart and James McClelland | 1) The use of statistical mechanics to explain the operation of a certain class of recurrent networks, which could be used as associative memory; 2) the back-propagation algorithm for training multilayer perceptron networks.
1997 | Supercomputer Deep Blue | Deep Blue won against the world chess champion Garry Kasparov.
2006 | - | Deep learning algorithms were introduced for the unsupervised training of neural networks with one or more layers.
Table 1.2 presents the main classic works in this field by foreign authors and the stages of development of artificial intelligence over the years.
In recent years, interest in artificial intelligence systems has increased significantly. At the same time, the level of development of modern technologies allows systems to be created that only add intelligence to our lives (autopilot systems, robot vacuum cleaners, washing machines with fuzzy logic, etc.) without reproducing human intelligence fully.
Many of the largest IT companies, such as Google, Facebook and Microsoft, are engaged in the development of artificial intelligence and publish the results of their research in open access. In addition, in recent years quite a number of start-ups have appeared on the market. As a rule, however, the giants quickly buy up successful startups: for example, Google recently purchased the startup DeepMind for a fabulous $500 million. As practice shows, considerable financial resources are invested in promising startups: Zuckerberg, Musk, and Ashton Kutcher recently invested about $40 million in the company Vicarious, which taught a computer to understand CAPTCHAs.
We can confidently say that the field of artificial intelligence is experiencing a real upturn today. More than 150,000 people enrolled in a Stanford online course on artificial intelligence held in 2013. More recently, TED announced a competition to develop a device with artificial intelligence able to adequately deliver a speech at its conferences. Every two years, leading scientists and researchers in the field of artificial intelligence gather at the international IJCAI conference, which analyzes current developments in the area and discusses further prospects.
Chapter 2. Methodological approach to building and training neural networks and its application in the oil and gas industry
2.1 General principles of building neural networks
The first attempt to create and study artificial neural networks is considered to be the work of W. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity" (1943), which formulated the basic principles of neurons and artificial neural networks. Although this work was only a first step, many of the ideas described in it remain relevant today.
Artificial neural networks are inspired by biology: they are made up of elements whose functionality is similar to most functions of the biological neuron. These elements can be arranged in a manner that may correspond to the anatomy of the brain, and they exhibit a large number of properties inherent in the brain. For example, they can learn from experience, generalize from previous precedents to new cases, and identify the essential features of input data that contain surplus information.
Neural networks are a set of parallel computations consisting of a multitude of interacting simple processes. Each simple computation takes place in a neuron: a simple element consisting of synapses (inputs and outputs) and the body of the neuron, where the computation occurs. [14]
A neural network is similar to the brain in two ways:
1. Knowledge is acquired during network training;
2. Knowledge is stored in the strengths of the interneuron connections, called synaptic weights.
An artificial network consists of a pool of simple processing units, which communicate by sending
signals to each other over a large number of weighted connections.
A set of major aspects of a parallel distributed model can be distinguished (cf. McClelland and Rumelhart, 1986; Rumelhart and McClelland, 1986):
• a set of processing units ("neurons", "cells");
• a state of activation $y_k$ for every unit, equivalent to the output of the unit;
• connections between the units; generally each connection is defined by a weight $w_{jk}$, which determines the effect that the signal of unit j has on unit k;
• a propagation rule, which determines the effective input $s_k$ of a unit from its external inputs;
• an activation function $F_k$, which determines the new level of activation based on the effective input $s_k(t)$ and the current activation $y_k(t)$ (i.e., the update);
• an external input (bias, offset) $\theta_k$ for each unit;
• a method for information gathering (the learning rule);
• an environment within which the system must operate, providing input signals and, if necessary, error signals.
Figure 2.1. The basic components of an artificial neural network. The propagation rule used here is the
‘standard’ weighted summation.
2.1.1 Processing units
Each unit performs a relatively simple job: receive input from neighbors or external sources and use it to compute an output signal that is propagated to other units. Apart from this processing, a second task is the adjustment of the weights. The system is inherently parallel in the sense that many units can carry out their computations at the same time.
Within neural systems it is useful to distinguish three types of units: input units (indicated by an index i), which receive data from outside the neural network; output units (indicated by an index o), which send data out of the neural network; and hidden units (indicated by an index h), whose input and output signals remain within the neural network. [15]
During operation, units can be updated either synchronously or asynchronously. With synchronous updating, all units update their activation simultaneously; with asynchronous updating, each unit has a (usually fixed) probability of updating its activation at time t, and usually only one unit will do so at a time. In some cases the latter model has advantages.
2.1.2. Connections between units
In most cases we assume that each unit provides an additive contribution to the input of the unit with which it is connected. The total input to unit k is simply the weighted sum of the separate outputs from each of the connected units plus a bias or offset term $\theta_k$:

$$s_k(t) = \sum_j w_{jk}(t)\, y_j(t) + \theta_k(t). \qquad (2.1)$$
A different propagation rule, introduced by Feldman and Ballard (Feldman & Ballard, 1982), is known as the propagation rule for the sigma-pi unit:

$$s_k(t) = \sum_j w_{jk}(t) \prod_m y_{j_m}(t) + \theta_k(t). \qquad (2.2)$$
Often the $y_{j_m}$ are weighted before multiplication. Although these units are not frequently used, they have value for gating of input, as well as for the implementation of lookup tables (Mel, 1990).
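As a minimal illustration, the two propagation rules can be written in a few lines of Python; the unit outputs, weights, bias, and sigma-pi groupings below are hypothetical values chosen only for demonstration:

```python
import numpy as np

def weighted_sum_input(y, w_k, theta_k):
    """Standard propagation rule (eq. 2.1): the total input s_k is the
    weighted sum of the connected units' outputs plus a bias term."""
    return np.dot(w_k, y) + theta_k

def sigma_pi_input(y, w_k, groups, theta_k):
    """Sigma-pi propagation rule (eq. 2.2): each weight multiplies the
    *product* of a group of unit outputs, so inputs can gate one another."""
    return sum(w * np.prod(y[list(g)]) for w, g in zip(w_k, groups)) + theta_k

# Hypothetical example: three presynaptic units feeding unit k.
y = np.array([0.5, 1.0, -0.3])      # outputs y_j of the connected units
w_k = np.array([0.2, -0.4, 0.7])    # weights w_jk into unit k
theta_k = 0.1                       # bias / offset term
print(weighted_sum_input(y, w_k, theta_k))

# Sigma-pi: two weighted terms, each a product over a group of units.
groups = [(0, 1), (1, 2)]           # indices of the units multiplied together
print(sigma_pi_input(y, w_k[:2], groups, theta_k))
```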
2.1.3. Activation and output rules
We also need a rule that gives the effect of the total input on the activation of the unit: a function $F_k$ that takes the total input $s_k(t)$ and the current activation $y_k(t)$ and produces a new value of the activation of unit k:

$$y_k(t+1) = F_k\big(y_k(t),\, s_k(t)\big). \qquad (2.3)$$
There are many activation functions; let us list the most common:
• linear function: the neuron output is equal to its potential;
• step function: the neuron takes the value 0 or 1 depending on $s_k(t)$ and the step threshold;
• linear with saturation: a linear transformation on the interval between two values A and B (constant outside this interval);
• multithreshold: the output takes one of q values, determined by q-1 thresholds;
• sigmoid (S-shaped) function, given by the logistic function

$$y_k = F(s_k) = \frac{1}{1 + e^{-s_k}}. \qquad (2.4)$$

Sometimes the hyperbolic tangent is used instead, yielding output values in the range [-1, +1];
• Gaussian function:

$$y_k = F(s_k) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(s_k - a)^2}{2\sigma^2}}. \qquad (2.5)$$
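The activation functions listed above are straightforward to implement. The following Python sketch, with an arbitrarily chosen Gaussian centre $a$ and width $\sigma$, evaluates the step, logistic (2.4), hyperbolic tangent, and Gaussian (2.5) functions on sample inputs:

```python
import numpy as np

def step(s, threshold=0.0):
    """Step function: 0 or 1 depending on s_k and the threshold."""
    return np.where(s >= threshold, 1.0, 0.0)

def sigmoid(s):
    """Logistic function, eq. (2.4); values in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-s))

def gaussian(s, a=0.0, sigma=1.0):
    """Gaussian function, eq. (2.5), with centre a and width sigma."""
    return np.exp(-(s - a) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

s = np.linspace(-3, 3, 7)   # sample total inputs s_k
print(step(s))
print(sigmoid(s))
print(np.tanh(s))           # hyperbolic tangent, values in (-1, +1)
print(gaussian(s))
```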
Combinations of neurons form a neural network. There are many different types of networks; they can be classified by the following features [14]:
• structure of connections;
• rule of signal propagation in the network;
• rule for combining incoming signals;
• rule for calculating the activation signal;
• learning rule.
Here are a few types of networks that solve the main problems: the multilayer perceptron, a multilayer network with direct links (MLP); networks with radial basis functions; self-organizing feature maps (SOFM, the Kohonen network); the discrete Hopfield network; bidirectional associative memory (BAM); recurrent networks; the Boltzmann machine; the probabilistic neural network (PNN); and the modular neural network (BP-SOM). Forecasting tasks are solved by the multilayer perceptron and by networks with radial basis functions; the remaining types solve classification (SOFM, PNN) or pattern recognition problems.
In terms of architecture, a neural network can be considered a directed graph with weighted connections, in which artificial neurons are the nodes. By architecture, neural networks can be grouped into two classes (Figure 2.2): feed-forward networks, whose graphs have no loops, and recurrent networks, or networks with feedback. The most common family of the first class, called the multilayer perceptron, has neurons arranged in layers with one-way connections between adjacent layers.
Figure 2.2 Architecture of neural networks
Fig. 2.2 shows typical networks of each class. Feed-forward networks are static in the sense that, for a given input, they produce one set of output values that does not depend on the previous state of the network. Recurrent networks are dynamic, since the feedback modifies the inputs of the neurons, leading to a change in network state. [23]
As for the pattern of connections, the main distinction we can make is between:
▪ Feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, no connections extending from outputs of units to inputs of units in the same layer or previous layers.
▪ Recurrent neural networks, which do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of such a network are important. In some cases, the activation values of the units undergo a relaxation process such that the network evolves to a stable state in which these activations no longer change. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behavior constitutes the output of the network. [20, 28]
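The distinction can be illustrated with a small Python sketch using hypothetical weights and a toy input: a feed-forward network computes its output from the current input alone, while a recurrent network carries a state that is fed back at every time step:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Feed-forward pass: data flow strictly input -> hidden -> output, no loops.
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))  # hypothetical weights
x = np.array([0.2, -0.1, 0.5])
hidden = sigmoid(W1 @ x)
output = sigmoid(W2 @ hidden)        # depends only on the current input

# Recurrent update: the hidden state feeds back into itself, so the
# result depends on the sequence of previous states as well.
W_in, W_rec = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
state = np.zeros(4)
for t in range(5):                   # iterate a few time steps
    state = sigmoid(W_in @ x + W_rec @ state)   # feedback connection
print(output, state)
```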
Nowadays, a huge amount of theoretical knowledge about neural networks exists. To organize this information, a short categorization of paradigms was created; it is presented in Table 2.1.
Table 2.1. Classification of the main neural network paradigms

Name of the neuroparadigm | Authors | Year | Area of application
Single layer perceptron | F. Rosenblatt | 1959 | Pattern recognition, classification/categorization
Back propagation | F. Rosenblatt, M. Minsky, S. Papert | 1960s | Pattern recognition, classification, prediction
Counter propagation | R. Hecht-Nielsen | 1986 | Pattern recognition, image restoration (associative memory), data compression
Instar network | S. Grossberg | 1974 | Pattern recognition
Outstar network | S. Grossberg | 1974 | Pattern recognition
Adaptive resonance (ART-1 network) | S. Grossberg, G. Carpenter | 1986 | Pattern recognition, cluster analysis
Hopfield network | J.J. Hopfield | 1982 | Search and recovery of data from their fragments
Hamming network | R.W. Hamming | 1987 | Pattern recognition, classification, associative memory, reliable transmission of signals in noisy environments
Kohonen network | T. Kohonen | 1984 | Cluster analysis, pattern recognition, classification
Maximum search network (MAXNET) | R.P. Lippman | 1987 | Together with the Hamming network, as part of neural network pattern recognition systems
Maximum search network with direct links (Feed-Forward MAXNET) | R.P. Lippman | 1987 | Together with the Hamming network, as part of neural network pattern recognition systems
Bidirectional associative memory (BAM network) | B. Kosko | Second half of the 1980s | Associative memory, pattern recognition
Boltzmann machine | J. Hinton, T. Sejnowski, H. Szu | 1985 | Recognition of images, radar and sonar signals
Neural Gaussian classifier | R.P. Lippman | 1987 | Pattern recognition, classification
Genetic training algorithm | J. Holland, D. Goldberg | 1975, 1988 | Training neural networks (recognition of sonar signals)

2.2 Training of artificial neural networks
The ability to learn is a fundamental property of the brain. In the context of ANNs, the training process has the following definition: neural network training is the setting up of the network architecture and weights for the effective performance of a specific task. The purpose of training is the selection of synaptic coefficients that allow the task to be solved satisfactorily.
Various methods to set the strengths of the connections exist. One way is to set the weights explicitly
using a priori knowledge. Another way is to train the neural network by feeding it teaching patterns
and letting it change its weights according to some learning rule. [15]
Learning situations can be classified into two distinct sorts:
• Supervised learning, or associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher or by the system containing the network (self-supervised).
• Unsupervised learning, or self-organization, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike in the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.
2.2.1. Modifying patterns of connectivity
Both learning paradigms discussed above result in an adjustment of the weights of the connections between units according to some modification rule. Virtually all learning rules for models of this type can be considered a variant of the Hebbian learning rule suggested by Hebb in his classic book Organization of Behaviour (Hebb, 1949). The basic idea is that if two units j and k are active simultaneously, their interconnection must be strengthened. If j receives input from k, the simplest version of Hebbian learning prescribes modifying the weight by

$$\Delta w_{jk} = \gamma\, y_j y_k, \qquad (2.6)$$

where $\gamma$ is a positive constant of proportionality representing the learning rate. Another common rule uses not the actual activation of unit k but the difference between the actual and desired activation for adjusting the weights:
$$\Delta w_{jk} = \gamma\, y_j (d_k - y_k), \qquad (2.7)$$

in which $d_k$ is the desired activation provided by a teacher. This is often called the Widrow-Hoff rule or the delta rule.
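As a minimal sketch of the delta rule (2.7), consider a single linear unit trained on a hypothetical teacher signal (the data set below is invented for illustration); the weights converge to the coefficients that generate the desired activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: unit k should learn d = 2*x1 - x2.
X = rng.normal(size=(100, 2))       # outputs y_j of the input units
d = 2 * X[:, 0] - X[:, 1]           # desired activations d_k from a "teacher"

w = np.zeros(2)                     # weights w_jk, initially zero
gamma = 0.05                        # learning rate

for epoch in range(50):
    for y_j, d_k in zip(X, d):
        y_k = w @ y_j               # actual activation of unit k
        w += gamma * y_j * (d_k - y_k)   # delta rule: eq. (2.7)

print(w)   # approaches [2, -1], the generating coefficients
```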
2.3 Application of neural networks in the oil and gas industry
Over the past decade, the use of machine learning, predictive analytics, and other artificial
intelligence-based technologies in the oil and gas industry has grown immensely. These technologies
have advanced over the last 18-24 months as the drop in oil price has driven companies to look for
innovative ways to improve efficiency, reduce costs and minimize unplanned downtime.
Artificial intelligence is an area of great interest and significance in petroleum exploration and production. Over the years it has made an impact on the industry, and its application has continued to grow within oil and gas. Application in the E&P industry goes back to 1989, with the first uses in well log interpretation, drill bit diagnosis using neural networks, and an intelligent reservoir simulator interface. AI has been proposed for solving many problems in the oil and gas industry, including seismic pattern recognition, reservoir characterization, permeability and porosity prediction, prediction of PVT properties, drill bit diagnosis, estimating pressure drop in pipes and wells, optimization of well production, well performance, portfolio management, general decision-making operations, and many more. This chapter considers the successful application of artificial intelligence techniques in one of the major aspects of the oil and gas industry, drilling, capturing the level of application and trends in the industry. [16]
Machine learning (ML) has had a slower adoption rate in the oil and gas industry, and though there are many supporters, there are also many skeptics about the real value it can bring.
Neural network technology is finding increasing application in the development of smart sensors and information processing systems in the oil and gas industry and in other strategically important industries. It allows neural network models of automation objects and applied neural network systems to be created, which significantly facilitate monitoring of the technical condition of oil and gas facilities; their structural and parametric identification is realized using neural network learning algorithms. [17]
The efficiency of industrial systems in the oil and gas industry created on the basis of artificial neural networks is determined by:
- the adequacy of the neural network models of the automation objects, which largely depends on the proper choice of the structural and functional organization of the neural networks used;
- the quality of preliminary data processing realized by neural-network smart sensors and data analyzers;
- the presence of neural network analyzers with the information processing functions needed for intelligent real-time data analysis (data mining). [18]
2.4 AI applications in drilling system design and operations
One of the major aspects of the oil and gas industry is drilling operations. The drilling industry is technology dependent, and drilling is considered among the most expensive operations in the world, requiring huge daily expenditures. Therefore, any tools that can improve drilling operations at minimal cost are essential and in demand during the pre- and post-planning of any activity. The number of publications on the application of AI in drilling operations indicates that this is a promising methodology for reducing drilling cost and increasing drilling operation safety by using previous experience hidden in reports or known by experts.
The complexity of drilling operations and unpredictable operating conditions (uncertainties regarding tool/sensor response, device calibration, and material degradation in extreme downhole pressure, temperature, and corrosive conditions) may sometimes result in inaccurate drilling data, misleading the driller about the actual downhole situation. Embedding smart decision-making models and optimized real-time controllers in the drilling system can therefore provide the driller with quick and intelligent propositions on key drilling parameters and on suitable preventive or corrective measures intended to bring the conditions back to an optimum drilling stage (Dashevskiy et al., 1999).
2.5 AI in well planning operations
Designing a well for safer, faster, and more economical operations requires complex, experience-based decision-making. The chief input information sources for an efficient well plan are normally offset well data, reservoir models, and drilling simulation results. AI has been tested in different well planning phases by experts all over the world. Figure 3.1 shows some potential prospects related to well planning.
Figure 3.1. Potential applications of AI in the well planning sector: drill bit selection; mud and fracture gradient prediction; casing shoe depth and collapse pressure determination; oilfield cement quality and performance estimation; platform selection (offshore); trajectory and directional mapping.
The selection of drill bits according to formation characteristics has been one of the sectors benefiting most from the application of AI (Figure 3.1). Trained artificial neural networks (ANNs) have been an important tool for decoding data, categorizing empirical relationships, and optimizing bit selection based on a user-defined information database. The database may include IADC bit codes for typical rock formations, rock strength data, geology, compaction characteristics, and conventional ROP values corresponding to the rocks. Given the user's input data, the ANN can learn the codes and numerical values and select the suitable bit for a particular drilling environment, whether a PDC, roller cone, diamond insert, or hybrid bit.
Figure 3.2. Base layout for drill bit selection by ANNs (National Oilwell Varco, 2013). Input data: 1) geological data (carbonate, salt, sandstones, grain size); 2) rock mechanics (rock strength, friction angle, vibration impact); 3) well drilling data (ROP, layer thickness, formation top); 4) location data (formation name, age, location). The ANN, refined by user input of new variable data, outputs the bit type, a performance prediction, and operating guidelines.
Casing collapse occurrence and depth determination can also take a neural network approach, using a simple spreadsheet program with a BPNN basis. As previously used in Middle Eastern and Asian countries, a back-propagating network with a user-defined number of internal (hidden) layers can be connected to input and output layers to provide an 'experienced' estimate of casing collapse depth for the wells to be drilled. The input layer can take a number of inputs, such as location, depth, pore pressure, corrosion rate, and casing strength, to analyze and provide an estimate of the expected collapse depth and the probability of casing collapse (e.g., within a number of years). [19]
Figure 3.3. Base layout for casing collapse and depth prediction by BPNNs. Input data: 1) total well depth; 2) corrosion weight factor; 3) failure time factor; 4) zone factor; 5) casing grade and strength; 6) latitude and longitude of the well; 7) geological anomalies. A BPNN with 2-3 layers outputs the collapse depth and the probability of collapse within 5 years (0 to 1).
A basic layout of the method is provided in Figure 3.3. However, the approach is relatively new and still needs further improvement in its result accuracy and the generalization of input data.
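A sketch of such a casing-collapse estimator, assuming synthetic stand-ins for the Figure 3.3 inputs and toy target relations (real offset-well records would be used in practice), can be written with a standard multilayer perceptron library:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-ins for the Figure 3.3 inputs: total depth, corrosion
# weight factor, failure time factor, zone factor, casing strength,
# latitude, longitude.
X = rng.uniform(size=(200, 7))
# Toy targets: a scaled collapse depth and a 5-year collapse probability.
y = np.column_stack([
    0.6 * X[:, 0] + 0.3 * X[:, 4],           # depth-and-strength relation (invented)
    1 / (1 + np.exp(-(2 * X[:, 1] - 1))),    # probability in (0, 1), driven by corrosion
])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8, 8),   # two hidden layers, as in Figure 3.3
                     activation="logistic",       # sigmoid units
                     max_iter=3000, random_state=0)
model.fit(scaler.transform(X), y)

# Estimate for a new well described by the same seven inputs.
print(model.predict(scaler.transform(X[:1])))     # [collapse depth, probability]
```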
Among the broader applications of AI methods, ANNs and BPNNs are the most widely used in drilling practice. Table 3.1 lists applications of AI techniques and their purposes since their emergence in the drilling operations area of oil and gas. [24-26, 29, 30]
Table 3.1. Timeline of AI techniques applied in drilling practices

Drilling Sector | Application | AI Approach | Researcher(s) | Year
Well Planning | Bit selection | ANN | National Oilwell Varco | 2013
Well Planning | Gradient prediction | GRNN | Sadiq and Nashwi | 2000
Well Planning | Casing collapse prediction | BPNN | Salehi, Hareland, Ganji, Keivana and Abdollahi | 2007
Well Planning | Cement quality / performance estimation | ANN | Fletcher, Coveney and Methven | 1994
Well Planning | Offshore platform selection | Hybrid (BPNN-GA) | Wang, Duan, Liu and Dong | 2011
Well Planning | Directional mapping | CBR | Mendes, Guilherme and Morooka | 2011
Procedural Optimization | BHA monitoring | ANN | Dashevskiy, Dubinsky and Macpherson | 1999
Procedural Optimization | Bit wear control | ANN | Gidh, Purwanto and Ibrahim | 2012
Procedural Optimization | Drag and slack-off load prediction | ANN | Sadiq and Gharbi | 1998
Procedural Optimization | Drillstring vibration control | ANN | Esmaili, Elahifar, Thonhauser and Fruhwirth | 2012
Procedural Optimization | Hole cleaning efficiency estimation | BPNN/MLR | Rooki, Ardejani and Moradzadeh | 2014
Well Stability | Kick, loss, leakage monitoring | ANN | Jahanbakhshi and Keshavarzi | 2012
Problem Solving | Stuck-pipe control and corrective measures | BPNN / (ANN-GA) hybrid | Shadizadeh, Karimi and Zoveidavianpoor | 2010
Pattern Recognition | Real-time drilling risk | FR / CBR | Lian, Zhou, Zhao and Hou | 2010
Pattern Recognition | Drilling equipment condition | ANN | Yamaliev, Imaeva and Salakhov | 2009
Critical Decision Making | Determination of feasible drilling procedure as per drilling conditions | CBR | Popa, Malma and Hicks | 2008
The main goal of seeking smart machine methods is to predict the occurrence of problems based on previous experience at reasonable cost and time. The reliability of a method depends on the accuracy of prediction and the error between the actual and predicted class labels of the problem. According to Kecman (2001), many scientific and engineering fields have recently applied artificial intelligence to predict common and serious problems. They seek AI methods because of the complexity of most of today's problems, which are hard to solve through traditional methods, or what is called hard computing.
The benefits of AI techniques are highlighted as follows (after [21]; Medsker 1996; Tu 1996; Benghanem 2012):
▪ The leverage AI techniques have over other modeling techniques is their ability to model complex, non-linear processes without assuming any form of relationship between input and output variables.
▪ As a developing and promising technology, AI has become extremely popular for prediction, diagnosis, monitoring, selection, forecasting, inspection, and identification in different fields.
▪ AI models are more accurate for prediction than empirical models based on linear or non-linear multiple regression or graphical techniques.
▪ AI has great potential for generating accurate analysis and results from large historical databases, the kind of data that many engineers may not consider valuable or relevant in conventional modeling and analysis processes.
▪ AI tools can analyze large quantities of data to establish patterns and characteristics in situations where the rules are not known, and in many cases can make sense of incomplete or noisy data.
▪ AI tools are cost effective. An ANN, for example, has the advantage of execution speed once the network has been trained; the ability to train the system with data sets, rather than having to write programs, may be more cost effective and more convenient when changes become necessary.
▪ AI tools can implicitly detect complex nonlinear relationships between independent and dependent variables.
▪ AI tools can be developed using multiple different training algorithms.
▪ AI tools tackle tedious tasks and can complete them faster than a human, with fewer errors and defects.

Like any other tool, AI techniques have their own limitations. An example is the ANN, which is often tagged as a black box that merely attempts to map a relationship between output and input variables based on a training data set. This raises concerns about the ability of the tool to generalize to situations that were not well represented in the data set (Lint et al., 2002). However, one proposed solution to the black box problem is to combine multiple AI paradigms into a hybrid solution (e.g., combining neural networks and fuzzy sets into neuro-fuzzy systems) or to integrate AI tools with more traditional solution techniques.
Chapter 3. The application of neural networks for predicting CO2 emissions from energy consumption
3.1 Characteristics of factors that influence CO2 emissions
One of the main pollutants of the atmosphere is carbon dioxide. The 20th century saw an increase in the concentration of CO2 in the atmosphere: its share has increased by almost 25% since the beginning of the century, and by 13% over the past 10 years. The release of CO2 into the environment is inextricably linked with the consumption and production of energy. Environmentalists warn that if the release of carbon dioxide into the atmosphere cannot be reduced, our planet faces a catastrophe associated with a temperature increase due to the so-called greenhouse effect.
The essence of this phenomenon is that ultraviolet solar radiation passes through the atmosphere with a
high content of CO2 and methane CH4 rather freely. Reflected from the surface, infrared rays are
delayed by an atmosphere with a high content of CO2, which leads to an increase in temperature, and
consequently, to climate change.
Anthropogenic sources of CO2 emissions to the atmosphere include the burning of fossil and non-fossil energy sources to obtain heat, generate electricity, and transport people and cargo. Some types of industrial activity, such as cement production and the utilization of gas through flaring, also lead to significant CO2 emissions.
With the onset of the industrial revolution in the middle of the 19th century, anthropogenic emissions of carbon dioxide into the atmosphere progressively increased, which disrupted the balance of the carbon cycle and increased the concentration of CO2.
The main sources of CO2 emissions in the United States are described below:
1) Electricity. Electricity is a significant source of energy in the United States and is used to power homes, businesses, and industry. In 2015 the combustion of fossil fuels to generate electricity was the largest single source of CO2 emissions in the nation, accounting for about 35 percent of total U.S. CO2 emissions and 29 percent of total U.S. greenhouse gas emissions. Different types of fossil fuel emit different amounts of CO2 when burned to generate electricity: to produce a given amount of electricity, burning coal produces more CO2 than oil or natural gas.
2) Transportation. The combustion of fossil fuels such as gasoline and diesel to transport people and
goods was the second largest source of CO2 emissions in 2015, accounting for about 32 percent of
total U.S. CO2 emissions and 26 percent of total U.S. greenhouse gas emissions. This category
includes transportation sources such as highway vehicles, air travel, marine transportation, and rail.
3) Industry. Many industrial processes emit CO2 through fossil fuel combustion. Several processes
also produce CO2 emissions through chemical reactions that do not involve combustion; for example,
the production and consumption of mineral products such as cement, the production of metals such as
iron and steel, and the production of chemicals. Fossil fuel combustion from various industrial
processes accounted for about 15 percent of total U.S. CO2 emissions and 12 percent of total U.S.
greenhouse gas emissions in 2015. Note that many industrial processes also use electricity and therefore indirectly cause the emissions associated with generating that electricity.
Energy consumption is viewed as the major source of greenhouse emissions [31]. Between 1970 and 2010, energy consumption in the Organization of the Petroleum Exporting Countries (OPEC) increased by 685%, while CO2 emissions from burning fossil fuels increased by 440% over the same period. Energy consumption and CO2 emissions of the OPEC countries have therefore drastically increased [32].
In 2015, the five largest emitting countries and the European Union, which together account for two
thirds of total global emissions, were: China (with a 29% share in the global total), the United States
(14%), the European Union (EU-28) (10%), India (7%), the Russian Federation (5%) and Japan
(3.5%). The 2015 changes within the group of 20 largest economies (G20), together accounting for
82% of total global emissions, varied widely, but, overall, the G20 saw a decrease of 0.5% in CO2
emissions in 2015. [33]
Global temperatures have continued to rise, making 2016 the hottest year on the historical record and
the third consecutive record-breaking year, scientists say. Of the 17 hottest years ever recorded, 16
have now occurred since 2000. The Earth's temperature has risen since record-keeping began in the
19th century. Warming began to accelerate around the 1980s. [34]
An accurate prediction of CO2 emissions can serve as a reference point for the OPEC secretariat to propagate the reorganization of economic development in member countries with a view to managing CO2 emissions. Evidence of the dangers of CO2 emissions can be used to convince member countries to embark on economic development that results in lower petroleum consumption and reduced CO2 emissions. In view of the economic implications of reducing CO2 emissions, the reduction of CO2 emissions in OPEC countries must be enforced with caution [33].
3.2 The proposed model for the CO2 emission estimation problem
The neural network structure used for the carbon emission estimation is a multi-layer feed-forward network. As explained before, the network consists of an input layer, one hidden layer, and an output layer. The input layer receives four inputs: global oil, natural gas, coal, and primary energy consumption. The hidden layer is nonlinear and consists of 5 neurons; the hidden units are fully connected to both the input and the output, and their activation function provides the network's nonlinearity. The optimal number of neurons in the hidden layer was selected through several trials. The output layer consists of one neuron with a linear activation function, which produces the corresponding carbon emission estimate. The network was trained using the back propagation (BP) algorithm. The developed ANN model is shown in Figure 3.1.
Figure 3.1 Developed neural network structure
As inputs, the variables of global oil, natural gas (NG), coal, and primary energy (PE) consumption were chosen for the CO2 emission estimation. The model was trained on data from 1965 to 2010 and tested on data from 2010 to 2015.
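To make the architecture concrete, the model just described can be written down in a few lines of code. The following is a minimal sketch in Python with the Keras library, not the author's original implementation; the array names X_train, y_train and X_test, the tanh hidden activation, the learning rate and the number of epochs are illustrative assumptions, since the text only fixes the 4-5-1 topology, a nonlinear hidden layer, a linear output neuron and BP training.

    # Sketch of the 4-5-1 feed-forward network described above (assumptions noted in the text).
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD

    # X_train: array of shape (n_years, 4) with oil, NG, coal and PE consumption per year
    # y_train: array of shape (n_years,) with the corresponding CO2 emissions
    model = Sequential([
        Dense(5, activation='tanh', input_shape=(4,)),  # nonlinear hidden layer, 5 neurons
        Dense(1, activation='linear')                   # one linear output neuron
    ])
    model.compile(optimizer=SGD(learning_rate=0.01), loss='mse')  # gradient descent = BP training
    history = model.fit(X_train, y_train, epochs=2000, verbose=0)
    y_hat = model.predict(X_test).ravel()               # carbon emission estimates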
The back propagation algorithm can be simply explained by the flow chart in Figure 3.2.
Figure 3.2 Back propagation flow chart
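The loop summarized by the flow chart can also be sketched explicitly. Below is an illustrative NumPy implementation of batch back propagation for the same 4-5-1 network, assuming tanh hidden units, a linear output and a squared-error loss; all variable names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 0.1, (4, 5)), np.zeros(5)  # input -> hidden weights
    W2, b2 = rng.normal(0, 0.1, (5, 1)), np.zeros(1)  # hidden -> output weights
    lr, n = 0.01, len(y_train)

    for epoch in range(2000):
        h = np.tanh(X_train @ W1 + b1)                   # forward pass: hidden activations
        y_hat = (h @ W2 + b2).ravel()                    # forward pass: linear output
        g_out = (y_hat - y_train)[:, None] / n           # error signal at the output
        dW2, db2 = h.T @ g_out, g_out.sum(axis=0)        # output-layer gradients
        g_hid = (g_out @ W2.T) * (1.0 - h ** 2)          # error propagated back (tanh derivative)
        dW1, db1 = X_train.T @ g_hid, g_hid.sum(axis=0)  # hidden-layer gradients
        W2 -= lr * dW2; b2 -= lr * db2                   # gradient-descent weight update
        W1 -= lr * dW1; b1 -= lr * db1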
Different validation criteria were used to quantify the error between the actual and estimated values, as shown in the following equations:
Manhattan distance:
MD = \sum_{i=1}^{n} |y_i - \hat{y}_i|    (3.1)
Euclidean distance:
ED = \sqrt{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}    (3.2)
Mean magnitude of relative error:
MMRE = \frac{1}{N} \sum_{i=1}^{N} \frac{|y_i - \hat{y}_i|}{y_i}    (3.3)
where y_i and \hat{y}_i are the actual and estimated values based on the proposed model, and N is the number of measurements used in the experiment.
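These three criteria are straightforward to compute. A minimal sketch in Python, assuming the actual and estimated values are held in NumPy arrays y and y_hat:

    import numpy as np

    def validation_metrics(y, y_hat):
        """Return MD, ED and MMRE between actual y and estimated y_hat (Eqs. 3.1-3.3)."""
        md = np.sum(np.abs(y - y_hat))          # Manhattan distance, Eq. (3.1)
        ed = np.sqrt(np.sum((y - y_hat) ** 2))  # Euclidean distance, Eq. (3.2)
        mmre = np.mean(np.abs(y - y_hat) / y)   # mean magnitude of relative error, Eq. (3.3)
        return md, ed, mmre

Applied to the training and testing predictions, such a function would produce the kind of entries reported in Table 3.1.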
Figure 3.3 Neural network convergence curve
The figure shows the standard error decreasing as the number of iterations grows. This means that the learning process of the neural network is effective and that the network can forecast with a small error.
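A convergence curve of this kind can be reproduced from the recorded training history. A sketch, assuming the hypothetical history object returned by model.fit in the earlier fragment:

    import matplotlib.pyplot as plt

    plt.plot(history.history['loss'])   # training error recorded at each epoch
    plt.xlabel('Iteration (epoch)')
    plt.ylabel('Error')
    plt.title('Neural network convergence curve')
    plt.show()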
Table 3.1 MD, ED and MMRE of the ANN model on the training and testing data for the carbon emission estimation

Model    | MD      | ED      | MMRE
Training | 61.1885 | 614.214 | 0.0078
Testing  | 125.602 | 606.65  | 0.0160
The ANN was trained by the back propagation learning algorithm. The results of the proposed model show that the ANN was capable of producing highly accurate estimates. This is clearly seen from the obtained results and from the relationship between the actual and estimated responses.
Table 3.2 Actual and estimated carbon dioxide emission, 2005-2015
Again, we can see that the ANN proved its ability to solve the carbon estimation problem from a given set of examples. In comparison with regression results, the neural network shows a greater forecast quality with a smaller standard error.
Conclusion
The author has performed research directed at: investigating artificial intelligence terminology and summarizing various papers and reports associated with artificial intelligence development and applications; reviewing the theoretical foundations of building and training neural networks; analyzing the current applications of neural networks in the oil and gas industry; and providing a solution for forecasting CO2 emissions from energy consumption, together with a conclusion about the effectiveness and prospects of this model for the industry. This work allows the author to draw the following conclusions:
1) The structure of the oil and gas industry has changed dramatically in recent years. New market realities with low oil prices force energy companies to decrease their costs and increase production efficiency in order to stay strong in such an environment. One of the right ways to do that is to implement new technologies in production, and the smartest and fastest-learning technologies today are based on artificial intelligence.
2) Climate pollution due to carbon dioxide (CO2) emissions from different fossil fuels is considered a great and important international challenge by many researchers.
3) Artificial intelligence is an area of great interest and significance in petroleum exploration and production. Over the years it has made an impact, and its application within the oil and gas industry has continued to grow.
4) The author's research has revealed that the first practical application of artificial neural networks came in the late 1950s, when Frank Rosenblatt and his colleagues demonstrated their ability to perform pattern recognition. A new, active stage of AI development began only after the 1980s, when computer power reached a significant level. In the 1990s, scientists made a breakthrough in this area by offering solutions built on the basis of neural networks. These developments quickly proved effective in solving a number of problems, ranging from analyzing the solvency of bank customers to predicting exchange rates and presidential election results.
5) Characteristic features of AI techniques include the ability to learn from examples; fault-tolerant handling of noisy and deficient data; the ability to deal with non-linear problems; and, once trained, the ability to predict and generalize at high speed.
6) Artificial neural networks, fuzzy logic and evolutionary algorithms are the most commonly used AI techniques today in various petroleum engineering applications: oil and gas reservoir simulation, production, drilling and completion optimization, drilling automation and process control.
7) The application of AI has many advantages, such as saving time, minimizing risk, saving cost, improving efficiency and solving many optimization problems; AI also has a great potential for generating accurate analysis and results from large historical databases.
8) The proposed network architecture is able to produce very good estimation results in both the training and testing cases, with only small differences.
9) Predicting CO2 is significant for the adaptation of climate change policies, as well as for offering a reference point for using alternative energy sources with a view to reducing CO2 emissions.
References
1. Hagan, M.T., Demuth, H.B., Beale, M. Neural Network Design. PWS Publishing Company, 1996.
2. Grigoriev, L.I., Stepankina, O.A. Artificial Intelligence Systems. 1998. pp. 5-15.
3. Turing, A.M. Computing machinery and intelligence // Mind, 1950, 59, pp. 433-460.
4. Korsakov, S.N. (1832). An essay on a new method of research by means of machines for comparing ideas / Translated from French by A.V. Syromyatin // Electronic Culture: Translation in the Socio-Cultural and Educational Environment / Ed. by A.Yu. Alekseev, S.Yu. Karpuk. Moscow: MGUKI, 2009.
5. Korsakov, S.N. An outline of a new method of research with the aid of machines comparing ideas / Translated from French, ed. by A.S. Mikhaylov. Moscow: MIFI, 2009. 44 p.
6. Pospelov, D.A. Formation of computer science in Russia // Essays on the History of Informatics in Russia / Comp. by D.A. Pospelov, Ya.I. Fet. Novosibirsk: Scientific Publishing Center UIGGM, 1998. pp. 7-44.
7. Gavrilova, T.A., Khoroshevsky, V.F. Knowledge Bases of Intelligent Systems. St. Petersburg: Piter, 2000. 384 p.
8. Artificial Intelligence. History and overview of the market [Electronic resource]. URL: http://m.m2mrussianews.ru/material/iskusstvennyj_intellekt_istoriya_razvitiya_i_obzor_rynka (accessed 15.09.2016).
9. History of neural networks [Electronic resource]. URL: http://neuronus.com/history/5-istoriyanejronnykh-setej.html (accessed 10.09.2016).
10. Luhovtsev, I.A. The development of artificial intelligence in the XXI century // Youth Science and Technology Gazette, 2015. URL: http://sntbul.bmstu.ru/doc/759750.html
11. Averkin, A.N., Gaaze-Rapoport, M.G., Pospelov, D.A. Glossary of Artificial Intelligence. Moscow: Radio i Svyaz, 1992. 256 p.
12. McCarthy, J. What is Artificial Intelligence? 2007 [Electronic resource]. URL: http://www-formal.stanford.edu/jmc/whatisai/whatisai.html (accessed 05.09.2016).
13. Development of Artificial Intelligence: Towards Supermind [Electronic resource]. URL: http://lpgenerator.ru/blog/2016/05/20/razvitie-iskusstvennogo-intellekta-na-puti-ksverhrazumu/#ixzz4KK7JC2Rw (accessed 13.09.2016).
14. Callan, R. Basic Concepts of Neural Networks. Moscow: Williams Publishing House, 2001. 287 p.
15. Anderson, J.A. An Introduction to Neural Networks. MIT Press, 1995. 672 p.
16. Application of Artificial Intelligence Methods in Drilling System Design and Operations: A Review of the State of the Art // Journal of Artificial Intelligence and Soft Computing Research, May 2015.
17. Smolin, D.V. Introduction to Artificial Intelligence: Lecture Notes. Moscow: FIZMATLIT. 208 p.
18. Hunt, E. Artificial Intelligence / Ed. by V.L. Stefanyuk. Moscow: Mir, 1978. 558 p.
19. Bello, O., Holzmann, J., Yaqoob, T., Teodoriu, C. Application of artificial intelligence methods in drilling system design and operations: a review of the state of the art.
20. Pearlmutter, B.A. Dynamic Recurrent Neural Networks. December 1990. pp. 24-28.
21. Mellit, A. Artificial intelligence technique for modelling and forecasting of solar radiation data: a review // International Journal of Artificial Intelligence and Soft Computing, 2008, 1(1), pp. 52-76.
22. Fausett, L. Fundamentals of Neural Networks: Architectures, Algorithms, and Applications. Prentice Hall PTR, 1994. 300 p.
23. Wasserman, P. Neural Computing: Theory and Practice / Translated by Yu. Zuev, V.A. Tochenov. 1992. 35 p.
24. Ramgulam, A. Utilization of Artificial Neural Networks in the Optimization of History Matching // Petroleum and Natural Gas Engineering, 2006. pp. 21-25.
25. Gheyas, I.A., Smith, L.S. A Neural Network Approach to Time Series Forecasting // Proceedings of the World Congress on Engineering, London, 2009, Vol. 2, pp. 1292-1296. [Electronic resource] URL: www.iaeng.org
26. Smith, M. Neural Networks for Statistical Modeling. New York: Van Nostrand Reinhold, 1993.
27. Dayhoff, J.E. Neural Network Architectures. 1991. 259 p.
28. Chauvin, Y. A back-propagation algorithm with optimal use of hidden units. 1989. 34 p.
29. Wong, P., Aminzadeh, F., Nikravesh, M. Soft Computing for Reservoir Characterization and Modeling. 2002. 35 p.
30. Mohaghegh, S.D., Khazaeni, Y. Application of AI in the upstream oil and gas industry // International Journal of Computer Research, 2011, 18(3/4), pp. 231-267.
31. Sari, R., Soytas, U. Are global warming and economic growth compatible? Evidence from five OPEC countries // Applied Energy, 2009, 86(10), pp. 1887-1893.
32. Adetutu, M.O. Energy efficiency and capital-energy substitutability: Evidence from four OPEC countries // Applied Energy, 2014, 119, pp. 363-370.
33. Olivier, J.G.J., Janssens-Maenhout, G., Muntean, M., Peters, J.A.H.W. Trends in Global CO2 Emissions: 2016 Report. The Hague: PBL Netherlands Environmental Assessment Agency; Ispra: European Commission, Joint Research Centre, 2016. pp. 5-6.
34. Patel, J.K. How 2016 Became Earth's Hottest Year on Record // The New York Times, 2017.