University of Rennes 1
Accreditation to Direct Research
Field: Computer Science
— Thesis —
Virtual Reality
— in virtuo autonomy —
Jacques Tisseau
Defense: December 6, 2001
D. Herman, Chairperson
F. Bourdon, Reporter
P. Coiffet, Reporter
D. Thalmann, Reporter
J.F. Abgrall, Examiner
B. Arnaldi, Examiner
Contents

Contents ..... 3
List of Figures ..... 5

1 Introduction ..... 7
1.1 An Epistemological Obstacle ..... 7
1.2 An Original Approach ..... 8

2 Concepts ..... 11
2.1 Introduction ..... 11
2.2 Historical Context ..... 11
2.2.1 Constructing Images ..... 12
2.2.2 Animating Images ..... 13
2.2.3 Image Appropriation ..... 15
2.2.4 From Real Images to Virtual Objects ..... 16
2.3 The Reality of Philosophers ..... 17
2.3.1 Sensory/Intelligible Duality ..... 18
2.3.2 Virtual Versus Actual ..... 19
2.3.3 Philosophy and Virtual Reality ..... 21
2.4 The Virtual of Physicists ..... 22
2.4.1 Virtual Images ..... 22
2.4.2 Image Perception ..... 25
2.4.3 Physics and Virtual Reality ..... 26
2.5 Principle of Autonomy ..... 26
2.5.1 Operating Models ..... 27
2.5.2 User Modeling ..... 28
2.5.3 Autonomizing Models ..... 29
2.5.4 Autonomy and Virtual Reality ..... 31
2.6 Conclusion ..... 33

3 Models ..... 35
3.1 Introduction ..... 35
3.2 Autonomous Entities ..... 35
3.2.1 Multi-Agent Approach ..... 36
3.2.2 Participatory Multi-Agent Simulation ..... 37
3.2.3 Behavioral Modeling ..... 39
3.3 Collective Auto-Organization ..... 40
3.3.1 Simulations in Biology ..... 40
3.3.2 Coagulation Example ..... 43
3.3.3 In Vivo, In Vitro, In Virtuo ..... 47
3.4 Individual Perception ..... 49
3.4.1 Sensation and Perception ..... 50
3.4.2 Active Perception ..... 54
3.4.3 The Shepherd, His Dog and His Herd ..... 55
3.5 Conclusion ..... 57

4 Tools ..... 61
4.1 Introduction ..... 61
4.2 The oRis Language ..... 61
4.2.1 General View ..... 62
4.2.2 Objects and Agents ..... 64
4.2.3 Interactions between Agents ..... 68
4.3 The oRis Simulator ..... 70
4.3.1 Execution Flows ..... 71
4.3.2 Activation Procedures ..... 72
4.3.3 The Dynamicity of a Multi-Agent System ..... 75
4.4 Applications ..... 77
4.4.1 Participatory Multi-Agent Simulations ..... 77
4.4.2 Application Fields ..... 79
4.4.3 The ARéVi Platform ..... 80
4.5 Conclusion ..... 81

5 Conclusion ..... 83
5.1 Summary ..... 83
5.2 Perspectives ..... 85

Bibliography ..... 87
List of Figures

1.1 Pinocchio Metaphor and the Autonomization of Models ..... 9

2.1 Reality and the Main Philosophical Doctrines ..... 18
2.2 The Four Modes of Existence ..... 20
2.3 The Three Mediations of The Real ..... 22
2.4 Optical System ..... 23
2.5 Real Image (a) and Virtual Image (b) ..... 23
2.6 Virtual Image (a), Real Image (b), and Virtual Object (c) ..... 24
2.7 Illusions of Rectilinear Propagation and Opposite Return ..... 25
2.8 The Three Mediations of Models in Virtual Reality ..... 27
2.9 The Different Levels of Interaction in Virtual Reality ..... 28
2.10 User Status in Scientific Simulation (a), in Interactive Simulation (b) and in Virtual Reality (c) ..... 30
2.11 Autonomy and Virtual Reality ..... 32

3.1 Cell-Agent Design ..... 44
3.2 Interactions of the Coagulation Model ..... 45
3.3 Evolution of the Multi-Agent Simulation of Coagulation ..... 46
3.4 Main Results of the Multi-Agent Coagulation Model ..... 47
3.5 Modeling and Understanding Phenomena ..... 50
3.6 Example of a Cognitive Map ..... 51
3.7 Definition of a Fuzzy Cognitive Map ..... 52
3.8 Flight Behavior ..... 53
3.9 Flight Speed ..... 53
3.10 Perception of Self and Others ..... 56
3.11 Fear and Socialization ..... 57
3.12 The Sheepdog and Its Sheep ..... 58

4.1 Execution of a Simple Program in oRis ..... 63
4.2 oRis Graphics Environment ..... 64
4.3 Localization and Perception in 2D ..... 66
4.4 Splitting an Execution Flow in oRis ..... 71
4.5 Structure of the oRis Virtual Machine ..... 72
4.6 Modifying an Instance ..... 76
4.7 An ARéVi Application: Training Platform for French Emergency Services ..... 81
4.8 Metaphor of Ali Baba and Programming Paradigms ..... 82
Chapter 1
Introduction
Created from two words with opposing meanings, the expression virtual realities is absurd.
If someone were to talk to you about the color black-white, you would probably think that,
because this person was combining a word with its opposite, he was confused. Certainly
though, in all accuracy, virtual and reality are not opposites. Virtual, from Latin virtus
(virtue, or force), is that which is in effect in the real, that which in itself has all the basic
conditions for its realization; but then, if something contains the conditions of its realization
in itself, what can truly be a reality? When looked at from this perspective, the expression is
inadequate. [Cadoz 94a]
The English expression virtual reality was first coined in July 1989 during a professional trade show¹ by Jaron Lanier, founder and former CEO of the VPL
Research company specializing in immersion devices. He created this expression as part of
his company’s marketing and advertising strategy, but never gave it a precise definition.
1.1 An Epistemological Obstacle
The emergence of the notion of virtual reality illustrates the vitality of the interdisciplinary exchanges between computer graphics, computer-aided design, simulation,
teleoperations, audiovisual, etc. However, as the philosopher Gaston Bachelard points
out in his epistemological study on the Formation of the Scientific Mind [Bachelard 38],
any advances that are made in science and technology must face numerous epistemological
obstacles. One of the obstacles that virtual reality will have to overcome is a verbal obstacle (a false explanation obtained from an explanatory word): the name itself is meaningless
a priori, yet at the same time, it refers to the intuitive notion of reality, one of the first
notions of the human mind.
According to the BBC English Dictionary (HarperCollins Publishers, 1992), virtual means that something is so nearly true that for most purposes it can be regarded as true; it also means that something has all the effects and consequences of a particular
thing, but is not officially recognized as being that thing. Therefore, virtual reality is a
quasi-reality that looks and acts like reality, but is not reality: it is an ersatz, or substitute
for reality. In French, the literal translation of the English expression is réalité virtuelle
which, as pointed out in the quote at the beginning of this introduction, is an absurd
and inadequate expression. The definition according to the Petit Robert (Editions Le Robert, 1992) is: Virtuel : [qui] a en soi toutes les conditions essentielles à sa réalisation.

¹ Texpo'89 in San Francisco (USA).
Therefore, réalité virtuelle would be a reality that has in itself all of the basic conditions for its realization, which is the very least one can expect of reality! So, from English to
French, the term virtual reality has become ambiguous. It falls under a rhetorical process called oxymoron, which combines two words that seem incompatible or contradictory
(expressions like living dead, vaguely clear and eloquent silence are all oxymora). This
type of construction creates an original, catchy expression, which is geared more towards
the media than science. Other expressions such as cyber-space [Gibson 84], artificial reality [Krueger 83], virtual environment [Ellis 91] and virtual world [Holloway 92], were also
coined, but a quick Web search² shows that the oxymoronic expression virtual reality remains widely
used.
The term’s ambiguity is confusing and creates a real epistemological obstacle to the
scientific development of this new discipline. It is up to the scientists and professionals in
this field to create a clearer epistemological definition in order to remove any ambiguities
and establish its status as a scientific discipline (concepts, models, tools), particularly with regard to areas such as modeling, simulation and animation.
1.2 An Original Approach
This document proposes an original contribution to the clarification of this new disciplinary field of virtual reality.
The word autonomy best characterizes our approach and is the common thread that
links all of our research in virtual reality. Every day, we face a reality that is resistant, contradictory, and that follows its own rules, not ours; in a word, a reality that is autonomous. Therefore, we think that virtual reality will become independent of its origins if it autonomizes the digital models that it manipulates: populating with autonomous entities the realistic universes, already created through computer-generated digital images, that we can see, hear and touch. So, like Pinocchio, the famous
Italian puppet, models that become autonomous will take part in creating their virtual
worlds (Pinocchio Metaphor³, Figure 1.1). A user, partially freed from controlling his
model, will also become autonomous and will take part in this virtual reality as a spectator
(the user observes the model: Figure 1.1a), actor (the user tries out the model: Figure
1.1b) and creator (the user modifies the model to adapt it to his needs: Figure 1.1c).
This document has three main focuses: the concepts that guide our work and are the
basis of the principle of autonomy (Chapter 2), the models that we developed to observe
this principle (Chapter 3), and the tools that we used to implement them (Chapter 4).
In Chapter 2, analyzing the historical context, as well as exploring the real in philosophy
and the virtual in physics, brings us to a clearer definition of virtual reality that centers
around the principle of autonomy of digital models.

Figure 1.1: Pinocchio Metaphor and the Autonomization of Models (a. user-spectator, b. user-actor, c. user-creator)

² A simple search using the search engine www.google.com gives the following results: cyber(-)space(s) ≈ 25,540 hits, virtual reality(s) ≈ 19,920, virtual world(s) ≈ 15,030, virtual environment(s) ≈ 3,670, artificial reality(s) ≈ 185. In English the ranking is the same.
³ This metaphor was inspired by Michel Guivarc'h, official representative of the Communauté Urbaine de Brest (Brest Urban Community).
In Chapter 3, we recommend a multi-agent approach for creating virtual reality applications based on autonomous entities. We use significant examples to study the principle
of autonomy in terms of collective auto-organization (example of blood coagulation) and
individual perception (example of a sheepdog).
Chapter 4 describes our oRis language, a dynamically interpreted programming language of concurrent active objects with instance-level granularity, and its associated simulator: the tool that was used to build the autonomous models in the applications that we created.
Finally, in Chapter 5, the conclusion, we summarize our research and outline directions for future work.
Chapter 2
Concepts
2.1 Introduction
In this chapter, we present the main epistemological concepts that guide our research
work in virtual reality.
First, we will review the historical conditions surrounding the emergence of virtual
reality. We will establish that in the early 90’s — when the term virtual reality appeared
— the sensory rendering of computer-generated images had become so realistic that it
became necessary to focus more on the virtual objects themselves than on their images.
Secondly, an overview of the philosophical notion of reality will allow us to define the
conditions that we feel are necessary for the illusion of the real; in other words, perception, experimentation and modification of the real.
Next, we will use the notion of the virtual in physics to explain our conception of a reality that is virtual. Several simple, yet significant, examples borrowed from optics demonstrate the importance of underlying models in understanding, explaining and perceiving phenomena. The virtual will then clearly emerge as a construction in the universe of models.
Finally, by analyzing the user’s status while he is using these models, we will postulate
an autonomy principle that will guide us in designing our models (Chapter 3) and creating
our tools (Chapter 4).
2.2 Historical Context
Epistemology is not particularly concerned with establishing a chronology of scientific discovery; it concentrates instead on updating the conceptual structure of a theory or work, which is not always evident, even for its creator. Epistemology must try and figure out what connects concepts and plays a vital role in the architecture of the structure: recognizing what is an obstacle and what remains unclear and unsettled. To this effect, it should focus on showing exactly how the state of the science that it is studying is a response, critique, or maturation of previous states. [Granger 86]
Historically, the notion of virtual reality emerged at the intersection of different fields
such as computer graphics, computer-aided design, simulation, teleoperations, audiovisual, etc. Consequently, virtual reality was developed within a multidisciplinary environment where computer graphics played an influential role because, ever since Sutherland
created the first graphic interface (Sketchpad [Sutherland 63]), virtual reality has endeavored to make the computer-generated digital images that it displays on computer screens
more and more realistic. Therefore, we will focus on the major developments in computer
graphics before 1990 — during which the expression virtual reality became popular — by
taking a look at image construction, animation and user appropriation.
2.2.1 Constructing Images
From 2D Images to 3D Images
Computer graphics first had to tackle 2D problems such as displaying line segments
[Bresenham 65], eliminating hidden lines [Jones 71], determining segment intersections
for coloring polygons [Bentley 79], smoothing techniques [Pitteway 80] and triangulating
polygons [Hertel 83].
Research departments in the aeronautical and automobile industries quickly understood the benefit of computer-graphic techniques and integrated them as support tools in
designing their products, which led to the creation of computer-aided design (CAD). It
quickly evolved from its initial wireframe representations, adding parametric curves and surfaces to its systems [Bézier 77]. These patch surfaces were easy to manipulate
interactively, and allowed the designer to create free shapes. However, when the object
shape had to follow strict physical constraints, it was better to work with implicit surfaces,
which are the isosurfaces of a pre-defined space scalar field [Blinn 82]. By triangulating
these surfaces, physics problems involving plates and shells could then be solved by numerical simulation (finite element methods [Zinkiewicz 71]), as well as a realistic
display of the results [Sheng 92].
Still, surface modeling proved to be inadequate in ensuring the topological consistency
of modeled objects. Therefore, volume modeling was developed from two types of representation: boundary representation and volume representation. Boundary representation
models an object through a polyhedral surface made up of vertices, faces and edges. This
type of representation requires that adjacency relationships between faces, edges and vertices, in other words, the model’s topology, be taken into account. The topology can be
maintained through the edges (winged-edges [Baumgart 75], n-G-maps [Lienhardt 89]) or
in a face adjacency graph (Face Adjacency Graph [Ansaldi 85]). Volume representation
defines an object as a combination of primitive volumes with set operators (CSG: Constructive Solid Geometry [Requicha 80]). The object’s topology is defined by a tree (CSG
tree) whose leaves are basic solid shapes and whose nodes are set operations, translations,
rotations or other deformations (torsion [Barr 84], free-form deformations [Sederberg 86])
that are applied to underlying nodes. The elimination of hidden faces then replaced the
elimination of hidden lines and split into two large categories [Sutherland 74]: visibility
was either processed directly in the image-space, most often with a depth-buffer (z-buffer
[Catmull 74]), or in the object-space by partitioning this space (BSP: Binary Space Partitioning [Fuchs 80], quadtree/octree [Samet 84]).
Thus, wire-frame images became surface images and then became volume images
through the use of geometric and topological models that looked more and more like
real solid objects.
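The depth-buffer approach mentioned above is simple enough to sketch. The following minimal Python illustration (ours, not Catmull's original algorithm) resolves visibility per pixel by keeping the surface with the smallest depth; the `depth_fn` callbacks are hypothetical stand-ins for real triangle rasterization.

```python
def render_depth_buffer(width, height, surfaces):
    """Resolve visibility per pixel with a depth buffer (z-buffer):
    at each pixel, the surface with the smallest depth wins.

    `surfaces` is a list of (color, depth_fn) pairs, where depth_fn(x, y)
    returns the depth of the surface at pixel (x, y), or None when the
    surface does not cover that pixel (a stand-in for rasterization)."""
    zbuf = [[float("inf")] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for color, depth_fn in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_fn(x, y)
                if z is not None and z < zbuf[y][x]:  # nearer than stored depth
                    zbuf[y][x] = z
                    image[y][x] = color
    return image

# a "far" blue surface covering everything, partially hidden by a "near" red one
far = ("blue", lambda x, y: 2.0)
near = ("red", lambda x, y: 1.0 if x < 2 else None)
image = render_depth_buffer(3, 1, [far, near])
```

Note that, unlike hidden-line elimination, the result does not depend on the order in which surfaces are submitted.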
From Geometric Image to Photometric Image
The visual realism of objects needed more than just precise shapes: they needed
color, textures [Blinn 76] and shadows [Williams 78]. Therefore, computer graphics had
to figure out how to use light sources to illuminate objects. Light needed to be simulated
from its emission to its absorption at the point of observation, while taking into account
the reflective surfaces of the scene being illuminated. The first illumination models were empirical [Bouknight 70] and used interpolations of colors [Gouraud 71] or
surface normals [Phong 75]. Then came the methods that were based on the laws of
the physics of reflectivity [Cook 81]. Scenes were illuminated indirectly by ray tracing [Whitted 80], or through a radiosity calculation [Cohen 85]; the two methods could also
be combined [Sillion 89]. From the observer’s perspective, in order to improve the quality
of visual rendering the effects of the atmosphere on light [Klassen 87], as well as the main
characteristics of cameras [Kolb 95], including their defects [Thomas 86], needed to be
taken into account.
Hence, from representational art, computer-generated images progressively became
more realistic by integrating laws of optics.
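By way of illustration, here is a minimal sketch (ours, not taken from the cited papers) of such an empirical local illumination computation, combining a Lambert diffuse term with a specular term in the spirit of [Phong 75]; the coefficient names `kd`, `ks` and `shininess` are conventional assumptions.

```python
def phong_intensity(normal, to_light, to_viewer, kd=0.7, ks=0.3, shininess=16):
    """Empirical local illumination: a diffuse term proportional to
    cos(theta) and a specular term proportional to cos(alpha)^n.
    All direction vectors are assumed to be unit length."""
    def dot(a, b):
        return sum(u * v for u, v in zip(a, b))
    ndotl = dot(normal, to_light)
    diffuse = max(0.0, ndotl)
    # mirror reflection of the light direction about the surface normal
    reflected = tuple(2.0 * ndotl * n - l for n, l in zip(normal, to_light))
    specular = max(0.0, dot(reflected, to_viewer)) ** shininess
    return kd * diffuse + ks * specular
```

With light and viewer aligned with the normal the intensity peaks; at grazing incidence it drops to zero, which is what gives shaded surfaces their perceived relief.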
2.2.2 Animating Images
From Fixed Image to Animated Image
In the early days of cinema, image sequences were generated by hand to produce cartoons. When digital images arrived on the scene, these sequences could be generated automatically. Initially, a sequence of digital animation was produced
by generating in-between images obtained by interpolating the key images created by
the animator [Burtnyk 76]. Interpolation, usually the cubic polynomial type (spline)
[Kochanek 84], concerns key positions of the displayed image [Wolberg 90] or the parameters of the 3D model of the represented object [Steketee 85]. In direct kinematics, time
parameters are taken into account during interpolation. But, with inverse kinematics,
a spacetime path for the components of a kinematic chain can be automatically reconstructed [Girard 85] by specifying the extreme positions of the chain and evolution constraints.
However, these techniques quickly proved to be limited in reproducing certain complex
movements, in particular, human movements [Zeltzer 82]. Therefore, capturing real movements from human subjects equipped with electronically-trackable sensors [Calvert 82] or
optical sensors (cameras + reflectors) [Ginsberg 83] emerged as an alternative.
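The key-frame in-betweening described above can be sketched with cubic Hermite interpolation; tangents are estimated here Catmull-Rom style, a special case of the tension/continuity/bias splines of [Kochanek 84]. The function names are ours, and uniformly spaced keys are assumed for the tangent estimate.

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between values p0 and p1 with
    tangents m0 and m1, for t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
            + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1)

def inbetween(keys, t):
    """In-between value at time t from (time, value) key frames created
    by the animator; tangents are centered differences of neighboring
    key values (assuming uniformly spaced keys)."""
    times = [time for time, _ in keys]
    i = max(j for j in range(len(keys) - 1) if times[j] <= t)
    (t0, p0), (t1, p1) = keys[i], keys[i + 1]
    prev_p = keys[i - 1][1] if i > 0 else p0
    next_p = keys[i + 2][1] if i + 2 < len(keys) else p1
    m0, m1 = (p1 - prev_p) / 2.0, (next_p - p0) / 2.0
    return hermite(p0, p1, m0, m1, (t - t0) / (t1 - t0))
```

The curve passes exactly through the animator's key values while staying smooth in between, which is what made splines the standard tool for in-betweening.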
Nevertheless, all of these techniques only reproduced effects without any a priori
knowledge of their causes. Methods based on dynamic models then operate on causes in
order to deduce effects. Usually, this approach leads to models governed by systems of differential equations with boundary conditions, which then have to be solved in order to make explicit the solution they implicitly define. Different numerical solution methods were then adapted to animation: direct methods of successive approximation (Runge-Kutta type) worked best for poly-articulated rigid systems [Arnaldi 89], structural discretization of objects by finite elements was used for deformable systems [Gourret 89],
and structural and physical discretization of systems by mass-spring networks was used to
model a large variety of systems [Luciani 85]. Overall problems of elastic [Terzopoulos 87]
or plastic deformation [Terzopoulos 88] were dealt with as well as collision detection between solids [Moore 88]. Particular attention was paid to animating human characters,
seen as articulated skeletons and animated by inverse dynamics [Isaacs 87]. In fact, animating virtual actors includes all of these animation techniques [Badler 93] because it
involves skeletons as well as shapes [Chadwick 89], walking [Boulic 90] as well as gripping
[Magnenat-Thalmann 88], facial expressions [Platt 81] as well as hair [Rosenblum 91], and
even clothes [Weil 86].
Through kinematics and dynamics, animation sequences could be automatically generated so that their movements were closer to real behavior.
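As an illustration of such methods, here is a minimal sketch (ours, not a formulation from the cited works) of a classical 4th-order Runge-Kutta integrator applied to a single damped mass-spring element, the building block of mass-spring networks.

```python
def rk4_step(state, h, deriv):
    """One classical 4th-order Runge-Kutta step for y' = deriv(y)."""
    def shift(y, k, s):
        return tuple(a + s * b for a, b in zip(y, k))
    k1 = deriv(state)
    k2 = deriv(shift(state, k1, h / 2.0))
    k3 = deriv(shift(state, k2, h / 2.0))
    k4 = deriv(shift(state, k3, h))
    return tuple(y + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for y, a, b, c, d in zip(state, k1, k2, k3, k4))

def damped_spring(k=10.0, m=1.0, c=0.5):
    """Damped mass-spring: m x'' = -k x - c x', written as the
    first-order system (x, v)' = (v, (-k x - c v) / m)."""
    return lambda s: (s[1], (-k * s[0] - c * s[1]) / m)

# release the mass from x = 1 at rest and integrate for 10 simulated seconds
state = (1.0, 0.0)
f = damped_spring()
for _ in range(1000):
    state = rk4_step(state, 0.01, f)
```

After ten seconds the damping has dissipated most of the initial elastic energy: the cause (the force law) is integrated, and the realistic oscillatory motion is the deduced effect.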
From The Pre-Calculated Image to The Real-Time Image
At the beginning of the 20th century, the first flight simulators had instruments but no visual display; computer-generated images introduced vision, first for simulating night flights (points and segments in black and white), then daytime flights (colored surfaces with the elimination of hidden faces) [Schumacker 69]. However, training simulators
required response times that corresponded to the simulated device in order for pilots to
learn its reactive behavior. Realistic rendering methods based on the laws of physics
resulted in calculation times in flight simulators that were often too long compared with
actual response times. Therefore, during the 60's and 70's, special equipment such as real-time display devices¹ was created to accelerate the different rendering processes.
These peripheral devices, which generated real-time computer-generated images, were
slowly replaced by graphics stations that would both generate and manage data structures on a general-use processor and render it on a specialized graphics processor (SGI
4D/240 GTX [Akeley 89]). At the same time, researchers sought to reduce the number of faces to be displayed through processes carried out before graphic rendering [Clark 76]. On the one hand, the underlying data structures make it possible
to identify, from the observer’s perspective, objects located outside of the observation
field (view-frustum culling), poorly directed faces (backface culling) and concealed objects
(occlusion culling) so that these objects are not transmitted to the rendering engine. On
the other hand, the farther away that an object is located from an observer, the more
its geometry can be simplified while maintaining an appearance, or else a topology, that
is satisfactory for the observer. This manual or automatic mesh simplification, which generates discrete or continuous levels of detail, proved to be particularly effective for the digital terrain models that are widely used in flight simulators [Cosman 81].
Thus, thanks to optimizations, calculation devices and better adapted hardware, images can be computed in real-time.
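Two of the pre-rendering processes mentioned above can be sketched in a few lines; the function names and threshold values below are ours, for illustration only.

```python
def is_backfacing(face_normal, view_dir):
    """Backface culling: a face whose outward normal points away from
    the viewer (positive dot product with the viewing direction, taken
    from the eye toward the face) cannot be seen and is not sent to
    the rendering engine."""
    return sum(n * v for n, v in zip(face_normal, view_dir)) > 0.0

def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Discrete levels of detail: the farther the object is from the
    observer, the coarser the mesh that is selected (level 0 is the
    finest)."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)
```

Both tests are far cheaper than rasterizing the faces they discard, which is what makes such pre-processing worthwhile in real-time rendering.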
¹ The Evans and Sutherland company (www.es.com) created the first commercial display devices, connected to a PDP-11 computer. For several million dollars, they could display several thousand polygons per second. Today, a GeForce 3 PC graphics card (www.nvidia.com) reaches performances in the order of several million polygons per second for several thousand French francs.
2.2.3
Image Appropriation
From Looking at an Image to Living an Image
Calculating an image in real-time means that a user can navigate within the virtual
world that it represents [Brooks 86a], but it was undoubtedly artistic experiments and
teleoperations that led users to start appropriating images.
In the 70’s, interactive digital art allowed users to experiment with situations where
digital images evolved through the free form movements of artists who observed, in
real-time and on the big-screen, how their movements influenced images (VideoPlace
[Krueger 77]). With the work of ACROE², computer-aided artistic expression took a new step with multi-sensory synthesis driven by force-feedback gestural control
[Cadoz 84]. The peripherals, called retroactive gestural transducers, allow users, for example, to feel the friction of a real bow on a virtual violin string, to hear the sound made
by the vibration of the string and to observe the string vibrate on the screen. A physical modeling system of mass-spring virtual objects drives these peripherals (CORDIS-ANIMA
[Cadoz 93]), and allows users to have a completely original artistic experience.
For its part, teleoperation was concerned very early with ways to remotely control and pilot robots located in dangerous environments (nuclear, underwater, space) in order
to avoid putting human operators at risk. In particular, NASA developed a liquid crystal
head mounted display (VIVED: Virtual Visual Environment Display [Fisher 86]) coupled
with an electronically-trackable sensor [Raab 79] that allowed users to move around in a
3D image observed in stereovision. This was a real improvement in comparison to the first
head mounted display, which had cathode ray tubes, was heavier and was held on top of
the head by a mechanical arm fixed to the ceiling [Sutherland 68]. This new display was
very quickly combined with a dataglove [Zimmerman 87] and a spatialized sound system
based on the laws of acoustics [Wenzel 88]. These new functions enhanced the sensations
of tactile feedback systems [Bliss 70] and the force feedback systems (GROPE [Batter 71,
Brooks 90]) that were already familiar to teleoperators. The robotics experts were using
expressions such as telesymbiosis [Vertut 85], telepresence [Sheridan 87] and tele-existence
[Tachi 89] to describe the feeling of immersion that operators can have; the feeling that
they are working directly on a robot, for example, even though they are manipulating it
remotely. As it has since the beginning of teleoperations, adding an operator to the loop
of a robotized system poses the problem of how to include human factors to improve the
ergonomics of workstations and their associated peripherals [Smith 89].
By becoming multimodal, the image allows users to see, hear, touch and move or
deform virtual objects with sensations that are almost real.
² ACROE: Association for the Creation and Research on Expression Tools, Grenoble (www-acroe.imag.com).
From Isolated Image to Shared Image
During the 70’s, ARPANET³, the first long-distance network based on TCP/IP⁴ [Cerf 74], was developed. Given these new possibilities of exchanging and sharing information, the American Defense Department decided to interconnect simulators (tanks,
planes, etc.) through the ARPANET network in order to simulate battle fields. So,
the SIMNET (simulator networking) project, which took place between 1983 and 1990,
tested group strategies that were freely executed by dozens of users on the network, and
not controlled by pre-established scenarios [Miller 95]. The communication protocols established for SIMNET [Pope 87] introduced movement prediction models (dead-reckoning) in order to avoid overloading the network with too many synchronizations, while ensuring time coherence between simulators. These protocols were then consolidated and standardized for the needs of the
distributed interactive simulation (DIS: Distributed Interactive Simulation [IEEE 93]).
Thus, the computer network and its protocols enabled several users to share images
and then through these images to share ideas and experiences.
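The dead-reckoning mechanism can be sketched as follows; the function names and the error threshold are ours, not those of the SIMNET or DIS protocols.

```python
def extrapolate(position, velocity, dt):
    """Every simulator extrapolates the last known state of remote
    entities instead of waiting for a network update at every frame."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

def needs_update(true_pos, predicted_pos, threshold=1.0):
    """The owning simulator also runs the prediction the others are
    making, and sends a state update only when the prediction error
    exceeds an agreed threshold, bounding both the network traffic
    and the divergence between simulators."""
    error_sq = sum((a - b) ** 2 for a, b in zip(true_pos, predicted_pos))
    return error_sq > threshold ** 2
```

The threshold trades bandwidth against coherence: a larger value means fewer messages but a larger tolerated discrepancy between the simulators' views of an entity.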
2.2.4 From Real Images to Virtual Objects
Interdisciplinarity
This brief overview of the historical context from which the concept of virtual reality
emerged illustrates the vitality of computer graphics, as also shown by events such as Siggraph⁵ in the United States and Eurographics⁶ or Imagina⁷ in Europe. This new
expression and this new field of investigation, which is based upon computer graphics,
simulation, computer-aided design, teleoperations, audiovisual, telecommunications, etc.,
is being formed within an interdisciplinary melting pot.
Virtual reality thus seems like a catch-all field within the engineering sciences. It
manipulates images that are interactive, multimodal (3D, sound, tactile, kinesthetic, proprioceptive), realistic, animated in real-time, and shared on computer networks. In this
sense, it is a natural evolution of computer graphics. During the last ten years, a special effort has been made to integrate all of the characteristics of these images within the
same system8 , and to optimize and improve all of the techniques needed for these systems.
Virtual reality is therefore based on interaction in real time with virtual objects and the
feeling of immersion in virtual worlds: it is like a multisensory [Burdea 93], instrumental [Cadoz 94b] and behavioral [Fuchs 96] interface. Today, the major works in virtual
reality give a more precise and especially operational meaning to the expression virtual reality, which can be summed up by the definition established by the Virtual Reality Work Group9: a set of software and hardware tools that can realistically simulate an interaction with virtual objects that are computer models of real objects.

3. ARPANET: Advanced Research Projects Agency Network of the American Defense Department.
4. TCP/IP: Transmission Control Protocol/Internet Protocol.
5. Siggraph (www.siggraph.org): 29th edition in 2002.
6. Eurographics (www.eg.org).
7. Imagina (www.imagina.mc): 21st edition in 2002.
8. You will find in [Capin 99] a comparison of a certain number of these platforms.
Transdisciplinarity
With this definition, the focus shifts from the real images studied in computer graphics to the virtual objects being visualized. These objects are characterized not only by their appearance (their images) but also by their behavior, which lies outside the scope of computer graphics. Images thus reveal both shapes and behaviors: virtual reality is becoming transdisciplinary and is slowly moving away from its origins.
In 1992, D. Zelter had already proposed an evaluation of virtual universes based on three basic notions: the autonomy of the viewed objects, the interaction with these objects, and the feeling of presence in the virtual world [Zelter 92]. A universe is then represented by a point in AIP (autonomy, interaction, presence) space. In this space, 3D movies have coordinates (0,0,1), a flight simulator (0,1,1), and Zelter would place virtual reality at point (1,1,1). Still, we must point out that very little research has so far addressed the problem of virtual object autonomy. We can, nevertheless, note that so-called behavioral animation introduced several autonomous behaviors through particle systems [Reeves 83], flocks of birds [Reynolds 87], shoals of fish [Terzopoulos 94], and of course virtual humans [Thalmann 99].
An object's behavior is considered autonomous if the object is able to adapt to unforeseen changes in its environment: it must therefore be equipped with means of perception and action, and with a way to coordinate its perceptions and actions, in order to react realistically to these changes. This notion of autonomy is the root of our problem. With this new perspective,
we must return to the notion of virtual reality by exploring the real of philosophers (Section
2.3) and the virtual of physicists (Section 2.4) in order to understand why the autonomy
of virtual objects is vital to the realism of the images that they inhabit (Section 2.5).
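The perception/action coupling that this notion of autonomy requires can be sketched as a minimal reactive entity; the one-dimensional world, the sensor, and the avoidance rule below are invented for illustration, not taken from the text.

```python
class AutonomousEntity:
    """Minimal perceive/decide/act loop for an autonomous virtual object."""

    def __init__(self, position=0.0):
        self.position = position

    def perceive(self, environment):
        # Sensor: distance to the nearest obstacle (assumed 1-D world).
        return min(abs(obs - self.position) for obs in environment)

    def decide(self, distance):
        # Reactive rule: back away when an obstacle is too close.
        return -0.5 if distance < 1.0 else 0.5

    def act(self, move):
        self.position += move

    def step(self, environment):
        # Coordinate perception and action: one cycle of the loop.
        self.act(self.decide(self.perceive(environment)))

entity = AutonomousEntity(position=0.0)
for _ in range(3):
    entity.step(environment=[2.0])   # an unknown obstacle appears at x = 2
print(entity.position)               # 1.5: the entity advanced, then held back
```

The point is that the entity's trajectory is not scripted: it emerges from the coupling of its perceptions and actions when the environment changes.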
2.3 The Reality of Philosophers
Imagine human beings living in an underground cave, which has a mouth open towards the
light and reaching all along the cave; here they have been from their childhood, and have
their legs and necks chained so that they cannot move, and can only see before them, being
prevented by the chains from turning round their heads. [...] And they see only their own
shadows, or the shadows of one another, which the fire throws on the opposite wall of the
cave? [...] And if they were able to converse with one another, would they not suppose that
they were naming what was actually before them? [...] It is without a doubt then, I said,
that in their eyes, reality would be nothing else than the shadows of the images.
Plato, The Republic, Book VII, ≈ 375 B.C.
The word real comes from the Latin res, meaning a thing. A reality is a thing that
exists, which is defined mainly by its resistance and recurrence. Reality is realized, it
resists and it goes on without end. But what is a thing, and in what does it exist? These questions are without a doubt as old as philosophy itself, which is why philosophers and scientists revisit this intuitive yet complex notion of reality time and time again.

9. GT-RV: the French Virtual Reality Work Group of the National Center of Scientific Research and the French Ministry of Education, Research and Technology (www.inria.fr/epidaure/GT-RV/gt-rv.html).
2.3.1 Sensory/Intelligible Duality
Almost 2,500 years ago, Plato, in his famous allegory of the cave, had already placed
human beings in a semi-immersive situation in front of a screen (the cave wall in front
of them) so that they could only see the projected images of virtual objects (the shadows
of images). This simple thought experiment is no doubt the first virtual reality experience and a prelude to the Reality Centers of today [Göbel 93]. From René Descartes's10 I think, therefore I am, to physicist Bernard d'Espagnat's Veiled Reality [D'Espagnat 94], all philosophical doctrines revolve around two extreme positions (Figure 2.1)11. The first doctrine, objectivism, maintains that objects exist independently from us: there is an objective reality that is independent of the sensory, experimental and intellectual conditions of its appearance. The second doctrine, subjectivism, relates everything
to subjective thinking in such a way that there is no point in discussing the real nature
of objects because reality is only an artifact, an idea. This game of opposites between
these two perspectives broadens our thinking: everyone finds their own balance between
the object and the consciousness that perceives it [Hamburger 95].
- Materialism (philosophy of the material) is the belief in matter over mind. This philosophy,
in which there are only physical-chemical explanations, was the philosophy among many of
the atomists of Antiquity, as well as scientists from the 19th and 20th centuries.
- Empiricism (philosophy of induction) makes experience the
only source of knowledge; it justifies induction as the main
approach for going from a set of observed facts to stating a
general law.
- Positivism (philosophy of experience), empiric rationalism or
rational empiricism is based on the experimental method by
emphasizing the importance of hypothesis, which governs reasoning and controls experience: experimentation becomes deliberate, conscious and methodical. It gives deduction the
role of justifying laws after their inductive or intuitive formulation. The pragmatism of this school of thought, started by
A. Comte, and amended by, among others, G. Bachelard and
K. Popper, attracted a large number of scientists from the
19th and 20th centuries. (C. Bernard, N. Bohr, . . . ).
- Rationalism (philosophy of deduction) postulates that reason
is an organizational faculty of experience; through a priori
concepts and principles, it provides the formal framework for
all knowledge of phenomena by relying on a logical-deductive
approach.
- Idealism (philosophy of the idea) states that reality is not sensory, but ideal. Idealism
includes the idealist conceptions of Plato, R. Descartes, E. Kant, and G. Berkeley.
Figure 2.1: Reality and the Main Philosophical Doctrines
10. Descartes R., Discourse on Method, Part IV, 1637.
11. Figure 2.1 is freely inspired by the Baraquin et al. Dictionary of Philosophy [Baraquin 95].

Without getting involved in this debate, let us note that it is through our senses
and mind that we understand the world. Certainly, the senses do not protect us from
illusions, mirages or hallucinations, in the same way that ideas can be arbitrary, disembodied or schizophrenic. Nevertheless, our only way to investigate reality is through the
sensory and the intelligible. Already ahead of his time, I. Kant12 introduced the notion of noumenon (the object of understanding, from the intelligible) as opposed to the notion of phenomenon (the object of possible experience that appears in space and time, from the sensory). More recently, Edgar Morin, with his paradigm of complexity, has blurred the line between the sensory and the intelligible. He postulates a dialogical principle, defined as the necessary association of antagonisms for an organized phenomenon to exist, function and develop. In this way, a sensory/intelligible dialogue controls perception from the sensory receptors to the development of a cognitive representation [Morin 86]. In a similar perspective, physicist Jean-Marc Lévy-Leblond believes that the antinomial pairs that structure thought are irreducible: for example, continuous/discontinuous, determined/random, elemental/compound, finite/infinite, formal/intuitive, global/local, real/fictional, etc. He uses these examples, borrowed from physical science, to demonstrate how these pairs of opposites are inseparable, how they structure our scientific understanding of nature, and consequently our understanding of reality [Lévy-Leblond 96].
Therefore, in a dialogue between the objective and subjective, reality becomes descriptive, not of the thing in itself, but of the object as it is seen by man through the sensory,
experimental and intelligible conditions of its appearance. The boundary between the
opposing sides of this dialogue is continuously redrawn through interactions between the
world and man. The senses can be deceiving and the mind imagines actions to improve
its understanding of the world; these actions cause our senses to react, which in return,
keeps our ideas from becoming disembodied. Each person then creates their own reality
by continuously switching between the senses and the imagination. Science, in turn, is
trying to give them a universal representation that is free of individual illusions.
2.3.2 Virtual Versus Actual
While the concept of reality has always haunted philosophers, the concept of virtuality
in itself has only been a topic of research for about twenty or thirty years. The word virtual,
from the Latin virtus (virtue or strength), describes that which in itself has all the basic
conditions for its actualization. In this sense, virtual is the opposite of actual, the here
and now, that which is in action, and not in effect. This meaning of virtual, which dates
back to the Middle Ages, is analyzed in detail by Pierre Lévy. He distinguishes the real, the possible, the actual and the virtual as four different modes of existence, in which the possible is paired with the real and the virtual with the actual (Figure 2.2) [Lévy 95]. The possible is an unrealized and latent reality, which is realized without any change in its determination or nature, according to a predefined possibility. The actual appears here and now as a creative solution to a problem. The virtual is a dynamic configuration of forces and purposes. Virtualization transforms the initial
actuality into the solution of a more general problem.

12. Kant E., Critique of Pure Reason, I, Book II, Chapter III, 1781.
[Figure 2.2 arranges the four modes of existence in a square: the possible (all of the predetermined possibilities) and the virtual (all of the problems) form the latent pole; the real (persistent and resistant) and the actual (specific solution to a problem, here and now) form the visible pole. Realization leads from the possible to the real and potentialization back again; actualization leads from the virtual to the actual and virtualization back again.]
Figure 2.2: The Four Modes of Existence
To illustrate this contrast between virtual and actual, let’s examine the case of a company that decides to have all of its employees telecommute. Currently, all of the employees
work during the same hours and in the same place. However, with telecommuting, there
is no need to have all of the employees physically present in the same place, because
the workplace and employee schedule now depend on the employees themselves. The
virtualization of the company is then a radical modification of its space-time references;
employees can work from anywhere in the world, 24 hours a day. This takes place through
a type of dematerialization in order to strengthen the information communications within
the company. For the company, this is a sort of deterritorialization in which the coordinates of space and time are always posed as a problem rather than given as a specific solution. To conceive of this new company, the mind creates a mental representation based on the metaphor of the actual company. It begins by interpreting the functioning of the virtual company by analogy with that of a typical company; then, gradually, through its understanding, it forms a new idea of the notion of a company. The virtual company is not fictitious: even though it is detached from the here and now, it still provides services and advantages!
Today, the adjective virtual has become a noun. We say the virtual as we speak of the real, and this evolution is accompanied by a shift in meaning that makes the contrast between the virtual and the actual less clear. For the epistemologist Gilles-Gaston Granger,
the virtual is one of the categories of objective knowledge, along with the possible and the probable, that is essential to the construction of the objects of science from the actual phenomena of experience. The virtual is a representation of objects and facts that is detached from the conditions of a complete, singular and actual experience [Granger 95]. For Philippe Quéau, however, the virtual is really and actually present: it is an art of making usable imitations. It answers to the reality of which it is a part [Quéau 95].
In between these two extremes, Dominique Noël bases his theory on the semantics of S. Kripke's possible worlds [Kripke 63] in order to arrange possible worlds according to a relationship of accessibility, defined as follows: a world M1 is accessible from another world M2 if what is possible relative to M1 is also possible relative to M2. The accessibility relationship R presupposes an initial world M0, which is interpreted as the actual world. The worlds M1, M2, etc., are derived from M0 by the relationship R and are interpreted as so many virtuals. This provides a formalism for mapping out the main conceptions of the virtual [Noël 98].
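Noël's arrangement of worlds can be sketched with a toy accessibility relation; the worlds and the relation R below are invented for illustration:

```python
# Toy Kripke frame: worlds and an accessibility relation R.
R = {
    "M0": {"M1", "M2"},   # M0 is interpreted as the actual world
    "M1": {"M3"},
    "M2": set(),
    "M3": set(),
}

def accessible_from(world, relation):
    """All worlds reachable from `world` through R: the 'virtuals' of that world."""
    seen, frontier = set(), [world]
    while frontier:
        w = frontier.pop()
        for nxt in relation.get(w, set()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(sorted(accessible_from("M0", R)))   # ['M1', 'M2', 'M3']
```

Here the set of worlds accessible from M0 plays the role of the virtuals derived from the actual world by R.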
2.3.3 Philosophy and Virtual Reality
Even though philosophers have been divided over the concept of reality since its origins, they all agree that, whatever the status of an object in itself (that is, whether or not it exists independently of the person who perceives it), three types of mediation occur between the world and man: mediation of the senses (the perceived world), mediation of action (the experienced world) and mediation of the mind (the imagined world). These three mediations are inseparable and create three perspectives of the same reality (Figure 2.3).
In the first stage of our study, we will demonstrate that, as reality, virtual reality must
also allow this triple mediation of senses, action and mind. In other words, we must be
able to respond affirmatively to the three following questions:
1. Is it accessible to our senses?
2. Does it respond to our solicitations?
3. Can we form a modifiable representation of it?
But how would such a reality be virtual? The widespread use of the term virtual in
all circles of society, from scientists to philosophers, from artists to the general public,
changes its definition, which still appears to be a work in progress, and even renews the
concept of reality. For our purposes, we have chosen to refer to the physical science notion
of virtual to explain our conception of a reality that is virtual.
21
Concepts
[Figure 2.3 depicts reality at the center of a triangle whose vertices are the senses (perception), action (experiment) and the mind (representation).]

- Mediation of the senses enables the perception of reality.
- Mediation of the action enables one to experiment on reality.
- Mediation of the mind forms a mental representation of reality.

Figure 2.3: The Three Mediations of the Real
2.4 The Virtual of Physicists
Geometrical optics is based entirely on the representation of the light rays along which light is supposed to propagate. By dint of drawing their reflections and refractions, one ends up seeing these rays as real physical objects. [...] The image of automobile or boat headlight beams, or of those modern, thin laser beams and light-conducting glass fibers, imposes the all-too-vivid feeling of a familiar reality. Nevertheless, are these light rays of theory, these lines without thickness, anything other than beings of reason, constructions that are powerful, certainly, but purely intellectual? [Lévy-Leblond 96]
Physicists call a certain number of concepts virtual: for example, virtual images in
optics, virtual work in mechanics, virtual processes of interaction in particle physics,
etc., and many different models whose explanatory and predictive capabilities are widely
used to obtain more knowledge about the phenomena that they are trying to represent.
In fact, this notion of the virtual seems to go hand in hand with the mathematization of the physical sciences that has been taking place since the 17th century. The mathematical formalism used in physics is becoming more and more inaccessible to ordinary language and is, in fact, cutting itself off from intuitive representation. One is then forced to use analogies, images and words, which can only be fully understood through calculation. Even though these intermediate notions must not be granted more credibility than they deserve, they nevertheless shed light on the physicist's approach [Omnès 95].
2.4.1 Virtual Images
Geometrical optics deals with light phenomena by studying the behavior of light rays in transparent media. It is based on the notion of independent light rays that travel in a straight line in transparent, homogeneous media and whose behavior at the surface of a diopter or a mirror is governed by the Snell-Descartes laws (see for example [Bertin 78]).
22
The Virtual of Physicists
To illustrate these ideas, let us examine the case of an optical system (S) bounded by an entrance face L1 and an exit face L2, separating two homogeneous media of refractive indices n1 and n2 (Figure 2.4). By definition, if the light travels from left to right, the medium located to the left of face L1 is called the object medium, and the medium to the right of face L2 is called the image medium.

[Figure 2.4 shows the optical system (S) on its optical axis, with entrance face L1, exit face L2, the object medium (n1) on the left and the image medium (n2) on the right; a point O lies in the object medium and a point I in the image medium.]

Figure 2.4: Optical System

Let O be a point light source that sends light rays, also known as incident rays, onto L1.
Point O is an object for (S); if, after having crossed the system (S), the emerging light rays all pass through a same point I, this point I is called the image of O through the system (S). Two different scenarios are then distinguished:

- The emergent rays actually pass through I: I is said to be the real image of the object O; this is the case for the converging lens shown in Figure 2.5a;
- Only the extensions of these rays pass through I: I is then said to be the virtual image of O; this is the case for the diverging lens shown in Figure 2.5b.

A real image can be materialized on a screen placed in the plane where it is located, but the same does not hold for a virtual image, which is only a construction of the mind. Thus, in the case of a real image, if a powerful laser is placed at O, a small piece of paper will burn at I (concentration of energy), but in the case of a virtual image, the paper will not burn at I (no concentration of energy).
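The real/virtual distinction can be checked numerically with the standard thin-lens equation 1/v - 1/u = 1/f (not stated in the text, but classical; distances are signed, with the object distance u negative for an object to the left of the lens): a positive image distance v means the emergent rays actually converge (real image), a negative one means only their extensions do (virtual image).

```python
def image_distance(f, u):
    """Thin-lens equation 1/v - 1/u = 1/f, with signed distances
    (an object to the left of the lens has u < 0)."""
    return 1.0 / (1.0 / f + 1.0 / u)

def classify(f, u):
    v = image_distance(f, u)
    return "real image" if v > 0 else "virtual image"

# An object 300 mm to the left of the lens:
print(classify(f=100.0, u=-300.0))    # converging lens (f > 0): real image
print(classify(f=-100.0, u=-300.0))   # diverging lens (f < 0): virtual image
```

For the converging lens the image forms 150 mm behind the lens, where a screen (or the piece of paper of the example above) could be placed; for the diverging lens the computed v is negative, locating a virtual image on the object side.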
[Figure 2.5: for the converging lens (a), the emergent rays actually converge at I; for the diverging lens (b), only their extensions pass through I.]

Figure 2.5: Real Image (a) and Virtual Image (b)
In a transparent, isotropic medium, which may or may not be homogeneous, the path of light is independent of the direction of travel: if a light ray leaves point O and reaches point I by following a certain path, another light ray can leave from I and follow the same path in the opposite direction towards O. The roles of the object and image media can thus be interchanged: this property is known as the principle of the reversibility of light. Let us look again at Figure 2.5. In the case of the converging lens, it is easy to position a point source at I whose rays converge at O. In the case of the diverging lens, which creates a virtual image I of an object O (Figure 2.6a), a device must be used. Let us place a point source at O' so that it forms, through an auxiliary converging lens L', an image that, if L did not exist, would form at I (Figure 2.6b). Because of the presence of L, the rays converge at O (Figure 2.6c): we therefore say that I plays the role of a virtual object for the lens L.
[Figure 2.6 illustrates the construction just described: (a) the diverging lens L forms a virtual image I of the object O; (b) an auxiliary converging lens L' forms, from a point source at O', an image that would appear at I if L were absent; (c) with L present, the rays converge at O, so that I acts as a virtual object for L.]

Figure 2.6: Virtual Image (a), Real Image (b), and Virtual Object (c)
These notions of virtual image and virtual object in geometrical optics correspond to geometric constructions: the points of convergence of light-ray extensions. They are intermediate concepts, created as such, which cannot be materialized but which facilitate image construction, and for this reason they are widely used in the design of optical devices.
The images of virtual reality, then, are not virtual. They are essentially computer-generated images which, like any image, reproduce a reality of the physical world or display a mental abstraction. They materialize on our terminal screens and are therefore, as defined by opticians, real images. However, they give us virtual objects to look at, and it is the quality of the visual rendering that creates, in part, the illusion of reality.
24
The Virtual of Physicists
2.4.2 Image Perception
As in most of the older disciplines, the fundamentals of geometrical optics are based on concepts derived directly from common sense and acquired through daily experience. The same applies to the notion of light ray, to rectilinear propagation and to the principle of reversibility. These notions are so deeply instilled in us13, so effective in the everyday perception of our environment, that they lead us to errors of interpretation that produce certain illusions and mirages. These illusions are in fact solutions proposed by the brain after simulating the path followed by the light: it travels the path in the opposite direction (principle of reversibility), in a straight line (rectilinear propagation), ignoring diopters and mirrors (no refraction, no reflection), and assuming a homogeneous medium!
[Figure 2.7 shows three geometric constructions: (a) an object O and its image I behind a mirror; (b) light rays bent by the atmosphere making an object O beyond the horizon of the Earth appear at I; (c) a mirage, where O is seen both directly and at I.]
Each morning, we see our image behind the mirror, even though there is nothing behind it except a mental
image formed in our brain . . . that forgot the laws of reflection (Figure 2.7a). In dry, clear weather, we can
see an island behind the horizon. The differences in density with the altitude within the atmosphere create a
non-homogeneous environment that causes the light rays to bend; if this variation in density is large enough,
the curve imposed on the light rays can effectively make the island appear above the horizon (Figure 2.7b).
A mirage is the same type of illusion (Figure 2.7c). The layers of air near the ground can become lighter than the upper layers due to local warming: they then cause rays traveling towards the ground to curve. The simultaneous observation of an object O in direct vision and of an image I due to the curving of certain rays can then be interpreted as the presence of a body of water in which the object O is reflected.
Figure 2.7: Illusions of Rectilinear Propagation and Opposite Return
These illusions are real in the sense that real images make an impression on our retina;
and they are virtual in the sense that how we interpret these real images leads us to assume
that virtual objects, which are the results of incomplete geometric constructions (Figure
2.7), exist. The interpretation is the result of an image construction simulation, for which
the brain, like an inexperienced student, would not have verified the application conditions
of the model that it used (here, a model of rectilinear propagation of light in a transparent and homogeneous medium). Geometrical optics provides us with an interpretation of a certain number of optical illusions, but to understand them better we must, like the heroine of Lewis Carroll14, pass through the looking glass, go to the other side, and enter the model's universe.
13. Geneticists would say that they are coded, neurophysiologists that they are memorized, electronics engineers that they are wired, computer scientists that they are compiled, etc.
14. Lewis Carroll, Through the Looking-Glass, 1872.
2.4.3 Physics and Virtual Reality
Physicists base their models on the observation of phenomena and, in return, they explain the observed phenomena through these models. It is the comparison between simulation of the model and experimentation on the real that drives this mutual development of observation and modeling. The brain makes the same kind of comparison as it perceives our environment: it compares its own predictions to the information that it extracts from the environment. The senses thus act both as a source of information and as a means of verifying hypotheses; perception problems are caused by unresolved inconsistencies between information and predictions [Berthoz 97].
For physicists, the virtual is a tool for understanding the real; it appears explicitly as a construction within the universe of models. Such is the case for the geometric constructions that lead to virtual images in optics, or for the virtual movements involved in the determination of motion in mechanics. Everything happens as if is indeed the expression most frequently used by specialists when they try to explain a phenomenon within their field of expertise. Just like the once upon a time that begins fairy tales, everything happens as if is the traditional preamble that marks the passage into the universe of the scientific model; it clearly puts the reader on guard: what follows only exists and makes sense within the framework of the model.
To continue along our line of thought, we will demonstrate that virtual reality is
virtual because it involves the universe of models. A virtual reality is a universe of models
where everything happens as if the models were real because they simultaneously offer the
triple mediation of senses, action and mind (Figure 2.8). The user can thus observe the
activity of the models through all of his sensory channels (mediation of senses), test the
reaction time of models in real-time through adapted peripherals (mediation of action),
and build the pro-activity of models by modifying them on-line to adapt them to his
projects (mediation of the mind). The active participation of the user in this universe of
models opens the way to a true model experiment and hence to a better understanding
of their complexity.
2.5 Principle of Autonomy
Geppetto took his tools and set out to make his puppet. “What name shall I give him?” he said to himself. “I think I will call him Pinocchio.” [...] Having found a name for his puppet, he began to work in good earnest, and he first made his hair, then his forehead, then his eyes.
The eyes being finished, imagine his astonishment when Geppetto noticed that they moved
and stared right at him. [...] He then took the puppet under the arms and placed him on the
floor to teach him to walk. Pinocchio’s legs were stiff and he could not move, but Geppetto
led him by the hand and showed him how to put one foot in front of the other. When his
legs became flexible, Pinocchio began to walk by himself and run about the room, until he went out the door, jumped into the street and ran off.
Carlo Collodi, The Adventures of Pinocchio, 1883
A virtual reality application is made up of a certain number of models (the virtual of
physics) that users must be able to perceive, experiment and modify in the conditions of
the real (the three mediations of the real of philosophers). Thus, users can jump in or out
of the system’s control loop, allowing them to operate models on-line. As for modelers,
[Figure 2.8 depicts the model at the center of a triangle whose vertices are the senses (perception, activity), action (experiment, reactivity) and the mind (representation, pro-activity).]

- The mediation of the senses allows you to perceive the model by observing its activity.
- The mediation of the action allows you to experiment on the model and thus test its reactivity.
- The mediation of the mind creates a mental representation of the model that defines its pro-activity.

Figure 2.8: The Three Mediations of Models in Virtual Reality
they must integrate a user model (an avatar) into their system so that the user can be
effectively included and participate in the development of this universe of models.
2.5.1 Operating Models
The three main operating modes of models are perception, experiment and modification. Each one corresponds to a different mediation of the real [Tisseau 98b].
The mediation of the senses takes place through model perception: the user observes
the activity of the model through all of his sensory channels. The same applies to a
spectator in an Omnimax theater who, seated on moving seats in front of a hemispheric
screen in a room equipped with a surround-sound system, really feels as if he is a part of
the animated film that he is watching, even though he cannot modify its course. Here,
the quality of the sensory renderings and their synchronization is essential: this is real-time animation's specialty. The most common definition of animation is to make something move. In the more specific field of animated films, to animate means to give the impression of movement by scrolling through an ordered collection of images (drawings, photographs, computer-generated images, etc.). These images are produced by applying an evolution model to the objects in the represented scene.
The mediation of action involves model experiments: the user tests the model’s reactivity through adapted manipulators. The same applies to the fighter pilot at the controls
of a flight simulator: his training focuses mainly on learning the reactive behavior of the
plane. Based on the principle of action and reaction, the emphasis here is placed on the
quality of real-time rendering, which is what interactive simulation does best. The standard meaning of simulate is to make something that is not real seem as if it is. In the field
of science, simulation is an experiment on a model; it allows one to test the quality and internal consistency of the model by comparing its results to those of experiments on the modeled system. Today, it is used more and more to study complex systems in which human beings are involved, both to train operators and to study the reactions of users. In these man-in-the-loop simulations, the operator develops his own behavioral model, which interacts with the other models.
The mediation of the mind takes place when the user himself modifies the model by
using an expressiveness that is equivalent to that of the modeler. The same applies to
an operator who partially reconfigures a system while the rest of the system remains
operational. In this fast-growing field of interactive prototyping and on-line modeling,
it is essential that users be able to easily intervene and express themselves. To obtain
this level of expressiveness, the user generally uses the same interface and especially the
same language as that of the modeler. The mediation of the mind is thus realized by the
mediation of language.
So, the closely related fields of real-time animation, interactive simulation and on-line
modeling represent three aspects of the operation of models. The three together allow
the triple mediation of the real that is needed in virtual reality and define three levels of
interactivity (Figure 2.9).
. Real-time animation corresponds to a zero level of interactivity between the user and the model being run. The user is under the model's influence because he cannot act on the model's parameters: he is simply a spectator of the model.
. Interactive simulation corresponds to the first level of interactivity, because the user can access some of the model's parameters. The user thus plays the role of an actor in the simulation.
. In on-line modeling, the models themselves are parameters of the system: interaction reaches its highest level. The user, by himself modifying the model being run, participates in the creation of the model and thus becomes a cre-actor (creator-actor).
[Figure: two axes, mediation versus interactivity; level 0: spectator / real-time animation / senses; level 1: actor / interactive simulation / action; level 2: cre-actor / on-line modeling / mind]
Figure 2.9: The Different Levels of Interaction in Virtual Reality
2.5.2 User Modeling
The user can interact with the image through adapted behavioral interfaces. But what the user is able to observe or do within the universe of models is only what the system controls through device drivers, which are vital intermediaries between man and machine. The user's sensory-motor mediation is thus managed by the system, and then modeled within the system in one way or another. The only real freedom that the
user has is his decision-making options (mediation of the mind), which are restricted by
the system’s limitations in terms of observation and action.
The user's inclusion must also be made explicit by representing him within the system through a specific model: an avatar. At the very least, this avatar is placed in the virtual environment in order to define the point of view needed for the different sensory renderings. It has virtual sensors and robot actuators (vision [Renault 90], sound [Noser 95], and hand grips [Kallmann 99]) to interact with the other models. The data gathered by the avatar's virtual sensors are transmitted to the user in real time by device drivers, while the user's commands are transmitted in the opposite direction to the avatar's robot actuators. It also has methods of communication with which to communicate with the other avatars; these methods strengthen its sensory-motor capabilities and allow it to receive and emit language data. The avatar's display may be non-existent (BrickNet [Singh 94]), reduced to a simple 3D primitive that is textured but not structured (MASSIVE [Benford 95]), assimilated to a polyarticulated rigid system (DIVE [Carlsson 93]), or given a more realistic representation that includes evolved behaviors such as gestures and facial expressions (VLNET [Capin 97]). This display, when it exists, facilitates the identification of avatars and the non-verbal communication between them. Thus, with this explicit user modeling,
three major types of interaction can coexist in the universe of digital models:
. model-to-model interactions such as collisions and attachments;
. model-to-avatar interactions that allow sensory-motor mediation between a model
and a user;
. avatar-to-avatar interactions that allow avatars to meet in a virtual environment
shared by several users (televirtuality [Quéau 93a]).
The user's status in virtual reality differs from his status in the scientific simulation of numerical calculations or in the interactive simulation of training simulators (Figure 2.10). In scientific simulation, the user sets the model's parameters beforehand and interprets the calculation results afterwards. With a scientific visualization system, the progress of the calculations may be observed through virtual reality sensory peripherals [Bryson 96], but the user remains the model's slave. Scientific simulation systems are model-centered, because scientific models aim to create universal representations that are separate from individual impressions of reality. In contrast, interactive simulation systems are fundamentally user-centered, giving the user all the means needed to control and pilot the model: the model must remain the user's slave. By introducing the notion of the avatar, virtual reality places the user at the same conceptual level as the model. The master-slave relationship is thus eliminated, in favor of greater autonomy for models and, consequently, greater autonomy for users.
2.5.3 Autonomizing Models
To autonomize a model, you must provide it with ways to perceive and respond to its
surrounding environment. It must also be equipped with a decision-making module so
that it can adapt its reactions to both external and internal stimuli. We use three lines of
thought as a guide to the autonomization of models: autonomy by essence, by necessity
and by ignorance.
[Figure: diagrams of the user's relation to the models; an avatar appears among the models in the virtual reality case]
Figure 2.10: User Status in Scientific Simulation (a), in Interactive Simulation (b) and in Virtual Reality (c)
Autonomy by essence characterizes all living organisms, from a simple cell to a human
being. Avatars are not the only models that can perceive and respond to their digital
environments: any model that is supposed to represent a living being must have this kind of sensory-motor interface. The notion of the animat, for example, concerns artificial animals
whose rules of behavior are based on real animal behaviors [Wilson 85]. Like an avatar,
an animat is situated in an environment; it has sensors to acquire information about its
environment and effectors to react within this environment. However, unlike an avatar,
which is controlled by a human being, an animat must be self-controlled in order to coordinate its perceptions and actions [Meyer 91]. Control can be innate (pre-programmed)
[Beer 90], but in the animat approach, it will more often be acquired in order to stimulate
the genesis of adaptive behaviors for survival in changing environments. Therefore, research in this very active field revolves primarily around the study of learning (epigenesis)
[Barto 81], development (ontogeny) [Kodjabachian 98] and evolution (phylogenesis) [Cliff 93] of control architectures [Meyer 94, Guillot 00]¹⁵. Animating virtual creatures through these different approaches provides a very clear example of such adaptive behaviors [Sims 94], and the modeling of virtual actors falls under the same approach [Thalmann 96]. Thus, autonomizing a model associated with an organism allows us to understand more fully the autonomy observed in that organism.

¹⁵ From Animals to Animats (Simulation of Adaptive Behavior): biennial conference held since 1990 (www.adaptive-behavior.org/conf).

Autonomy by necessity involves the instant recognition of changes in the environment, by organisms and mechanisms alike. Physical modeling of mechanisms usually proceeds by solving systems of differential equations. Solving these systems requires knowledge of the boundary conditions that constrain the movement; in reality, however, these conditions can change at any time, whether their causes are known or not (interactions, disturbances, changes in the environment). The model must then be able to perceive these changes in order to adapt its behavior during execution. This is all the more true when the user is in the system: through his avatar, he can make changes that were initially completely unforeseen. The example of sand flowing through an hourglass illustrates this concept. The physical simulation of a granular medium such as sand is usually based on the micromechanical interactions between pairs of solid spheres. Such simulations take several hours of calculation to visualize the flow of sand from one bulb of the hourglass to the other, and are therefore unsuited to the constraints of virtual reality [Herrmann 98]. Modeling with larger grains (mesoscopic level), based on point masses connected by appropriate interactions, yields displays that are satisfactory but not interactive [Luciani 00]. Our approach involves large autonomous grains of sand that individually detect collisions (elastic collisions) and are sensitive to gravity (free fall). It allows us not only to simulate the flow of sand in the hourglass, but also to adapt in real time if the hourglass is turned over or has a hole [Harrouet 00]. Thus, autonomizing any model whatsoever allows it to react to unplanned situations that arise during execution as a result of changes in the environment caused by the activity of other models.
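A minimal one-dimensional sketch of this autonomous-grain idea (the class, its names and the simplified physics are our own assumptions, not the model of [Harrouet 00]): each grain independently applies the gravity it perceives and stops when it detects the floor or another grain, so turning the hourglass over amounts to inverting the gravity that every grain perceives.

```python
class Grain:
    """Autonomous grain of sand: falls under the gravity it perceives and
    stops on the floor or on another grain it detects by itself.
    A 1-D illustrative sketch, not the model cited in the text."""

    def __init__(self, height):
        self.height = height

    def step(self, gravity, occupied, floor=0):
        # free fall: move one cell in the perceived gravity direction
        target = self.height - gravity
        # autonomous collision detection against the floor and other grains
        if target >= floor and target not in occupied:
            self.height = target

def simulate(grains, gravity, steps):
    """Each grain evolves independently; no global solver is involved."""
    for _ in range(steps):
        for g in grains:
            others = {other.height for other in grains if other is not g}
            g.step(gravity, others)

grains = [Grain(5), Grain(3)]
simulate(grains, gravity=1, steps=10)   # the grains settle into a pile
```

Because control is local to each grain, a perturbation such as inverting `gravity` mid-run needs no change to the simulation loop, which is the point of autonomy by necessity.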
Autonomy by ignorance reveals our current inability to explain the behavior of complex
systems through the reductionist methods of an analytical approach. A complex system
is an open system made up of a heterogeneous group of atomic or composite entities.
The behavior of the group is the result of the individual behavior of these entities and
of their different interactions in an environment that is itself active. Depending on the school of thought, the group's behavior is seen either as organized toward a goal, which would be teleological behavior [Le Moigne 77], or as the result of an auto-organization of the system, which would be emergence [Morin 77]. The lack of overall behavior models for complex systems leads us to distribute control over the systems' components, and thus to autonomize the models of these components. The simultaneous evolution of these components enables a better understanding of the behavior of the overall system. Hence, a group of autonomous models interacting within the same space contributes to the research on, and experimentation with, complex systems.
Autonomizing models, whether by essence, necessity or ignorance, plays a part in
populating virtual environments with artificial life that strengthens the impression of
reality.
2.5.4 Autonomy and Virtual Reality
The user of a virtual reality system is at once spectator, actor and creator of the universe of digital models with which he interacts. Better still, he participates fully in this universe, where he is represented by an avatar: a digital model that has virtual sensors and robot actuators to perceive and act on this universe. The user's true autonomy lies in his capability to coordinate his perceptions and actions, whether at random, simply wandering around this virtual environment, or in pursuit of his own goals. The user is thus placed at the same conceptual level as the digital models that make up this virtual world.
Different types of models — such as particles, mechanisms and organisms — coexist
within virtual universes. For the organism models, it is essential that they have ways to
perceive, act and decide in order to reproduce as best as possible their ability to decide
by themselves. This approach is also needed for mechanisms that must be able to react
to unforeseen changes in their environment. Moreover, our inability to understand systems once they become too complex leads us to decentralize the control of such systems' components. The models, whatever they are, must then have means of investigation equal to those of the user, and decision methods that match their functions. A sensory
disability (myopia, deafness, anesthesia) causes an overall loss of the subject’s autonomy.
This is also true for a mobility impairment (sprain, tear, strain) and for a mental impairment (amnesia, absent-mindedness, phobia). So, providing models with ways to perceive,
act and make decisions, means giving them an autonomy that places them at the same
operational level as the user.
Continuing the line of thought begun with the philosophers and physicists, we have established as a principle the autonomization of the digital models that make up a virtual universe. A virtual reality is a universe of interacting autonomous models, within which everything happens as if the models were real, because they provide users and other models with a triple mediation of senses, action and language, all at the same time (Figure 2.11). To accept this autonomy as a principle is to accept sharing control over the evolution of virtual universes between the human users and the digital models that populate these universes.
[Figure: a network of models and avatars, each linked to the others through senses, action and language]
Figure 2.11: Autonomy and Virtual Reality
This conception of virtual reality thus coincides with Collodi's old dream, described in the quotation at the beginning of Section 2.5, of making his famous puppet an autonomous entity that fills his creator's life. Geppetto's approach to reaching this goal was the same as the one we have outlined for virtual reality. He started by naming him (I will call him Pinocchio), then worked on his appearance ([he] started by making his hair, then his forehead) and gave him sensors and actuators (then the eyes [...]). He then defined his behaviors (Geppetto led him by the hand and showed him how to put one foot in front of the other) in order to make him autonomous (Pinocchio began to walk by himself); finally, he could not help but notice that autonomizing a model means that the creator loses control over his model (he jumped into the street and ran off).
2.6 Conclusion
Born of interdisciplinary work on the computer-generated digital image, virtual reality has transcended its origins and established itself today as a new discipline within the engineering sciences. It involves the specification, design and creation of a participatory and realistic virtual universe.
A virtual universe is a group of interacting autonomous digital models in which human beings participate through avatars. Creating these universes is based on an autonomy
principle according to which digital models:
. are equipped with virtual sensors that enable them to perceive other models,
. have robot actuators to act on the other models,
. have methods of communication to communicate with other models,
. and control their perception-response coordinations through a decision-making module.
An avatar is therefore a digital model whose decision-making abilities are delegated to the human operator it represents. Digital models, situated in space and time, evolve autonomously within the virtual universe, whose overall evolution results from their combined evolutions.
Man, model among models, is spectator, actor and creator of the virtual universe
to which he belongs. He communicates with his avatar through a language and various
sensory-motor devices that make the triple mediation of senses, action and language
possible. The multi-sensory rendering of his environment is that of realistic, computer-generated digital images: 3D, audible, tactile, kinesthetic, proprioceptive, animated in real time, and shared over computer networks.
Therefore, the epistemological line of thought followed in this chapter places the concept of autonomy at the center of our research problem in virtual reality. The following chapters ask how to put these concepts into practice: which models (Chapter 3) and which tools (Chapter 4) should be used to autonomize models?
Chapter 3
Models
3.1 Introduction
In this chapter, we present the main models that we have developed in order to observe
the principle of autonomy that was introduced in the previous chapter.
First, we will adapt the multi-agent approach to the problem of virtual reality by
introducing the notion of participatory multi-agent simulation.
Secondly, we will use the serious example of blood coagulation to explain the phenomenon of collective auto-organization that takes place in multi-agent simulations. Through this example, we will defend the idea of in virtuo experimentation: a new type of experimentation that is possible today in biology.
We will then explain how to model perceptive behavior with the help of fuzzy cognitive
maps. These maps allow us to define the behavior of an autonomous entity, control its
movement, and simulate its movement through the entity itself so that it can anticipate
its behavior through active perception.
These models are thus our intermediary representations between the concepts (Chapter
2) that inspire them and the tools (Chapter 4) that implement them.
3.2 Autonomous Entities
The unit of analysis is therefore the activity of the person in real life. It refers neither to the individual nor to the environment, but to the relationship between the two. The environment
does not just modify actions, it is part of the system of action and cognition. [...] A real
activity then, is made up of flexibility and opportunism. From this perspective, and contrary
to the theory of problem solving, a human being does not engage in action with a series of
rationally pre-specified objectives according to an a priori model of the world. He looks for
his information in the world. What becomes normal is the way that he integrates himself
to act in an environment that changes and that he can modify, and how he uses and selects
available information and resources: social, symbolic and material. [Vacherand-Revel 01]
Despite having become more realistic, virtual universes will lack credibility as long
as they are not populated with autonomous entities. The autonomization of entities is
divided into three modes: the sensory-motor mode, the decision-making mode and the
operational mode. It is actually based upon a sensory-motor autonomy: each entity is provided with sensors and effectors that allow it to gather information about, and respond to, its environment. It is also based on a decision-making autonomy: each entity makes decisions
according to its own personality (its history, intentions, state and perceptions). Finally,
it requires execution autonomy: each entity’s execution controller is independent of the
other entities’ controllers.
In fact, this notion of autonomous entity is similar to the notion of agent in the
individual-centered approach of multi-agent systems.
3.2.1 Multi-Agent Approach
The first works on multi-agent systems date back to the 1980s. They were based upon the asynchronism of language interactions between actors in distributed artificial intelligence [Hewitt 77], the individual-centered approach of artificial life [Langton 86], and the autonomy of mobile robots [Brooks 86]. Currently, there are two major trends within this disciplinary field: either attention is focused on the agents themselves (intelligent, rational or cognitive agents [Wooldridge 95]), or the focus is on the interactions between agents and on collective aspects (reactive agents and multi-agent systems [Ferber 95]). Thus, we can find agents that reason through beliefs, desires and intentions (BDI: Belief-Desire-Intention [Georgeff 87]), agents that are more emotional, as in some video games (Creatures [Grand 98]), or purely reactive, stimulus-response agents, as in insect societies (MANTA [Drogoul 93]). In any case, these systems differ from the symbolic planning models of classic Artificial Intelligence (STRIPS [Fikes 71]) by recognizing that intelligent behavior can emerge from interactions between rather reactive agents placed in an environment that is itself active [Brooks 91].
These works were motivated by the following observation: there are systems in nature,
such as insect colonies or the immune system for example, that are capable of accomplishing complex, collective tasks in dynamic environments without external control or
central coordination. Thus, research on multi-agent systems is striving towards two major
objectives. The first is to build distributed systems capable of accomplishing complex tasks through cooperation and interaction. The second is to understand and experiment with the collective auto-organization mechanisms that appear when many autonomous entities interact. In any event, these models favor
a local approach where decisions are not made by a central coordinator familiar with each
entity, but by each of the entities individually. These autonomous entities, called agents,
only have a partial, and therefore incomplete, view of the virtual universe in which they
evolve. Each agent can be considered as an engine that is continuously following a triple
cycle of perception, decision and action:
1. perception: it perceives its immediate environment through specialized sensors,
2. decision: it decides what it must do, taking into account its internal state, its sensor
values and its intentions,
3. action: it acts by modifying its internal state and its immediate environment.
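The triple cycle above can be sketched in a few lines of code. This is a hedged illustration only: the class, the foraging-style environment and all names are our own assumptions, not part of any multi-agent platform cited in this chapter.

```python
class Agent:
    """Minimal reactive agent following the perception-decision-action
    cycle. The environment is assumed (for this sketch) to be a shared
    dictionary mapping 1-D positions to resource levels."""

    def __init__(self, position):
        self.position = position
        self.energy = 10          # internal state

    def perceive(self, environment):
        # 1. perception: only the immediate neighborhood is visible,
        #    so the agent's view of the universe is partial
        return {p: environment.get(p, 0)
                for p in (self.position - 1, self.position, self.position + 1)}

    def decide(self, percepts):
        # 2. decision: move toward the richest perceived cell
        return max(percepts, key=percepts.get)

    def act(self, target, environment):
        # 3. action: move, then consume one unit of the local resource,
        #    modifying both the internal state and the environment
        self.position = target
        if environment.get(target, 0) > 0:
            environment[target] -= 1
            self.energy += 1

def run(agents, environment, steps):
    """Each step, every agent runs one perception-decision-action cycle;
    no central coordinator intervenes."""
    for _ in range(steps):
        for a in agents:
            a.act(a.decide(a.perceive(environment)), environment)

env = {0: 0, 1: 3, 2: 5}
ants = [Agent(position=0), Agent(position=2)]
run(ants, env, steps=3)
```

Note that each agent decides only from its local percepts and internal state; any collective pattern (here, the depletion of the richest cells) emerges from the interleaved local cycles rather than from a global plan.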
Two major types of applications are affected by multi-agent systems: distributed
problem-solving and multi-agent system simulation. In distributed problem-solving, there
is an overall satisfaction function that allows us to evaluate the solution according to
the problem to be solved; while in simulation, we assume that there is at best, a local
satisfaction function for each agent [Ferber 97].
Like any modeling approach, the multi-agent approach adopted here simplifies the phenomenon under study. Nevertheless, it largely preserves the phenomenon's complexity by allowing for diverse entities, structures and associated interactions. Today, however, multi-agent modeling lacks the theoretical tools either to deduce the behavior of an entire multi-agent system from the individual behavior of its agents (the direct problem), or to induce the individual behavior of an agent from the known collective behavior of a multi-agent system (the inverse problem). Simulating these models partly makes up for this lack of appropriate theoretical tools by allowing us to observe structures and behaviors that emerge collectively from local, individual interactions.
We retain this multi-agent simulation approach for virtual reality.
3.2.2 Participatory Multi-Agent Simulation
Many design methods for multi-agent systems have already been proposed [Wooldridge 01], but none of them has really become established yet, as UML has for object-oriented design and programming [Rumbaugh 99]. We retain here the Vowel approach, which analyzes multi-agent systems from four perspectives of equal paradigmatic importance: Agents, Environments, Interactions, and Organizations (the vowels A, E, I, O) [Demazeau 95]. In virtual reality, we suggest adding the vowel U (forgotten in AEIO), as in User, to include the active participation of the human operator in the simulation (man is in the loop). Virtual reality multi-agent simulations thus become participatory.
A as in Agent
A virtual universe is a multi-model universe in which all types of autonomous entities
can coexist: from the passive object to the cognitive agent, with a pre-programmed
behavior or an adaptive, evolutionary one. An agent can be atomic or composite, and may itself head another multi-agent system. Consequently, a principle of component heterogeneity must prevail in virtual worlds.
E as in Environment
An agent’s environment is made up of other agents, users that take part in the simulation (avatars) and objects that occupy the space and define a certain number of space-time
constraints. Agents are thus placed in their environment, a real open system in which
entities can appear and disappear at any time.
I as in Interaction
Diverse entities (objects, agents and users) generate diverse interactions among themselves. We find physical phenomena such as attractions, repulsions, collisions and attachments, as well as exchanges of information among agents and/or avatars through gesture and language transmissions (speech acts, or code exchanges to be analyzed). Destroying and creating objects and/or agents further enriches these interaction possibilities. Consequently, new properties, unknown to the individual entities taken in isolation, may eventually emerge from the interactions between an agent and its environment.
O as in Organization
A set of social rules, such as a rule of conduct, can structure the whole set of entities by defining roles and the constraints between these roles. These roles can be assigned to agents or negotiated among them; they can be known in advance or emerge from conflicting or collaborative situations, the results of interactions among agents. So, both predefined and emergent organizations structure multi-agent systems at the organizational level, a level which is both an aggregate of lower-level entities and a higher-level entity in its own right.
U as in User
The user can take part in the simulation at any time. He is represented by an avatar-agent whose decisions he controls and whose sensory-motor capabilities he uses. He interfaces with his avatar through different multi-sensory peripherals, and has a language with which to communicate with agents, modify them, create new ones, or destroy them. The structural identity between an agent and an avatar allows the user, at any time, to take the place of an agent by taking control of its decision-making module. During this substitution, the agent can enter a learning mode, either through imitation (PerAc [Gaussier 98]) or by example [Del Bimbo 95]. At any time, the user can return control to the agent he replaced. This principle of substitution between agents and users can be evaluated by a kind of Turing test [Turing 50]: a user interacts with an entity without being able to tell whether it is an agent or another user, and the agents respond to his actions as if he were another agent.
The Vowel approach thus extended (AEIO + U) fully involves the user in multi-agent simulation, hence concurring with the approach of participatory design [Schuler 93], which prefers to see users as human actors rather than human factors [Bannon 91]. Such a participatory multi-agent simulation in virtual reality implements different types of models (multi-models) from different fields of expertise (multi-disciplines).
It is often complex because its overall behavior depends just as much on the behavior of
the models themselves as the interactions between the models. Finally, it must include
the free will of the human user who uses the models on-line.
3.2.3 Behavioral Modeling
A model is an artificial representation of a phenomenon or an idea; it is an intermediary
step between the intelligible and the sensory, between the phenomenon and its idealization,
between the idea and its perception. This representation is based on a system of symbols
that have a meaning, not just for the designer who composes them, but also for the user
who perceives them (a user’s perspective of the semantics may be different from that of
the modeler’s). A model has multiple functions. For a modeler, the model allows him
to imagine, design, plan and improve his representation of the phenomenon or idea that
he is trying to model. The model becomes a communication medium to represent, make
people aware of, explain or teach the related concepts. At the other end of the spectrum, the model helps the user to understand the represented phenomenon (or idea); he can also evaluate it and experiment on it by simulation.
Three major categories of digital models are usually identified: descriptive, causal
and behavioral [Arnaldi 94]. The descriptive model reproduces the effects of the modeled
phenomenon without any a priori knowledge about the causes (geometric surfaces, inverse
kinematics, etc.). The causal model describes the causes capable of producing an effect (for example, rigid polyarticulated systems, finite elements, mass-spring networks, etc.). Its execution requires that the solution implicitly contained in the model be made explicit; the calculation time is then longer than that of a descriptive model, for which the solution is
given (causal/dynamic versus descriptive/kinematic). The behavioral model imitates how
living beings function according to a cycle of perception, decision and action (artificial life,
animat, etc.). It involves both the internal and external behaviors of entities [Donikian 94].
Internal behaviors are internal transformations that can lead to perceptible changes on
the entity’s exterior (i.e. morphology and movement). These internal behaviors are
components of entities and depend little on their environment, unlike external behaviors
that reflect the environment’s influence on the entity’s behavior. These external behaviors
can be purely reactive or drive-based (stimulus/response), perceptive or emotional (the response to a stimulus depends on an internal emotional state), cognitive or intentional (the response is guided by a goal), adaptive or evolutionary (learning mechanisms allow the response to be adapted over time), or social or collective (the response is constrained by the rules of a society).
Section 3.3, which follows, presents a serious example of multi-agent simulation based
on reactive behaviors in order to observe the collective auto-organizing properties of such
systems. In another direction, Section 3.4 focuses on an emotional type of behavior to
highlight the difference between individual sensation and perception.
3.3 Collective Auto-Organization
A protein is more than a translation in another language of one of the books from the gene
library. The genes that persist in our cells during our existence contain — are made up of — information. The real tools — the real actors — of cell life are the proteins, which
are more or less quickly destroyed and disappear if they are not renewed. Once the cell puts
them together, proteins fold up into complex shapes. Their shape determines their capability
to attach themselves to other proteins and interact with them. And it is the nature of these
interactions that determine their activity — their effect. [...] But trying to attribute an
unequivocal activity — an intrinsic property — to a given protein amounts to an illusion.
Its activity, effect and longevity depend on its environment, the collectivity of the other
proteins around it, the pre-existing ballet into which it will integrate itself. [Ameisen 99]
Faced with the growing complexity of biological models, biologists need new modeling
tools. The advent of the computer and computer science, and in particular virtual reality,
now offers new experiment possibilities with digital simulations and introduces a new type
of investigation: the in virtuo experiment. The expression in virtuo (in the virtual) is a
neologism created here by analogy with the adverbial phrases of Latin etymology¹ in vivo
(in the living) and in vitro (in glass). An in virtuo experiment is thus an experiment
conducted in a virtual universe of digital models in which man participates.
3.3.1 Simulations in Biology
Medicine has used human (in vivo) experimentation since its beginnings. In antiquity,
this experimentation was influenced by magic and religion. It was the Greek school of Kos that
demythologized medical procedure: around 400 B.C., Hippocrates, with his famous oath,
established the first rules of ethics by outlining the obligations of a practitioner towards
his patients. For a long time, life sciences remained more or less descriptive and were
based on observation, classification and comparison. It was not until the development of
physics and chemistry that (in vitro) laboratory examinations became common practice. In particular, progress in optics and the invention of increasingly sophisticated microscopes made it possible to go beyond the limits of direct observation. The Italians Malpighi (1628-1694) and Morgagni (1682-1771) thus became the precursors of histology by demonstrating the importance of studying the correlations between microscopic
biological structures and clinical manifestations. And undoubtedly, Frenchman Claude
Bernard and his famous Introduction to the Study of Experimental Medicine (1865) were
the start of real scientific medicine based on the rationalization of experimental methods.
Analogical simulation, or experiments on real models, was used for scientific purposes
starting in the 18th century with the biomechanical automata of Frenchman Jacques
de Vaucanson (The Flute Player, 1738), and reached its peak in the 20th century, in
aeronautics, with wind tunnel tests on plane models. In medicine, (in vitro) laboratory
examinations are based on the same approach. In hematology laboratories, in vitro blood
¹ in virtuo is not, however, a Latin expression. Biologists often use the expression in silico to
describe computerized calculations; there is for example, a scientific journal called In Silico Biology
(www.bioinfo.de/journals.html). However, in silico makes no reference to the participation of man
in the universe of digital models in progress; this is why we prefer in virtuo which, by its common root,
reflects the experimental conditions of virtual reality.
Collective Auto-Organization
samples used during laboratory analysis function as simplified representations of in vivo
blood: the rheological constraints are, in particular, very different.
Population Dynamics
Digital simulation, or experimentation on virtual models, is usually based on solving a
mathematical model made up of differential equations such as ∂x/∂t = f(x, t),
where t represents time and x = (x1, x2, ..., xn) is the vector of the state
variables characteristic of the system to be simulated. In molecular biology, the
macroscopic behavior of a system is often obtained by coupling the kinetic equations of the
chemical reactions with the thermomechanical equations of the molecular flow, which to a
first approximation amounts to solving reaction-diffusion equations such as
∂x/∂t = D∇²x + g(x), where D is the system's diffusion matrix and g is a non-linear
function representing the effects of the chemical kinetics.
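As an illustration, a reaction-diffusion equation of this form can be integrated with an explicit finite-difference scheme. This sketch is not taken from the thesis: the one-dimensional grid, the logistic reaction term g(x) = x(1 − x) and all parameter values are illustrative assumptions.

```python
import numpy as np

def reaction_diffusion_step(x, D, g, dt, dh):
    """One explicit Euler step of dx/dt = D * laplacian(x) + g(x) on a 1D grid."""
    lap = (np.roll(x, 1) - 2 * x + np.roll(x, -1)) / dh**2  # periodic boundary
    return x + dt * (D * lap + g(x))

# Illustrative non-linear kinetics term (logistic production):
g = lambda x: x * (1.0 - x)

x = np.zeros(100)
x[45:55] = 1.0            # initial concentration pulse
for _ in range(1000):
    x = reaction_diffusion_step(x, D=0.1, g=g, dt=0.01, dh=1.0)
```

With these parameters the scheme is stable (D·dt/dh² = 0.001) and the pulse both spreads by diffusion and grows toward the saturation value of the logistic term, a caricature of the sensitivity to diffusion and kinetic parameters discussed below.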
In hematology, this type of approach has been applied since 1965 to the kinetics of
coagulation [Hemker 65], then developed and broadened to take into account new data
[Jones 94] or to demonstrate the influence of flow phenomena [Sorensen 99]. The solutions
obtained in this way depend strongly on the boundary conditions of the system,
as well as on the values given to the various reaction times and diffusion parameters. Biological
systems are complex systems and modeling them requires a large number of parameters.
As Hopkins and Leipold [Hopkins 96] demonstrated in their critique of the use of
differential-equation models in biology, adjusting the parameters of a model
known to be invalid can nevertheless lead to valid fits to experimental data. To avoid this
drawback, some authors encourage the use of statistical methods
in order to minimize the influence of the specific values of certain parameters [Mounts 97].
Today, software tools such as GEPASI [Mendes 93] and StochSim [Morton-Firth 98] make
it possible to apply this classic approach of solving chemical kinetics through
differential equations at the molecular level, while tools such as vCell [Schaff 97] and eCell
[Tomita 99] operate at the cell level.
This type of simulation in cell biology is based on models that represent groups of
cells, not individual cells. Consequently, these models explicitly describe the behavior of
populations that, we hope, implicitly reflect the individual behavior of cells.
Individual Dynamics
In contrast, two major methods of digital simulation used in biochemistry, Monte
Carlo and molecular dynamics, are based on selecting microscopic theories (structures
and molecular interaction potentials) to determine certain macroscopic properties of the
system [Gerschel 95]. These methods explore the system's configuration space starting from
an initial configuration of n particles whose evolution is followed by sampling. This
sampling is time-dependent in molecular dynamics, and random in Monte Carlo methods.
The macroscopic properties are then calculated as averages over all of the configurations
(microcanonical or canonical ensembles from statistical mechanics). The number n of
particles influences calculation times, usually proportional to n², and limits the number
of sampled configurations, in other words the accuracy of the averages in a Monte Carlo method,
or the system's evolution time in a molecular dynamics calculation. These methods require
Models
significant computing resources; today, the size of the largest simulated systems is around
10⁶ atoms for real evolution times of less than 10⁻⁶ s [Lavery 98]. In addition, they mainly
involve systems close to equilibrium and, because of this, are less suitable for studying
systems far from equilibrium and fundamentally irreversible, as may be the case with the
phenomena of hemostasis.
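A minimal sketch of the Metropolis Monte Carlo scheme described above, for a single particle in a harmonic potential (an illustrative choice, not from the thesis): configurations are sampled by a biased random walk, and a macroscopic property is computed as a canonical-ensemble average over the sampled configurations.

```python
import random
import math

def metropolis_sample(energy, x0, n_steps, beta=1.0, step=0.5, seed=0):
    """Metropolis Monte Carlo: random sampling of the canonical ensemble.

    Macroscopic properties are then averages over the sampled configurations."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        dE = energy(x_new) - energy(x)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = x_new                       # accept the trial move
        samples.append(x)
    return samples

# Harmonic potential E(x) = x^2 / 2: the canonical average of x^2 tends to 1/beta
samples = metropolis_sample(lambda x: 0.5 * x * x, x0=0.0, n_steps=200_000)
mean_x2 = sum(s * s for s in samples) / len(samples)
```

The accuracy of the average is limited by the number of sampled configurations, exactly the trade-off noted in the text.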
Cellular automata [Von Neumann 66], inspired by biology, offer an even more individualized
approach. Conway's Game of Life [Berlekamp 82] popularized cellular automata
and Wolfram formalized them [Wolfram 83]. These are dynamic systems in which space
and time are discretized. A cellular automaton is made up of a finite set of squares (cells),
each one taking one of a finite number of possible states. The squares' states change
synchronously according to local interaction laws. Although they are based on a cellular
metaphor, cellular automata are rarely used in cell biology; we can however cite the
simulation of an immune system by a cellular automaton in order to study the collective
behavior of lymphocytes [Stewart 89], the use of a cellular automaton to reproduce immune
phenomena within lymphatic ganglions [Celada 92], and the development of a programming
environment based on cellular automata to model different cell behaviors [Agarwal 95].
The heavy constraints of space-time discretization, as well as those of action synchronicity,
are undoubtedly what inhibits the use of this technique in biology.
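The synchronous local update rule of a cellular automaton can be sketched with Conway's Game of Life, cited above; the toroidal wrap-around used here is an implementation choice, not part of the original definition.

```python
def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid.

    Every cell changes state at the same time, according to a purely local rule:
    a cell is alive next step iff it has 3 live neighbors, or 2 and is alive."""
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (n := neighbors(r, c)) == 3 or (n == 2 and grid[r][c]) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker": a period-2 oscillator
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
after_two = life_step(life_step(grid))   # returns to the initial pattern
```

The two constraints discussed in the text are visible in the code: space is a fixed lattice, and the whole grid must be updated synchronously.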
Multi-Agent Auto-Organization
To overcome these methodological limitations, we propose modeling biological systems
with multi-agent systems. The biological systems to be modeled are usually very complex.
This complexity is mainly due to the diversity of their elements, structures and interactions.
In the example of coagulation, molecules, cells and complexes make blood an environment
that is open (appearance/disappearance and dynamics of elements), heterogeneous (different
morphologies and behaviors) and formed of composite entities (organs, cells, proteins)
that are mobile, distributed in space and variable in number over time. These elements
can be organized into different levels known beforehand (a cell is made up of
molecules) or emerge during the simulation from the multiple interactions between elements
(formation of complexes). The interactions themselves can be different in nature and
operate on different space and time scales (cell-cell interactions, cell-protein interactions
and protein-protein interactions). There is currently no theory able to formalize
this complexity; in fact, there is no formal a priori proof method as there is for highly
formalized models. In the absence of formal proof, we must turn to experiments on the
systems as they run in order to validate them a posteriori:
this is what we have set out to do with the multi-agent modeling and simulation of blood
coagulation.
As we define it within the framework of virtual reality, multi-agent simulation enables
real interaction with the modeled system as suggested by a recent prospective study
on the requirements of cell behavior modeling [Endy 01]. In fact, during simulation, it
is possible to view coagulation taking place in 3D; this visualization makes it possible
to observe the phenomenon as if you had a virtual microscope that was movable and
adjustable at will, and capable of different focus adjustments. The biologist can therefore
focus on the observation of a particular type of behavior and observe the activity of a
sub-system, or even the overall activity of the system. At any moment, the biologist can
interrupt the coagulation and take a closer look at elements in front of him and at the
interactions taking place, and can then restart the coagulation from where he had stopped.
At any moment, the biologist can disrupt the system by modifying a cell property (state,
behavior) or removing or adding new elements. Thus, he can test an active principle
(for example, the active principle of heparin in thrombophilia), and more generally an
idea, and immediately observe the consequences on the functioning system. Multi-agent
simulation puts the biologist at the center of a real virtual laboratory that brings him
closer to experimental science methods, while at the same time giving him access to digital
methods.
In addition to observing the activity of a digital model being run on a computer, the user
can also test the reactivity and adaptability of the model in operation,
thus benefitting from the behavioral character of digital models. An in virtuo experiment
provides an experience that a simple analysis of digital results cannot. Between formal a
priori proofs and a posteriori validations, there is room today for a virtual reality
experienced by the biologist, one that can go beyond generally accepted ideas to reach
experienced ideas.
3.3.2 Coagulation Example
The analogy between a biological cell and a software agent is almost natural. In fact,
a cell interacts with its environment through its membrane and its surface receptors. Its
nucleus is the seat of a genetic program that determines its behavior; consequently, it can
perceive, decide and respond. By extension, the analogy between a multicellular system
and a multi-agent system is immediate.
Cell-Agent
Our design of a cell-agent revolves around three main steps. The first step is to choose
a 3D geometric shape that represents the cell membrane. The second step is to place
the sensors — the cell’s receptors — on this virtual membrane (Figure 3.1a). The third
step involves defining the cell-agent’s own behaviors (Figure 3.1b). A behavior can be
defined by an algorithm — calculation of a mathematical function, resolution of a system
of differential equations, application of production rules — or by another multi-agent
system that describes the inside of the virtual cell. The choice between algorithm and
multi-agent system depends on the nature of each problem within the cell and how much is
known about the problem. Therefore, we have chosen an algorithm to model the humoral
response [Ballet 97a] and a multi-agent system to simulate the proliferative response of B
lymphocytes to anti-CD5 [Ballet 98a], as demonstrated experimentally [Jamin 96].
We have a library of predefined behaviors such as receptor expression on the cell’s surface,
receptor internalization, cell division (mitosis), programmed cell death (apoptosis) and
generation of molecular messengers (Figure 3.1c). New behaviors can be programmed
and integrated into a cell-agent so that it can adapt to a specific behavior of other cells.
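The three design steps can be sketched as a data structure; all names here (CellAgent, Receptor, express_receptor) are hypothetical and are not taken from the thesis software.

```python
from dataclasses import dataclass, field

@dataclass
class Receptor:
    kind: str            # e.g. "CD5", "TF"
    position: tuple      # location on the virtual membrane (illustrative)
    active: bool = False

@dataclass
class CellAgent:
    """Sketch of the three-step cell-agent design: shape, receptors, behaviors."""
    radius: float                                   # step 1: spherical membrane
    receptors: list = field(default_factory=list)   # step 2: surface sensors
    behaviors: dict = field(default_factory=dict)   # step 3: name -> callable

    def step(self):
        """Run every behavior in turn (preconditions are omitted in this sketch)."""
        for name, behavior in self.behaviors.items():
            behavior(self)

def express_receptor(cell):
    """One behavior from the predefined library: receptor expression."""
    cell.receptors.append(Receptor(kind="TF", position=(0.0, 0.0, cell.radius)))

cell = CellAgent(radius=1.0, behaviors={"expression": express_receptor})
cell.step()   # the expression behavior adds one receptor to the membrane
```

A behavior is just a callable, so it can wrap an algorithm (a function, a differential-equation solver, production rules) or delegate to a nested multi-agent system, as the text describes.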
[Figure panels: (a) an agent carrying receptors on its membrane; (b) agent behaviors defined by functions such as f(x), g(x), ln(x); (c) a library of predefined behaviors: reproduction, activation, internalization, expression, mutation, apoptosis, ...]
Designing a cell-agent revolves around three main steps. The first step is to choose a 3D geometric shape
that represents the cell membrane. The second step is to place the sensors — the cell’s receptors — on
this virtual membrane (a). The third step involves defining the cell-agent’s behaviors (b). A behavior can
be defined by an algorithm — calculation of a mathematical function, resolution of a system of differential
equations, application of production rules — or by another multi-agent system that describes the inside
of the virtual cell. The choice between algorithm and multi-agent system depends on the nature of each
problem within the cell and how much is known about the problem. A library of predefined behaviors such
as receptor expression on the cell’s surface, receptor internalization, cell division (mitosis), programmed cell
death (apoptosis) and generation of molecular messengers makes it possible to express a wide variety of
behaviors (c).
Figure 3.1: Cell-Agent Design
The multi-agent system is then obtained by placing an initial mixture of cells within a
virtual environment (i.e. a virtual test tube or virtual organ), in other words, a group of
cell-agents that each have a well-defined behavior. Within this universe, the interactions
between cell-agents are local: two cell-agents can only interact if they are within
each other's immediate neighborhood and if their receptors are compatible.
These interactions take into account the distance between receptors and their relative
orientation, as well as their affinity [Smith 97]. At each cycle, the basic behavior of each
cell-agent determines its movement by taking into account the interactions it had in its
sphere of activity. If one of its receptors is activated through contact with a receptor from
another cell-agent, then a more complex behavior can be triggered if all of its preconditions
are validated.
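The per-cycle mechanics described above (movement, neighborhood test, receptor compatibility) might be sketched as follows; the agent layout, the receptor and ligand names and the compatibility test are illustrative assumptions, far simpler than the model's distance, orientation and affinity criteria.

```python
import math

def within_reach(a, b, radius):
    """Two cell-agents can interact only if inside each other's sphere of activity."""
    return math.dist(a["pos"], b["pos"]) <= radius

def compatible(a, b):
    """Receptor compatibility, reduced here to a shared receptor/ligand name."""
    return bool(a["receptors"] & b["ligands"])

def cycle(agents, radius=1.0):
    """One simulation cycle: move every agent, then fire local interactions."""
    for a in agents:
        a["pos"] = tuple(p + v for p, v in zip(a["pos"], a["vel"]))  # basic movement
    events = []
    for a in agents:
        for b in agents:
            if a is not b and within_reach(a, b, radius) and compatible(a, b):
                events.append((a["name"], b["name"]))  # trigger complex behavior here
    return events

agents = [
    {"name": "platelet", "pos": (0.0, 0.0), "vel": (0.1, 0.0),
     "receptors": {"GPIIb"}, "ligands": set()},
    {"name": "fibrin",   "pos": (0.5, 0.0), "vel": (0.0, 0.0),
     "receptors": set(),   "ligands": {"GPIIb"}},
]
events = cycle(agents)
```

Because every test is local (distance and receptor matching), no agent needs a global view of the system; the collective behavior can only emerge from these pairwise encounters.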
Virtual Vein
We have thus created, in collaboration with the Hematology Laboratory of the Brest
University Hospital Center, a multi-agent model of a virtual vein [Ballet 00a]. The blood
vessels are lined with a layer of endothelial cells, one of whose roles is to prevent the
adhesion of platelets. The blood circulating in the vessels carries cells and proteins that are
activated when a vessel is torn in order to stop the bleeding. The stoppage of bleeding
is a local phenomenon that takes place after a cascade of cellular and enzymatic events
in a liquid environment made unstable by blood flow. The phenomena of hemostasis and
coagulation involve mobile cells (platelets, red blood cells, granulocytes, monocytes), fixed
cells (endothelial cells, subendothelial fibroblasts), procoagulant proteins and anticoagulant
proteins. Each one of these elements has its own behavior; these different behaviors
become auto-organized, which results in the hole being blocked by platelets interwoven
with fibrin. Hence, our model takes into account the known characteristics of coagulation:
3 types of cells (platelets, endothelial cells and fibroblasts) and 32 types of proteins are
included in 41 interactions (Figure 3.2). Some of these reactions take place on the cell
surface, and others in the blood.
Reagents               Products              Description
--- Exogenous Pathway ---
TF + VIIa           -> VII:TF                Coagulation
VII + VIIa          -> VIIa + VIIa           Coagulation
VII + VII:TF        -> VIIa + VII:TF         Coagulation
IX + VII:TF         -> IXa + VII:TF          Coagulation
X + VII:TF          -> Xa + VII:TF           Coagulation
TFPI + Xa           -> TFPI:Xa               Inhibition
TFPI:Xa + VII:TF    -> 0                     Inhibition
--- Endogenous Pathway ---
VII + IXa           -> VIIa + IXa            Coagulation
IXa + X             -> IXa + Xa              Coagulation
II + Xa             -> IIa + Xa              Coagulation
XIa + IX            -> XIa + IXa             Coagulation
XIa + XI            -> XIa + XIa             Coagulation
IIa + XI            -> IIa + XIa             Retro-activation
IIa + VIII          -> IIa + VIIIa           Retro-activation
IIa + V             -> IIa + Va              Retro-activation
Xa + V              -> Xa + Va               Retro-activation
Xa + VIII           -> Xa + VIIIa            Retro-activation
Xa + Va             -> Va:Xa                 Prothrombinase
VIIIa + IXa         -> VIIIa:IXa             Tenase
VIIIa:IXa + X       -> VIIIa:IXa + Xa        Tenase
Va:Xa + II          -> Va:Xa + IIa           Prothrombinase
AT3 + IXa           -> 0                     Inhibition
AT3 + Xa            -> 0                     Inhibition
AT3 + XIa           -> 0                     Inhibition
AT3 + IIa           -> 0                     Inhibition
alpha2M + IIa       -> 0                     Inhibition
TM + IIa            -> IIi                   Inhibition
AT3 + IXa           -> 0                     Inhibition
AT3 + Xa            -> 0                     Inhibition
AT3 + XIa           -> 0                     Inhibition
AT3 + IIa           -> 0                     Inhibition
ProC + IIi          -> PCa + IIi             Inhibition
PCa + Va            -> 0                     Inhibition
PCa + Va:Xa         -> Xa                    Inhibition
PCa + PS            -> PCa:S                 Inhibition
PCa + PCI           -> 0                     Inhibition
PCa:S + Va          -> 0                     Inhibition
PCa:S + V           -> PCa:S:V               Inhibition
PCa:S:V + VIIIa:IXa -> IXa                   Inhibition
PCa:S:V + VIIIa     -> 0                     Inhibition
--- Activation of Platelets ---
P + IIa             -> Pa + IIa              Coagulation
--- Formation of Fibrin ---
I + IIa             -> Ia + IIa              Coagulation
The coagulation process is initiated on the surface of fibroblasts: it is the tissue factor (TF) on the surface of the
fibroblasts that initiates the coagulation process. The so-called exogenous pathway is activated: Factor VIIa binds
to the tissue factor and activates Factors VII, IX and X. The first thrombin molecules, the key factor of the phenomenon,
are generated, which launches the endogenous pathway and the actual coagulation cascade. Thrombin retro-activates
Factors XI, V and VIII, and most importantly the platelets. On the surface of the platelets thus activated, the
tenase and prothrombinase complexes can then form and make their important contribution
to the formation of thrombin. Thrombin also converts fibrinogen into fibrin, which in turn binds together
the activated platelets covering the hole. The blood clot forms. At the same time, the inhibitors come into
action: TFPI binds to activated Factor X to inhibit the exogenous pathway (Factor VIIa bound to the
tissue factor); alpha2macroglobulin and antithrombin3 inhibit thrombin and the activated Factors X,
XI and IX. Protein C is activated by the thrombin-thrombomodulin complex (IIi: thrombin inhibitor) bound
to the surface of the endothelial cells. Activated Protein C has two courses of action: it directly inhibits
activated Factor V and, by binding to Protein S and to Factor V, indirectly inhibits activated Factor VIII.
The PCI (Protein C Inhibitor) prevents the inhibitory action of activated Protein C.
Figure 3.2: Interactions of the Coagulation Model
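One possible way to encode the interactions of Figure 3.2 is as a table of rewrite rules that agents consult when two entities meet. This sketch is an assumption about representation (the thesis does not specify one); the four sample rules are copied from the table above.

```python
# Each rule: a pair of reagents (unordered) mapped to its products, as in Figure 3.2.
RULES = {
    frozenset({"TF", "VIIa"}): ["VII:TF"],   # exogenous pathway start
    frozenset({"TFPI", "Xa"}): ["TFPI:Xa"],  # inhibition
    frozenset({"Xa", "Va"}):   ["Va:Xa"],    # prothrombinase complex
    frozenset({"I", "IIa"}):   ["Ia", "IIa"],  # fibrin formation (IIa is catalytic)
}

def react(a, b):
    """Return reaction products when two entities meet, or None if no rule applies."""
    return RULES.get(frozenset({a, b}))
```

Using an unordered pair as the key makes the lookup symmetric (react("TF", "VIIa") and react("VIIa", "TF") agree), which matches the local, pairwise encounters of the multi-agent model.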
The wall of the vein is represented by a surface of 400 µm² lined with endothelial cells.
In the middle of this virtual wall, a hole exposes 36 fibroblast cells, von Willebrand factors
and tissue factors (TF). Platelets, the main coagulation factors (Factors I (fibrinogen),
II (prothrombin), V, VII, VIII, IX, X, XI) and the inhibition factors (alpha2macroglobulin
(alpha2M) and antithrombin3 (AT3)), TFPI (Tissue Factor Pathway Inhibitor), Protein
C (PC), Protein S (PS), thrombomodulin (TM) and the Protein C inhibitor (PCI), which
are all initially present in plasma, are placed randomly in the virtual vein.
In Virtuo Experiments
Multi-agent simulation can then begin: each agent evolves over time according to
its predefined behavior. Figure 3.3 presents six intermediary states of the phenomenon
through a 3D viewer. We can thus observe primary hemostasis and plasmatic coagulation
in their different phases and, over time, measure how the quantity of each element evolves
[Ballet 00b].
The 6 images depict different stages of the evolution of the coagulation simulation; they were captured from
different viewing angles of the virtual vessel. Image A shows the geometry of the models at the beginning
of the simulation: the yellow spheres represent the endothelial cells. The hole in the vein, in the middle of
the endothelial layer, exposes the subendothelial fibroblasts (blue spheres). The coagulation and inhibition
factors in the plasma are represented by their Roman numerals, their Arabic numerals and their initials.
The unactivated platelets are black spheres. In the beginning, the exogenous pathway is activated, indicated
by Factor VIIa's bond to the tissue factor of the fibroblast membrane (red pyramid in Image B). The
inhibition of these VIIa:TF complexes by the TFPI-activated Factor X complex is very fast (disappearance
of the red pyramids). Images C and D show the intermediary steps of the generation of thrombin: Factor
XI binds to the platelets and forms a complex with Factor IX, which it activates (Image C); the tenase
complex (IXa:VIIIa), which also binds to the platelet surface, activates Factor X (Image D). Image E
shows a prothrombinase complex (activated Factor V-activated Factor X complex) on the surface of the activated
platelets (red spheres), converting prothrombin into thrombin. Finally, Image F shows the adhesion
of platelets to the subendothelium and the formation of the blood clot. The generated fibrin molecules are
represented by white and orange spheres (Image F).
Figure 3.3: Evolution of the Multi-Agent Simulation of Coagulation
Validating the model is based on several factors:
- the similarities between thrombin generation curves obtained in virtuo and in vitro,
- the consistency with pathologies: decrease of the thrombin generated in hemophilia (Figure 3.4a) and increase in thrombophilia,
- the correction of hemophilia through activated Factor VII (Figure 3.4b),
- the correction of hypercoagulability through heparin (Figure 3.4c).
Hence, several thousand entities², representing 3 types of cells and 32 types of proteins
involved in 41 types of interactions, auto-organize in a cascade of reactions to fill in a
hole that was not planned in advance, and which can be created at any time during the
simulation and anywhere in the vein. This significant example reinforces the idea that
reactive autonomous entities, through their conflicting and collaborative interactions, can
become auto-organized and adopt a group behavior unknown to the individual entities.
Figure a compares the total number of thrombin molecules created during the simulation in the cases of
normal coagulation, hemophilia A (absence of Factor VIII) and hemophilia B (absence of Factor IX). We can
thus observe a weaker coagulation in the cases of hemophilia A and B: hemophilia A generates half the
thrombin of a normal coagulation, while hemophilia B generates three times less. Figure b compares the
thrombin generation curves in the cases of normal coagulation, hemophilia B (absence of Factor IX) and
hemophilia B treated by adding activated Factor VII. The difference in coagulation between the treated
hemophilia and normal coagulation is no more than 6.5% here. Figure c compares the total number of
thrombin molecules created during the simulation in the cases of normal coagulation, the Factor V Leiden
mutation and the Factor V Leiden mutation treated with heparin. The Factor V Leiden mutation inhibits the
deactivation of activated Factor V by activated Protein C. This genetic disease increases the risk of venous
thrombosis: the number of thrombin molecules formed is 4 times larger with the Factor V Leiden mutation
than without it. Heparin is an anticoagulant that stimulates the action of antithrombin III and thus corrects
the hypercoagulability.
Figure 3.4: Main Results of the Multi-Agent Coagulation Model
3.3.3 In Vivo, In Vitro, In Virtuo
The main qualities of a model (an artificial representation of an object or a phenomenon)
are based on its capabilities to describe, suggest, explain, predict and simulate. Model
simulation, or experimentation on the model, consists of testing this representation's
behavior under the effect of the actions that we can carry out on the model. The results of a
simulation then become hypotheses that we try to prove by designing experiments on the
singular prototype of the real system. These experiments, rationalized in this way, make up
a true in vivo experiment. We distinguish four main types of models (perceptive, formal,
analogical, digital) that lead to five major simulation families.
² Specific data is given in the thesis of Pascal Ballet [Ballet 00b].
In Petto
Simulation of a perceptive model corresponds to the in petto intuitions that come from
our imagination and from the perception that we have of the system being studied. It thus
makes it possible to test our perceptions on the real system. Unclassified and unreasoned
inspirations, associations of ideas and heuristics form mental images that have the power of
suggestion. The scientific approach will try to rationalize these first impressions, while
artistic creation will turn them into digital or analogical works depending on the medium
used. But it is often the suggestive side of the perceptive model that sparks these creative
moments and leads to invention or discovery.
In Abstracto
Simulation of a formal model is based on in abstracto reasoning carried out within
the framework of a theory. This reasoning provides predictions that can be tested on the real
system. Galle’s discovery of Neptune in 1846, based on theoretical predictions by Adams
and Le Verrier, is an illustration of this approach within the framework of the theory of
perturbations of the two-body problem in celestial mechanics. Likewise, in particle physics, the
discovery in 1983 of the intermediate bosons W⁺, W⁻ and Z⁰ had been predicted a few years
before by the theory of electroweak interactions. Hence, from the extremely large to
the extremely small, the predictive characteristic of formal models has proven to be very
fruitful in many scientific fields.
In Vitro
Simulation of an analogical model takes place through in vitro experiments on a sample
or model constructed by analogy with the real system. The similarities between the model
and the system help us to better understand the system being studied. The wind tunnel
tests on plane models allowed aerodynamicists to better characterize the flow of air around
obstacles through the study of coefficients of similarity introduced at the end of the 19th
century by Reynolds and Mach. Likewise, in physiology, the analogy of the heart as a
pump allowed Harvey (1628) to demonstrate that blood circulation fell under hydraulic
laws. Thus, throughout history, the explanatory side of analogical models has been used, with more
or less of an anthropocentric approach, to make what was unknown, known.
In Silico
Simulation of a digital model is the execution of a program that is supposed to represent
the system to be modeled. In silico calculations provide results that are checked
against measurements made on the real system. The digital resolution of systems of
mathematical equations corresponds to the most common use of digital modeling. In fact,
the analytical determination of solutions often encounters problems that involve both the
characteristics of the equations to be solved (non-linearity, coupling) and the complexity
of the boundary conditions and the need to take into account very different space-time scales.
Studying the kinetics of chemical reactions, calculating the deformation of a solid under
Individual Perception
the effect of thermomechanical constraints, or characterizing the electromagnetic radiation
of an antenna are classic examples of implementing systems of differential equations on a
computer. Consequently, the digital model obtained by discretization of the theoretical
model has today become a vital tool for going beyond theoretical limitations, but is still
often considered a last resort.
In Virtuo
More recently, the possibility of interacting with a program while it is being run has
opened the way to true in virtuo experimentation on digital models. It is now possible to
disturb a model that is being run, to dynamically change the boundary conditions, and to
delete or add elements during simulation. This gives digital models the status of virtual
models, infinitely more malleable than the real models used in analogical modeling. Flight
simulators and video games were the precursors of virtual reality systems. Such systems
become necessary when it is difficult, or even impossible, to carry out direct experiments
for whatever reason: hostile environment, access difficulties, space-time constraints, budget
constraints, ethics, etc. The user is able to go beyond simply observing the digital
model's activity while it is being run on a computer, and can test the reactivity and
adaptability of the model in operation, thus benefitting from the digital model's behavioral
characteristic.
The idea of a model representing the real is based on two metaphors, one artistic and
the other legal. The legal metaphor of delegation (the elected official represents the people,
the papal nuncio represents the pope, the ambassador represents the head of state) suggests
the idea of replacement: the model takes the place of reality. The artistic metaphor of
realization (the play is performed in public, artistic inspiration is represented by a work
of art) proposes the idea of presence: the model is a reality. An in virtuo experiment
on a digital model makes it truly present and thus opens up new fields of exploration,
investigation and understanding of the real.
3.4 Individual Perception
From time to time, one runs into a figure of a small part of such a network of symbols,
in which each symbol is represented by a node where arcs come in and go out. The lines
symbolize, in a certain manner, the links of the triggering chains. These figures try to capture
the intuitive notion of conceptual connectedness. [...] The difficulty is that it is not easy
to represent the complex interdependence of a large number of symbols with a few
lines connecting dots. The other problem with this type of diagram is that it is wrong
to think that a symbol is necessarily either in service or out of service. While this is true of
neurons, this bistability does not carry over to symbols. From this point of view, symbols are much
more complex than neurons, which is hardly surprising since they are made up of many
neurons. The messages exchanged between symbols are more complex than a simple I am
activated, which is roughly the content of neuron messages. Each symbol can be stimulated
in many different ways, and the type of stimulation determines which other symbols will be
activated. [...] Let us imagine that there are configurations of nodes united by connections (which
can be in several colors in order to highlight the different types of conceptual connectedness)
that accurately represent the mode of stimulation of symbols by other symbols. [Hofstadter 79]
To become emotional, behaviors must determine their responses, not only according
to external stimuli, but also according to internal emotions such as fear, satisfaction, love
[Figure 3.5 diagram: real phenomena are linked to four families of models. Perception leads from real phenomena to perceptual models (the imaginary: in petto intuitions and impressions); formalization leads to formal models (theories: in abstracto reasoning and predictions); analogical creation and materialization lead to analogical models (prototypes: in vitro experiments and similarities); digital creation, digitization and implementation lead to digital models (programs: in virtuo experiments and behaviors). Modelization, model simulation and in vivo experiments close the loop from the models back to the understanding of the real phenomena.]
Figure 3.5: Modeling and Understanding Phenomena
or hate. We propose to describe such behaviors through fuzzy cognitive maps in which these
internal states are explicitly represented, making it possible to draw a clear line
between perception and sensation.
3.4.1 Sensation and Perception
Fuzzy Cognitive Maps
Cognitive maps come from the work of psychologists, who introduced this concept to describe
the complex topological memorization behaviors of rats (cognitive maps [Tolman 48]).
The maps were then formalized as directed graphs and used in decision theory applied
to economics [Axelrod 76]. Finally, they were combined with fuzzy logic to become
fuzzy cognitive maps (FCM: Fuzzy Cognitive Maps [Kosko 86]). The use of these maps
has even been considered for the overall modeling of a virtual world [Dickerson 94]. We
propose here to delocalize fuzzy cognitive maps to the level of each agent in order to model
their autonomous perceptive behavior within a virtual universe.
Just like semantic networks [Sowa 91], cognitive maps are directed graphs in which the
nodes are concepts (Ci) and the arcs are the influence links (Lij) between these
concepts (Figure 3.6). An activation degree (ai) is associated with each concept, while
the weight Lij of an arc indicates a relationship of inhibition (Lij < 0) or stimulation
(Lij > 0) from concept Ci to concept Cj. The dynamics of the map are computed
mathematically by a normalized matrix product, as specified by the formal definition of
fuzzy cognitive maps given in Figure 3.7.
This map is made up of 4 concepts and has 7 arcs. Each concept Ci has one degree of activation ai. The vector of activations and the line matrix are:

a = (a1, a2, a3, a4)^T,

L = (  0   0  -1   0 )
    ( +2  -1   0   0 )
    ( -1   0   0  +1 )
    ( -1  +1   0   0 )

A zero element of the line matrix, Lij = 0, indicates the absence of an arc from concept Ci to concept Cj, and a non-null element of the diagonal, Lii ≠ 0, corresponds to an arc from concept Ci to itself.
Figure 3.6: Example of a Cognitive Map
We use fuzzy cognitive maps to specify the behavior of an agent (graph structure) and to control its movement (map dynamics). Thus, a fuzzy cognitive map has sensory concepts, whose activation degrees are obtained by fuzzification of the data coming from the agent's sensors, and motor concepts, whose activations are defuzzified to be sent to the agent's effectors. The intermediary concepts reflect the internal state of the agent and intervene in the calculation of the map's dynamics. Fuzzification and defuzzification are carried out according to the principles of fuzzy logic [Kosko 92]: a concept represents a fuzzy subset, and its degree of activation represents the membership degree in that fuzzy subset.
Feelings
As an example, we want to model an agent that perceives its distance from an enemy. According to this distance and to its fear, it will decide to flee or to stay. The closer the enemy, the more afraid it is; and in turn, the more afraid it is, the more rapidly it will flee.
We model this flight through the fuzzy cognitive map in Figure 3.8a. This map has four concepts: two sensory concepts (enemy is close and enemy is far), a motor concept (flee) and an internal concept (fear). Three lines indicate the influences between the concepts: the proximity of the enemy stimulates fear (enemy is close → fear), fear causes flight (fear → flee), and the enemy moving away inhibits fear (enemy is far → fear). We chose the unforced (fa = 0) continuous mode (V = [0, 1], δ = 0, k = 5). The activation of the sensory concepts (enemy is close and enemy is far) is carried out by fuzzification of the sensed distance to the enemy (Figure 3.8c), while the defuzzification of the desire to flee gives the agent its flight speed (Figure 3.8d).
Models
Let K denote one of the rings Z or R, δ one of the numbers 0 or 1, and V one of the sets {0, 1}, {−1, 0, 1} or [−δ, 1]. Given (n, t0) ∈ N² and k ∈ R+*, a fuzzy cognitive map F is a sextuplet (C, A, L, A, fa, R) where:

1. C = {C1, ..., Cn} is the set of n concepts forming the nodes of a graph.

2. A ⊂ C × C is the set of arcs (Ci, Cj) directed from Ci to Cj.

3. L : C × C → K, (Ci, Cj) ↦ Lij is the function associating a weight with each pair of concepts (Ci, Cj): Lij = 0 if (Ci, Cj) ∉ A, and Lij is the weight of the arc directed from Ci to Cj if (Ci, Cj) ∈ A. L(C × C) = (Lij) ∈ K^(n×n) is a matrix of Mn(K). It is the line matrix of the map F which, to simplify, we denote L unless indicated otherwise.

4. A : C → V^N, Ci ↦ ai is the function that associates with each concept Ci the sequence of its activation degrees, such that for t ∈ N, ai(t) ∈ V is its activation degree at time t. We denote a(t) = [(ai(t))_{i∈[[1,n]]}]^T the vector of activations at time t.

5. fa ∈ (R^n)^N is a sequence of forced-activation vectors such that, for i ∈ [[1, n]] and t ≥ t0, fai(t) is the forced activation of concept Ci at time t.

6. R is a recurrence relation on t ≥ t0 between ai(t + 1), ai(t) and fai(t) for i ∈ [[1, n]], defining the dynamics of the map F:

∀i ∈ [[1, n]], ai(t0) = 0 ;  ∀i ∈ [[1, n]], ∀t ≥ t0, ai(t + 1) = σ ∘ g( fai(t), Σ_{j∈[[1,n]]} Lji aj(t) )

where g : R² → R is a function of R² to R, for example g(x, y) = min(x, y), max(x, y) or αx + βy, and where σ : R → V is a function of R to the set of activation degrees V, normalizing the activations as follows:

(a) In continuous mode, V = [−δ, 1] and σ is the sigmoid σ(δ,a0,k), centered at (a0, (1−δ)/2), with slope k(1+δ)/2 at a0 and limits at ±∞ of 1 and −δ respectively:

σ(δ,a0,k) : R → [−δ, 1], a ↦ (1 + δ) / (1 + e^(−k(a−a0))) − δ

(b) In binary mode, V = {0, 1} and σ : a ↦ 0 if σ(0,0.5,k)(a) ≤ 0.5, 1 if σ(0,0.5,k)(a) > 0.5.

(c) In ternary mode, V = {−1, 0, 1} and σ : a ↦ −1 if σ(1,0,k)(a) ≤ −0.5, 0 if −0.5 < σ(1,0,k)(a) ≤ 0.5, 1 if σ(1,0,k)(a) > 0.5.

Figure 3.7: Definition of a Fuzzy Cognitive Map
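The recurrence R above can be sketched in a few lines of Python. This is a minimal illustration, not part of the thesis: the names sigma_continuous and fcm_step are ours, g is taken as the sum αx + βy with α = β = 1, and the default parameters (δ = 0, a0 = 0.5, k = 5) are those of the flight example.

```python
import math

def sigma_continuous(a, delta=0.0, a0=0.5, k=5.0):
    """Sigmoid sigma_(delta, a0, k) : R -> [-delta, 1]."""
    return (1.0 + delta) / (1.0 + math.exp(-k * (a - a0))) - delta

def fcm_step(a, L, fa, sigma):
    """One cycle of the recurrence R, with g(x, y) = x + y:
    a_i(t+1) = sigma(fa_i(t) + sum_j L_ji * a_j(t))."""
    n = len(a)
    return [sigma(fa[i] + sum(L[j][i] * a[j] for j in range(n)))
            for i in range(n)]
```

Starting from a(t0) = 0 and iterating fcm_step reproduces the dynamics of item 6; the binary and ternary modes are obtained by thresholding the sigmoid as in (b) and (c).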
Perceptions
We make a distinction between feeling and perception: feeling results from sensors
only, perception is the feeling influenced by the internal state. A fuzzy cognitive map
[Figure: four panels. (a) The sensory map, with concepts C1 = enemy is close, C2 = enemy is far, C3 = fear and C4 = flee, and line matrix L = (0 0 +1 0 ; 0 0 −1 0 ; 0 0 0 +1 ; 0 0 0 0). (b) The perception map, which adds the lines fear → enemy is close (+λ), fear → enemy is far (−λ) and fear → fear (+γ): L = (0 0 +1 0 ; 0 0 −1 0 ; +λ −λ +γ +1 ; 0 0 0 0). (c) Degree of activation of enemy is close and enemy is far as a function of the distance to the enemy, with a transition between distances 20 and 80 over a range of 0 to 100. (d) Speed of flight as a function of the activation of flee.]
The sensory concepts in dashed lines are activated by the fuzzification of sensors. The motor concepts in dotted lines activate the effectors by defuzzification. In (a), concept C1 = enemy is close stimulates C3 = fear while C2 = enemy is far inhibits it, and fear stimulates C4 = flee. This defines a purely sensory map. In (b), the map is perceptive: fear can be self-maintained (memory) and can even influence feelings (perception). In (c), the fuzzification of the distance to an enemy yields two sensory concepts: enemy is close and enemy is far. In (d), the defuzzification of flee linearly regulates the flight speed.
Figure 3.8: Flight Behavior
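Panels (c) and (d) can be sketched in Python. This is an illustrative reading of the figure, not part of the thesis: the piecewise-linear membership with breakpoints at distances 20 and 80, and the unit maximum speed, are read off the plots.

```python
def fuzzify_distance(d, near=20.0, far=80.0):
    """Membership degrees of 'enemy is close' and 'enemy is far':
    fully close below `near`, fully far beyond `far`,
    linear transition in between."""
    if d <= near:
        close = 1.0
    elif d >= far:
        close = 0.0
    else:
        close = (far - d) / (far - near)
    return close, 1.0 - close

def defuzzify_flee(flee_activation, v_max=1.0):
    """Linear defuzzification: flight speed proportional to 'flee'."""
    return v_max * flee_activation
```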
makes it possible to model perception thanks to lines between internal concepts and sensory concepts. For example, we add three lines to the previous flight map (Figure 3.8b). A self-stimulating line (γ ≥ 0) on fear models stress. A second, stimulating line (λ ≥ 0) from fear to enemy is close and an inhibiting line (−λ ≤ 0) from fear to enemy is far model the phenomenon of paranoia: a given distance to the enemy will be perceived differently according to the activation of the fear concept. The agent then becomes perceptive to a degree set by its paranoia λ and its stress γ (Figure 3.9).
The perception of the distance from an enemy can be influenced by fear: according to the proximity of the enemy and to fear, the dynamics of the map determine a speed, obtained here at the 3rd cycle. In (a), λ = γ = 0: the agent is purely sensory and its perception of the distance to the enemy does not depend on its fear. In (b), λ = γ = 0.6: the agent is perceptive and its perception of the distance to an enemy is modified by its fear.
Figure 3.9: Flight Speed
3.4.2
Active Perception
Feeling of Movement
Neurophysiology associates the feeling of movement with the permanent and coordinated fusion of different sensors, such as myoarticular (or proprioceptive) sensors, vestibular receptors and cutaneous receptors [Lestienne 95]. Among the myoarticular receptors, certain muscle and articular sensors measure the relative movements of body segments in relation to each other; other articular sensors act as force sensors and underlie the sense of effort. The labyrinthine receptors are situated in the vestibular system of the inner ear. This system is a veritable gravito-inertial unit that is sensitive, on the one hand, to the angular accelerations of the head around three perpendicular axes (semicircular canals) and, on the other hand, to the linear accelerations of the head, including that of gravity (otoliths). Cutaneous receptors measure pressure variations and are responsible for the tactile sense (touch).
Peripheral vision also contributes to the feeling of movement because it provides information about relative movements within a visual scene. In certain circumstances, this visual contribution can conflict with the data received from the proprioceptive, vestibular and tactile receptors; this conflict can result in postural disturbances that may be accompanied by nausea. This is what can happen when you read a book in a moving vehicle: vision, absorbed by reading, tells the brain that the head is not moving, while the vestibular receptors detect the movement of the vehicle; this contradiction is the main physiological cause of motion sickness.
Internal Simulation of Movement
The feeling of movement thus corresponds to a multi-sensory fusion of different sensory information. But this multi-sensory information is itself combined with signals coming from the brain that drive the motor control of the muscles. According to the neurophysiologist Alain Berthoz,
neurophysiologist, perception is not only an interpretation of sensory messages, it is also
an internal simulation of action and an anticipation of the consequences of this simulated
action [Berthoz 97]. This is the case when a skier is going down a hill: he mentally goes
down the hill, predicting what will happen, at the same time that he is actually going
down the hill, intermittently checking the state of his sensors.
Internal simulation of movement is facilitated by a neuronal inhibition mechanism. For vision, the brain is able to imagine eye movements without performing them, thanks to inhibitory neurons that shut down the command circuit of the ocular muscles: by staring at a point in front of you and letting your attention travel, a sort of interior gaze if you will, you really feel as if your gaze is moving from one point of the room to another. This virtual movement of the eye is simulated by the brain by activating the same neurons as for a real movement; only the action of the motor neurons is inhibited.
Anticipation of movement is well illustrated by the Kohnstamm illusion, named after the physiologist who first studied this phenomenon. When a person is balancing a tray with a bottle on it, the brain adapts to this situation, in which the arm remains motionless through a constant effort of the muscles. If someone were to suddenly
remove the bottle, the tray would spring up all by itself. In fact, the brain continues to
apply this force until the muscle sensors signal a lifting movement. If the person carrying
the tray took the bottle off himself, the tray would not move: the brain would have
anticipated the movement and signaled for the muscles to relax in response to the change
in the weight of the tray.
The brain can therefore be considered a biological simulator that makes predictions by using its memory and forming hypotheses based on an internal model of the phenomenon. To understand the role of the brain in perception, we must begin with the goal pursued by the organism and then study how the brain communicates with the sensors, specifying estimated values according to an internal simulation of the anticipated consequences of action.
Perception of Self
An agent can also run a fuzzy cognitive map in an imaginary world and thereby simulate a behavior. Thus, with its own maps, it can reach a perception of itself by observing itself in this imaginary domain. If it knows the maps of another agent, it will have a perception of the other and will be able to mimic a behavior or set up a cooperation. Indeed, if an agent has a fuzzy cognitive map, it can use it in simulation by forcing the values of the sensory concepts and by making the motor concepts act in an imaginary world, a projection of the real one. Such an agent is able to predict its own behavior, or that of another agent, according to this map, by running several cycles of the map in its imaginary world or by determining an attractor of the map (fixed point or limit cycle in discrete mode).
Figure 3.10 illustrates this mechanism with the map of Figure 3.8b run for 20 cycles in fuzzy mode, with an initial stress vector equal to ξ on fear only for t ≤ t0, then null afterwards: fa(t ≤ t0) = (0, 0, ξ, 0)^T, fa(t > t0) = 0. With the same stress conditions, the binary mode converges towards a fixed point, either a(t > tf) = (0, 1, 0, 0)^T or a(t > tf) = (1, 0, 1, 1)^T: either the enemy is far, the agent is not scared and does not flee, or the enemy is close, the agent is scared and flees.
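This binary-mode convergence can be checked with a small simulation, sketched here in Python rather than in the thesis's own notation: the line matrix encodes the map of Figure 3.8b with λ = γ = 0.6, g is taken as the sum, and the binary σ reduces to a threshold at 0.5.

```python
# Concepts: C1 = enemy is close, C2 = enemy is far, C3 = fear, C4 = flee.
LAMBDA = GAMMA = 0.6
L = [
    [0.0,     0.0,     1.0,   0.0],  # enemy is close -> fear (+1)
    [0.0,     0.0,    -1.0,   0.0],  # enemy is far   -> fear (-1)
    [LAMBDA, -LAMBDA,  GAMMA, 1.0],  # fear -> close (+lambda), far (-lambda), itself (+gamma), flee (+1)
    [0.0,     0.0,     0.0,   0.0],
]

def step(a, fa):
    """Binary mode: a_i(t+1) = 1 if fa_i + sum_j L_ji * a_j > 0.5, else 0."""
    n = len(a)
    return [1 if fa[i] + sum(L[j][i] * a[j] for j in range(n)) > 0.5 else 0
            for i in range(n)]

a = [0, 0, 0, 0]            # a(t0) = 0
a = step(a, [0, 0, 1, 0])   # initial stress xi = 1 forced on fear at t0
for _ in range(20):         # then no forced activation
    a = step(a, [0, 0, 0, 0])
# a stabilizes on the fixed point (1, 0, 1, 1):
# the enemy is perceived as close, the agent is scared and flees.
```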
In addition, if an agent knows the cognitive maps of another agent, it can anticipate the behavior of that agent by simulating a scenario, running the cycles of those maps in its imaginary world. This technique opens the door to genuine cooperation between agents, each one being able to represent the behavior of the other. Hence, fuzzy cognitive maps make it possible to specify the perceptive behavior of an agent (graph structure), to control the agent's movement (map dynamics) and to simulate movement internally (a simulation within the simulation). We can now implement them within the framework of an interactive fiction.
3.4.3
The Shepherd, His Dog and His Herd
The majority of works in virtual reality involve the multi-sensory immersion of the user in virtual universes; but these universes, however realistic, will lose credibility as long as they are not populated with autonomous actors. The first works on modeling virtual actors focused on the physical behavior of avatars [Magnenat-Thalmann 91] and made it possible, for example, to study workstation ergonomics (Jack [Badler 93]). Another
Simulation and fixed point of flight speed with λ = γ = 0.6. In (a), the mode is continuous and the behavior is obtained at the end of the map's twentieth cycle. In (b), the mode is binary and the vector of activations of the map stabilizes on a fixed point: either all of the concepts are at 1 except enemy is far, which is at 0, and the speed is at 1, or the opposite and the speed is at 0. We can observe that if the agent is scared at the time of the simulation, no matter what the distance from the enemy, it perceives the enemy as if it were really close.
Figure 3.10: Perception of Self and Others
category of works dealt with the problem of interaction between a virtual actor and
a human operator: for example, companion agent (ALIVE [Maes 95]), training agent
(Steve [Rickel 99]) and animator agent [Noma 00]. Finally, others became interested in the
interaction between virtual actors; in such virtual theaters, the scenario can be controlled
at an overall level (Oz [Bates 92]), depend on behavior scripts (Improv [Perlin 95]) or be
based on a game of improvisations (Virtual Theater Project [Hayes-Roth 96]).
We propose here to use fuzzy cognitive maps to characterize believable agent roles
in interactive fictions. We will illustrate this through a story about a mountain pasture.
Once upon a time there was a shepherd, his dog and his herd of sheep... This example has already been used as a metaphor for complex collective behavior within a group of mobile robots (RoboShepherd [Schultz 96]), as a robotic sheepdog herding real geese (Sheepdog Robot [Vaughan 00]), and as a subject for improvised scenes (Black Sheep [Klesen 00]).
The shepherd moves around in his pasture and can talk to his dog and give him
information. He wants to round up his sheep in an area which he chooses based on needs.
By default, he stays seated: the shepherd is an avatar of a human actor that makes all of
the decisions in his place.
Each sheep can distinguish an enemy (a dog or a man) from another sheep and from
edible grass. It evaluates the distance and the relative direction (left or right) from an
agent that it sees in its field of vision. It knows how to identify the closest enemy. It
knows how to turn to the left or to the right and run without exceeding a certain maximum
speed. It has an energy reserve that it regenerates by eating and spends by running. By
default, it moves straight ahead and ends up wearing itself out. We want the sheep to
eat grass (random response), be afraid of dogs and humans when they are too close,
and in accordance with the Panurge instinct, to try and socialize. So, we chose a main
map (Figure 3.11) containing all of the sensory concepts (enemy is close, enemy is far,
high energy, low energy), motor concepts (eat, socialize, flee, run) and internal concepts
(satisfaction, fear). This map calculates the moving speed by defuzzification of the run
56
Conclusion
concept, and the direction of movement by defuzzification of the three concepts eat, socialize and flee; each activation corresponds to a weight on the relative direction (to the left or to the right) to be followed in order, respectively, to go towards the grass, join another sheep or flee from an enemy.
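The thesis does not give the weighting formula explicitly; one hypothetical reading can be sketched in Python, where each motor activation weighs the relative direction of its goal (the function choose_direction and all of its numbers are illustrative, not taken from the text):

```python
def choose_direction(goals):
    """Weighted choice of a relative direction (positive = left, negative = right).
    `goals` maps each motor concept (eat, socialize, flee) to a pair
    (activation, relative direction of the corresponding target)."""
    total = sum(activation for activation, _ in goals.values())
    if total == 0.0:
        return 0.0                       # by default, keep going straight ahead
    return sum(activation * direction    # each activation weighs its direction
               for activation, direction in goals.values()) / total

# A sheep that mostly wants to flee turns mainly towards its flight direction.
d = choose_direction({
    "eat":       (0.2,  0.3),   # grass slightly to the left
    "socialize": (0.1, -0.2),   # another sheep slightly to the right
    "flee":      (0.9,  1.0),   # flight direction strongly to the left
})
```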
[Figure: (a) the three fuzzy cognitive maps of a sheep. Two direction maps link the perceived directions of friends and enemies (friend left, friend right, enemy to the left, enemy to the right, enemy in the middle) to the motor concepts turn left and turn right, for socialization and for flight. The main map links the sensory concepts enemy is close, enemy is far, high energy and low energy, through the internal concepts fear and satisfaction, to the motor concepts eat, socialize, flee and run. (b) Trace of a simulation with the shepherd and sheep a, b and c; successive positions are numbered along the paths.]
In (a), the fuzzy cognitive maps define the role of a sheep. The two maps at the top determine its direction towards a goal; the one at the bottom gives the sheep its character. In (b), the sheep try to herd together while avoiding the shepherd. The 3 sheep act by leaving dated footprints on their paths.
Figure 3.11: Fear and Socialization
The dog is able to identify humans, sheep, the pasture area and the watchpoint. It
distinguishes by sight and sound its shepherd from another man and knows how to spot
the sheep that is the farthest away from the area among a group of sheep. It knows how
to turn to the left and to the right and run up to a maximum speed. Initially young and full of energy, its behavior consists of running after the sheep, which quickly scatters them (Figure 3.12a). First, the shepherd wants the dog to obey the order not to move, which will let the sheep socialize. This is done by giving the dog a map that is sensitive to the shepherd's message and inhibits the dog's desire to run (Figure 3.12b). The dog's behavior is then decided by this map, and the dog stays still when its master asks it to (broadcast of the message stop). Then, the shepherd gives the dog a map structured around the concepts associated with the herd's area, for example whether a sheep is outside or inside the area. These concepts make it possible for the dog to bring a sheep back (Figure 3.12c,d,e) and to keep the herd in the area by positioning itself at the watchpoint, in
other words, on the perimeter and diametrically opposite the shepherd. It is remarkable that the S-shaped path of the virtual sheepdog, which emerges from this in virtuo simulation (Figure 3.12c) without being explicitly programmed, is an adapted strategy that can also be observed in vivo in a real sheepdog herding a group of sheep.
3.5
Conclusion
The models that we develop in virtual reality must follow the principle of autonomy
of the virtual objects that make up virtual universes.
[Figure: panels (a) and (b). (a) Trace of the shepherd, the dog and sheep a, b and c, with the grazing area and its watchpoint; successive positions are numbered along the paths. (b) A minimal map in which the concept stop inhibits (−1) the concept run.]
In (a), the shepherd is immobile. A circular area represents the grazing area. A watchpoint, always located diametrically opposite the shepherd, is attached to the area: this is where the sheepdog must be when all of the sheep are inside the area. Initially, the behavior of the dog without a fuzzy cognitive map is to run after the sheep, which quickly scatter and leave the gathering area. In (b), a simple map depicts the obedience of the dog to the stop message of its shepherd by inhibiting its desire to run.
[Figure: panels (c), (d) and (e). (c) Trace of the dog bringing back sheep a, b and c to the area, passing by the watchpoint; successive positions are numbered along the paths. (d) Main map of the dog, linking the sensory concepts stop, sheep is close, sheep is far, sheep in, sheep out, sheep close to area, sheep far from area, watchpoint is close, watchpoint is far, small area and large area to the motor concepts run, follow slowly, return, guard and wait. (e) Map deciding the approach angle, linking the concepts sheep to the left, sheep to the right, area to the left and area to the right to the motor concepts turn left and turn right.]
For this in virtuo simulation, the socialization of the sheep is inhibited. In
(c), the film of the sheepdog bringing
back 3 sheep unfolds with their maps
of Figure 3.11 for the sheep and the
maps represented in (d) and (e) for the
dog. In (d), the main map of the dog
depicts the role consisting of bringing
back a sheep to the area and maintaining a position near the shepherd when
all of the sheep are in the desired area.
In (e), the map decides the approach
angle of the sheep to bring it back to
the area: go towards the sheep, but approach it from the opposite side of the
area.
Figure 3.12: The Sheepdog and Its Sheep
We have chosen to design these virtual worlds as multi-agent systems that consider
the human user as a participant, and not simply as an observer. The evolution of these
systems gives rise to multi-agent simulations in which the human user fully participates by
playing his role as a spectator, an actor and a creator. Participatory simulation of virtual
universes composed of autonomous entities then allows us to first research phenomena
that have been made complex by the diversity of components, structures, interactions,
and models. The entities are modeled by autonomizing all types of behavioral models, whether reactive, emotional, intentional, developmental or social. The behavior of the whole set of entities then reveals the system's self-organization capabilities and is one of the expected results of such simulations. By participating in these simulations, the human operator can, according to a principle of substitution, take control of an entity and thus, by a sort of digital empathy, identify with it. In return, the controlled entity can learn from the human operator behaviors that are better adapted to its environment.
The study of blood coagulation through this multi-agent approach implemented several thousand autonomous entities with reactive behavior. The simulations highlight the self-organizing capability of blood cells and proteins to fill in a hole created by the user-troublemaker, at any time and anywhere in the virtual vein. The user-observer, upon observing a coagulation abnormality (hemophilia or thrombosis), becomes a user-pharmacist by adding new molecules during the simulation, thus allowing the user-biologist to test new reagents at lower cost and in complete safety. Within this kind of virtual laboratory, the biologist conducts real in virtuo experiments that stem from lived experimental practice and not from the mere interpretation of in silico calculations. The biologist thus has a new experimental process for understanding biological phenomena.
Writing an interactive fiction by modeling the behaviors of virtual actors with the help of fuzzy cognitive maps demonstrates the triple role of these maps. First, they make it possible to specify a perceptive behavior by defining sensory, motor and emotional concepts as the nodes of a directed graph, whose arcs translate the excitatory or inhibitory relationships between the associated concepts. Second, thanks to their dynamics, they make it possible to control the movement of the actors during execution. Finally, they open the way to an active perception when they are simulated by the agents themselves; a simulation within the simulation, this active perception, like a kind of inner gaze, takes place through a self-evaluation of the agent's behavior and is an outline of self-perception.
In the end, these autonomous behavior models must be implemented and executed on
a computer. Therefore, we must ask ourselves about the language that is used and the
associated simulator. This is what we will discuss next in Chapter 4.
Chapter 4
Tools
4.1
Introduction
This chapter introduces the generic tools that we developed in order to implement the models presented in Chapter 3. These tools are based on the oRis¹ development platform (language + simulator), which was designed to meet the needs of our participatory multi-agent simulations and is the core of ARéVi, our distributed virtual reality workshop.
The first part of this chapter will focus on the oRis language, a generic object-based concurrent programming language that is dynamically interpreted and has instance granularity.
Next, we will concentrate on the associated simulator in order to show that it was built so that the activation procedure for autonomous entities does not introduce into the simulation any bias for which the execution platform alone would be responsible. In this environment, robustness to execution errors will prove to be a particularly significant point, especially since the simulator provides on-line access to the entire language.
Finally, we will situate our oRis platform among participatory multi-agent simulation environments and present its first applications.
4.2
The oRis Language
When to say something is to do something. [Austin 62]
In order for an operator to easily play these three roles (spectator, actor, creator) while respecting the autonomy of the agents with whom he cohabits, he must have a language in which he can question the models, modify them, and possibly create new classes of models and instantiate them. While the simulation is running, he therefore has the same power of expression as the creator of the initial model. The language must then be dynamically
¹ The name oRis was freely inspired by the Latin suffix -oris, found in the plural of words related to language, such as cantatio → song, cantoris → singers (those who sing), or oratio → speech, oratoris → speakers (those who speak). We kept the triple symbolism of language, plural and act: multi-agent language.
interpreted: new code must be able to be introduced (and therefore interpreted) during execution. The user may want to interact with or modify only a single instance of a model. The language must therefore have instance granularity, which requires a naming service and, most importantly, the ability to syntactically distinguish the global context code from class and instance codes.
Participatory multi-agent simulation therefore needs an implementation language that incorporates, of course, the paradigm of object-based concurrent programming (the interest of this paradigm for multi-agent systems is not new [Gasser 92]), but that also provides additional properties: it must be dynamically interpreted and have instance granularity.
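By way of analogy only, since oRis provides this natively, the idea of instance granularity — attaching new code to one live instance while the other instances of the class keep their original behavior — can be sketched in Python (the class Agent and its methods are purely illustrative):

```python
import types

class Agent:
    def behave(self):
        return "default behavior"

a, b = Agent(), Agent()

# Redefine the behavior of instance `a` alone, at run time,
# as a user could do on-line in a running simulation.
def cautious(self):
    return "cautious behavior"

a.behave = types.MethodType(cautious, a)

assert a.behave() == "cautious behavior"   # only this instance changed
assert b.behave() == "default behavior"    # the class is untouched
```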
4.2.1
General View
oRis is an interpreted, strongly typed, object-oriented language. It has many similarities with the C++ and Java languages, which makes it easier to learn. Just like these general-purpose languages, oRis makes it possible to tackle varied application domains; when components developed with these languages are integrated, the logical architecture of the applications remains homogeneous, which facilitates the reuse of third-party components and the extensibility of the oRis platform. Figure 4.1 provides an illustration of a very basic, yet complete, program, which defines a class and launches several processes (start blocks). We can see that the class describes active objects whose behaviors run concurrently with the processes initiated in the other local contexts.
oRis integrates a garbage collector. It is possible to choose which instances are to be destroyed automatically (a decision that can be revoked dynamically); the explicit destruction of an instance with the delete operator remains possible in all cases. A complementary mechanism makes it possible to know whether a reference is still valid.
The interface with the C++ language relies on libraries that can be loaded on demand. Not only does oRis make it possible to associate functions or methods with a C++ implementation, but it pushes the object approach further by allowing an instance of the interpreted language to be directly backed by an instance described in C++. Although oRis is implemented in C++, it is possible to integrate work done in other languages. A package ensures that oRis code can interface with SWI-Prolog² code: a function makes it possible to call the Prolog interpreter from oRis, and an oRis method can be called from Prolog. oRis was also interfaced with Java in such a way that any Java class, whether one of the standard classes or part of a personal project, can be used directly from oRis. There is no need to conform to the script language's calling conventions, as is often the case when loading from C++.
Many standard packages are provided with oRis. These mainly include services that are quite routine but nevertheless essential for the tool to be really usable in varied conditions. These packages cover topics such as data conversion, mathematics, mutual exclusion semaphores, reflex connections, generic containers, graphical interface components, curve plotting, objects situated in the plane or in space, reflection
² SWI-Prolog: www.swi.psy.uva.nl/projects/SWI-Prolog
class MyClass                            // define the class MyClass
{
  string _txt;                           // an attribute
  void new(string txt) { _txt=txt; }     // constructor
  void delete(void) {}                   // destructor
  void main(void)                        // behavior
  { println("I say: ",_txt); }
}

start                                    // start a process in the application
{
  println("---- block start ----");
  MyClass i=new MyClass("Hello");        // create active objects
  MyClass j=new MyClass("World");
  println("---- block end ----");
}

start                                    // start another process
{
  for(int i=0;i<100;i++)
  {
    println("doing something else !");
    yield();
  }
}

doing something else !
---- block start ----
---- block end ----
I say: World
doing something else !
I say: Hello
I say: Hello
I say: World
doing something else !
I say: Hello
I say: World
doing something else !
...

Figure 4.1: Execution of a Simple Program in oRis
tools, graphics inspectors, communication by files, sockets and IPC, remote control of sessions, interfacing with Java and Prolog, and fuzzy controllers.
Another aspect of oRis is the user's graphical environment. When oRis is launched, it displays by default a graphics console that makes it possible to load, launch and suspend applications. As shown in Figure 4.2, the console (upper left) offers a text editor, one of the many ways to work on-line. The other windows presented here are simple application objects: a view of the 3D world, a curve plotter and inspectors. These objects, like all the others, are directly accessible from the console, but can of course be created and controlled by the program.
Figure 4.2: oRis Graphics Environment
4.2.2
Objects and Agents
Diverse Entities
oRis agents are characterized by properties (attributes), know-how (methods), declarative knowledge (Prolog clauses), a message box, activities (execution flows) and objectives. The associated mechanisms are of course available: reading and modification of attributes, method calls, inference engine, message consultation and delivery, description and activation modes for execution flows, as well as objective evaluation³.
But a multi-agent system is not made up only of agents. The agents' environment also contains passive objects (which do nothing as long as no one interacts with them) and active objects (which act without being called upon and generate events). Agents create, modify and destroy objects such as messages and knowledge which, even if at a certain level they can be designed as agents, are implemented at the lowest level as objects.
Likewise, not all interactions between entities take place as language acts interpreted asynchronously by their recipient, or as traces left in the environment. When an agent perceives the characteristics of objects or of other agents (the geographical position of an agent, the content of a message, etc.), an attribute of the object in question is read. Symmetrically, modifying the environment amounts to modifying an attribute. In addition, an agent's behavior can result from an unintentional action. Let us imagine two people who are talking in the street and are being bumped into by passersby: they each
3
Currently, an external Prolog interpreter is responsible for managing declarative knowledge, objective
management is being outsourced to a new application.
64
The oRis Language
make decisions about the messages that they are exchanging based on their own interpretations, but they endure being jostled by passersby, even though it is not their intention
or decision (when we are being bumped into, we don’t have a choice!). As a result, in a
similar application, it is possible to use entities of a different nature:
. objects and active objects (according to UML semantics [Rumbaugh 99]);
. actors as suggested by [Hewitt 73]: an actor is an active entity that plays a role by responding according to a script; its behavior is expressed by message deliveries;
. agents, which we can define as autonomous entities (unlike the actor model, there is no script) whose behavior is based on the perception of the environment and the actions performed; these actions may be guided by goals (pro-activity).
Fundamentally, implementing these different types of entities rests on the notions of object (a class instance), concurrency, and autonomy, which at this level is founded on the notion of active object (an object that has its own execution flows). The pro-activity of agents rests on the notions of goals, knowledge, action plans and an inference mechanism. Deductive logic programming responds well to this need; in oRis, these elements can be implemented in Prolog, with which the language is interfaced.
Spatial Environment
Whether because the simulated system has its own geometric component, or because a spatial metaphor improves the intelligibility of the model, multi-agent simulation may need agents placed (locatable and detectable) in a two- or three-dimensional geometric environment. Introducing physical dimensions into the environment enriches the semantics of fundamental notions of multi-agent systems such as:
. location: it is no longer just logical (i.e. conditioned by the organization of the multi-agent system), but also spatial (distance between two objects) or topological (one object is inside another, an object is attached to another);
. perception: it is no longer based solely on the identification of an instance and its attributes (applied semantics), but also on spatial locality or on the detection of 2D contours or 3D faces; the agents must therefore have appropriate sensors with a certain field of perception;
. interaction: we can now speak of notions such as collision, object attachment, etc.
oRis offers 2D and 3D environments in which objects are defined by their position, their orientation and their geometric shape. The functionalities of these objects, whether Object2d or Object3d, are very similar. oRis also offers tools that allow the user to immerse himself in the multi-agent system. The user, like any agent, has a local view of the environment, can be perceived by other agents, and can interact physically with them by using an appropriate device to produce events that they can perceive.
Figure 4.3 illustrates object localization in the plane and the notion of field of perception. The methods of perception based on this principle concern objects placed at points (the origin of their local reference frame). The field of perception is defined by an aperture angle and a range. Within this area, the detected objects are localized by polar coordinates in the local reference frame of the perceiving entity, which lends itself naturally to local perception ("on my left", "straight ahead"). These methods of perception let you choose the type of objects to be perceived and allow two variants: detection of the closest object or of all visible objects. A ray-shooting method detects more finely the contours or faces of those objects whose shape is chosen from a group of primitives. Objects can be hierarchically interconnected so that moving one object moves every object attached to it.
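The perception scheme described above can be sketched in Python (an illustrative sketch, not oRis code; the names Entity and perceive are hypothetical): an observer detects the objects that fall within an aperture angle and a range, and localizes them by polar coordinates in its own local frame, with a closest-object variant.

```python
import math

# Illustrative sketch (not oRis code) of a 2D field of perception:
# aperture angle + range, results in local polar coordinates.

class Entity:
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading  # heading in radians

def perceive(observer, others, aperture, rng, closest_only=False):
    """Return (distance, bearing, object) triples for objects inside the
    field of perception; bearing is measured in the observer's local frame."""
    seen = []
    for obj in others:
        dx, dy = obj.x - observer.x, obj.y - observer.y
        dist = math.hypot(dx, dy)
        # bearing relative to the observer's orientation, normalized to (-pi, pi]
        bearing = math.atan2(dy, dx) - observer.heading
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))
        if dist <= rng and abs(bearing) <= aperture / 2:
            seen.append((dist, bearing, obj))
    seen.sort(key=lambda t: t[0])       # closest first
    return seen[:1] if closest_only else seen

me = Entity(0, 0, 0)                    # looking along the X axis
targets = [Entity(2, 0, 0), Entity(0, 3, 0), Entity(9, 0, 0)]
hits = perceive(me, targets, aperture=math.pi / 2, rng=5)
# only the object straight ahead falls inside the 90-degree, range-5 field
```

A bearing of 0 means "straight ahead" and a positive bearing "on my left", which is exactly the kind of local answer the text describes.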
[Figure: placed objects, with their orientation and field of perception, in the global X–Y reference frame]

Figure 4.3: Localization and Perception in 2D
To go beyond planar problems, oRis offers a three-dimensional geometric environment. It reuses the same principles as the two-dimensional environment, while adding further geometric parameters (six degrees of freedom). The geometric representation (in OpenGL⁴) of an entity is described as a set of points defining faces, which can be either colored or textured. Basic volumes and transformation primitives are also available; a complex shape is thus constructed by combining other shapes.
Temporal Environment
Time is also an environment variable that conditions the behavior of agents and enables them to situate their actions and interactions. oRis offers basic functionalities to do this.
The time-management modes traditionally used in simulation rest, at the lowest level, either on logical time (event-driven simulation) or on physical time [Fujimoto 98]. For this reason, oRis provides two ways to measure time. The getTime() function measures physical time in milliseconds, providing the user, for example, with a feeling of real-time. The getClock() function returns the number of execution cycles that have elapsed, which can be interpreted as a logical time. Note that the meaning of this value depends on the type of multi-tasking used and, more generally, on the simulation's application context.
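The two measures can be sketched in Python (an illustrative sketch, not the oRis implementation; the Simulator class and its method names are hypothetical stand-ins for getTime() and getClock()):

```python
import time

# Illustrative sketch (not oRis code) of the two time measures:
# get_time() returns elapsed wall-clock milliseconds (physical time),
# get_clock() returns the number of execution cycles (logical time).
class Simulator:
    def __init__(self):
        self._t0 = time.monotonic()
        self._cycles = 0

    def get_time(self):
        # physical time elapsed since launch, in milliseconds
        return int((time.monotonic() - self._t0) * 1000)

    def get_clock(self):
        # logical time: number of completed execution cycles
        return self._cycles

    def run_cycle(self):
        # ... run every activity once (one execution cycle) ...
        self._cycles += 1

sim = Simulator()
for _ in range(3):
    sim.run_cycle()
# sim.get_clock() is exactly 3; sim.get_time() depends on the host machine
```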
⁴ OpenGL: www.opengl.org
User Environment
In order for the user to be more than just a spectator of the evolution of a multi-agent system, he needs a language with dynamic properties, so that new portions of code can be introduced at any time and under various circumstances. The most immediate way to intervene is to launch new processes that change the multi-agent system's natural course. Incremental construction of the system requires being able to complete the running application by adding new notions to it, in particular new classes. Modifications can also involve isolated instances only. All of these on-line modifications allow the user to treat the very models that he is using as application parameters.
To make these operations run smoothly, the language's grammar must make it easy to tell whether a change involves the overall context of the application, that of a class, or that of an instance. Thanks to this contextual information, the user has an interpreter with a single entry point, which can be fed by a stream whose origin matters little (network, input field, file, automatic generation, etc.). Thus, expressing these changes does not force the user to use a specific tool (a graphics inspector, for example), even if such tools are available. On the contrary, it allows the finished application to provide access to whichever dynamic functionalities suit it best.
Our objective of providing on-line modifications is motivated by the fact that, in virtual reality, life must go on no matter what. So, whatever happens, and whatever the user may have done, we want the model to continue to run even if errors occur. It is therefore necessary to reduce the potential number of errors in the application as much as possible. This point fully justifies the choice of strong typing: if type checks take place as soon as the code is introduced, inconsistencies can be detected, which makes it possible to reject the offending code before it can cause real errors in the application. Many other errors can arise during execution, and they must in no way cause the application to quit. To avoid having to interrupt the entire application in order to debug it, we prefer to interrupt only the activity that produced the error. The other processes then continue to run in parallel while the user takes care of the faulty one.
To make it easier for the user to interact with the application and its components, oRis offers several simple, immediately usable mechanisms related to introspection. Other, more conventional reflection services are part of a specific package. The first way to ease interaction involves naming objects and expressing references. Each instance that is created automatically receives a readable name that makes it unique. It is made up of the name of the instance's class, followed by a period and an integer that distinguishes it from the other instances of the same class (MyClass.5, for example). This name is not just a convenient feature, but an integral part of the language's lexical conventions; it is a constant of reference type.
A group of services lets you enumerate all of the existing classes, or all of the instances of a particular class. It is also possible to ask an object for its class, or to verify whether it is an instance of a particular class or of a derived class.
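The naming scheme can be sketched in Python (illustrative only, not the oRis implementation; the Named base class is a hypothetical stand-in for the mechanism built into the language):

```python
# Illustrative sketch (not oRis code) of the ClassName.N naming scheme:
# each new instance receives a unique, readable name built from its class
# name, a period, and a per-class counter.
class Named:
    _counters = {}      # class name -> number of instances created so far

    def __init__(self):
        cls = type(self).__name__
        n = Named._counters.get(cls, 0) + 1
        Named._counters[cls] = n
        self.name = f"{cls}.{n}"

class MyClass(Named):
    pass

a, b = MyClass(), MyClass()
# a.name == "MyClass.1", b.name == "MyClass.2"
```

In oRis the name goes further than this sketch: it is part of the lexical conventions, so MyClass.1 can be written directly in source code as a reference constant.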
4.2.3
Interactions between Agents
The interactions between an agent and its environment take place through the manipulation of the structural characteristics of the objects that compose it. Concretely, the attributes of the targeted objects are read and modified; modifications can also be made by calling a method (e.g. calling the move() method of an object that a placed agent is about to collide with). These interactions can also instantiate or destroy objects (by calling the new and delete operators). In any case, the reaction of the environment's objects is imperative and synchronous.
Interactions between agents can be direct (as in a collision) or mediated (as in an exchange of messages). To support them, the oRis language offers four mechanisms that can be used simultaneously when implementing an agent. These services are provided by the Object class and correspond respectively to synchronous method calls, reflex connections, point-to-point message delivery, and broadcasting messages into the universe of agents.
Synchronous Calls
Even though the object model uses the concept of message delivery to describe communication between objects, in practice languages use a synchronous method-call mechanism. Calling a method on an object does not create a new execution flow for the instance in question, but simply directs the activity of the calling object towards a procedure that manipulates the data of the designated object. In other words, it is the calling object that does all of the computation; the object concerned only provides its data and operating procedures, but does not actively participate in the process.
The advantage of imperative programming through synchronous calls is that execution is efficient and that actions are sequenced: the code that follows the call can count on the fact that the requested processing has definitely been completed, since it was carried out by the same execution flow. Some agent behaviors, such as the perception of others, can be effectively programmed this way. Note also that a method call can take place in a start{} block and hence be run by an activity flow different from the calling flow.
As indicated in UML, the C++ and Java languages provide a way to specify access control (public, protected or private) for the attributes and methods of object classes. For efficiency reasons, this control takes place at compile time. With agents, however, these semantics are not sufficient. In fact, two instances of the same class can directly modify each other's private parts, which violates the principle of autonomy if the target is a method that should only be executed under the control of the agent for which it was meant.
In order to address this situation, oRis provides a way to restrict access to methods (or even to functions). While this designates the object on which the current method is called, the keyword that designates the object that called this method on the object designated by this. By checking the identity of that at the beginning of a method, it is possible to control access to the corresponding service. We can thus easily ensure that certain services are directly accessible only to the entities concerned. The verifications can of course follow the standard class-based schema, but can also rest upon modalities that are specific to the application and on conditions that vary over time. Notice that that is the counterpart of the sender field of an asynchronous message, which allows a receiving agent to apply the same reasoning whatever the communication medium (synchronous method call or asynchronous message delivery). Access to that relies internally on the fact that, in oRis, it is possible to inspect the execution stack of the running process. We can, for example, allow a method to be invoked only from certain other methods.
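The idea can be sketched in Python (illustrative, not oRis code; here the caller that is passed explicitly as an argument, whereas oRis recovers it from the execution stack, and the group attribute is a hypothetical organizational criterion):

```python
# Illustrative sketch (not oRis code): checking the caller's identity at
# the beginning of a method, in the spirit of the oRis "that" keyword.
class Agent:
    def __init__(self, group):
        self.group = group

    def protected_service(self, that):
        # refuse callers outside our organizational group
        if that.group != self.group:
            raise PermissionError("caller not authorized")
        return "service rendered"

a, b, c = Agent("red"), Agent("red"), Agent("blue")
assert a.protected_service(that=b) == "service rendered"
try:
    a.protected_service(that=c)      # refused: c belongs to another group
    ok = False
except PermissionError:
    ok = True
```

The check runs at each call, which is what makes these access rules dynamic: the condition can depend on the application state rather than on a class hierarchy fixed at compile time.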
Even though execution speed may suffer, these dynamic access controls open up several possibilities in multi-agent system programming. Interaction rules can be established within organizational structures and made dynamic. Hence, it is possible to give an agent the means to refuse to execute a service because the requestor has no authority over it in the organizational structure, or because the service has been requested, even indirectly, by an agent that was uncooperative during a previous request.
Reflex Connections
oRis provides an event-driven programming mechanism called reflex connections, which allows an agent to react to the modification of an object attribute, whether it is one of its own attributes (internal state of the agent, mailbox, etc.) or an attribute of another object (the case of an agent that watches an environment object or a perceptible characteristic of another agent).
A reflex connection is an object attached to an attribute; each time this attribute is modified, some of the connection's methods are automatically launched. Thus, when an attribute associated with a reflex connection is modified, the connection is activated, which automatically executes its before() method just before the modification takes place, and its after() method just after it. Any number of reflex connections can be associated with the same attribute of the same instance.
This mechanism can be compared to a communication method (in a reactive context), since the launching of a process that signals the modification of an attribute can be seen as a way of being informed of an event.
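The mechanism can be sketched in Python (illustrative, not the oRis implementation; apart from before() and after(), all class and attribute names are hypothetical):

```python
# Illustrative sketch (not oRis code) of a reflex connection: an object
# attached to an attribute, whose before()/after() hooks run around each
# modification of that attribute.
class ReflexConnection:
    def before(self, obj, old, new): pass
    def after(self, obj, old, new): pass

class LogConnection(ReflexConnection):
    def __init__(self): self.log = []
    def before(self, obj, old, new): self.log.append(("before", old))
    def after(self, obj, old, new): self.log.append(("after", new))

class Watched:
    def __init__(self, value):
        self._connections = []
        self.__dict__["value"] = value   # set without triggering hooks

    def __setattr__(self, name, new):
        if name == "value":
            old = self.__dict__["value"]
            for c in self._connections: c.before(self, old, new)
            self.__dict__["value"] = new
            for c in self._connections: c.after(self, old, new)
        else:
            self.__dict__[name] = new

w = Watched(0)
conn = LogConnection()
w._connections.append(conn)
w.value = 7
# conn.log == [("before", 0), ("after", 7)]
```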
Communication by Messages
In accordance with the object model, which is conceptually based on interaction between instances through message delivery, this mechanism is defined in oRis in the basic Object class. It also uses the services of the basic Message class, both for sending point-to-point messages and for broadcasting them.
To send a point-to-point message, an agent instantiates an object of a class derived from Message (whose only attribute is the reference of the sender, determined through the keyword that). Calling its sendTo() method places it in the mailbox (a first-in-first-out queue) of the recipient whose reference is given as an argument. In order to process the received message, the recipient must consult its mailbox. The recipient can be informed that a message has arrived by a reflex method that is launched automatically; for example, it is thus possible to relaunch the execution of a message-reading activity that had been suspended in order to avoid any active wait state. The type of communication described here is point-to-point in the sense that the sender sends its message to a recipient that it knows and has explicitly designated. The procedure is asynchronous since the sender pursues its activities without knowing exactly when its message will be processed by the recipient.
By extending the object model, an agent can simultaneously send a message to objects (agents) that it does not know; upon receiving the message, these objects will launch some action that is unknown to the sender. The messages used are exactly the same as those mentioned previously; the difference lies in how they are sent and received. This time the sender calls the broadcast() method of the message so that it is sent to all of the instances that may be concerned. These are determined through the setSensitivity() method of the Object class. Calling this method requires two arguments that respectively indicate the type of broadcast message to which the instance becomes sensitive (the class name), and the name of the method that must be launched when such a message is received (e.g. setSensitivity("WhoAreYou", "presentMySelf")). When a message is broadcast, the specified method (presentMySelf) is immediately called on each object that is sensitive to this type of message (the WhoAreYou class). This reflex mechanism works well for messages that are seen as events and call for an immediate reaction from the objects that intercept them. When this is not the case, messages received by broadcast can instead be placed in the recipient's mailbox and be processed as if they had been sent point-to-point, which homogenizes the reaction mode of the receiving agent. The same object can be sensitive to several types of messages, and a mechanism recalling the resolution of polymorphic links governs the choice of the reflex method to be launched. The sensitivity to the different types of messages can evolve over time, in order to change reactions or to become desensitized to certain types of messages.
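Both mechanisms can be sketched in Python (illustrative, not the oRis implementation; the method names mirror sendTo(), broadcast() and setSensitivity(), but the classes shown here are hypothetical):

```python
# Illustrative sketch (not oRis code) of the two message mechanisms:
# point-to-point delivery into a FIFO mailbox, and broadcasting to every
# instance that declared itself sensitive to a message class.
from collections import deque

class Message:
    def __init__(self, sender): self.sender = sender

class WhoAreYou(Message): pass

class Agent:
    instances = []                       # the "universe" of agents

    def __init__(self, name):
        self.name = name
        self.mailbox = deque()           # FIFO queue
        self.sensitivity = {}            # message class name -> method name
        self.replies = []
        Agent.instances.append(self)

    def send_to(self, message, recipient):
        recipient.mailbox.append(message)        # asynchronous delivery

    def set_sensitivity(self, msg_class_name, method_name):
        self.sensitivity[msg_class_name] = method_name

    def broadcast(self, message):
        # immediately call the reflex method on every sensitive instance
        for obj in Agent.instances:
            method = obj.sensitivity.get(type(message).__name__)
            if method:
                getattr(obj, method)(message)

    def present_myself(self, message):
        self.replies.append(self.name)

a, b, c = Agent("a"), Agent("b"), Agent("c")
b.set_sensitivity("WhoAreYou", "present_myself")
c.set_sensitivity("WhoAreYou", "present_myself")
a.broadcast(WhoAreYou(sender=a))         # b and c react immediately
a.send_to(Message(sender=a), b)          # lands in b's mailbox
# len(b.mailbox) == 1; b.replies == ["b"]; c.replies == ["c"]
```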
4.3
The oRis Simulator
Every computer program is a model, hatched in the mind, of a real or mental process. These processes, arising from human experience and thought, are huge in number, intricate in detail, and at any time only partially understood. They are modeled to our permanent satisfaction rarely by our computer programs. Thus even though our programs are carefully handcrafted discrete collections of symbols, mosaics of interlocking functions, they continually evolve: we change them as our perception of the model deepens, enlarges, generalizes until the model ultimately attains a metastable place within still another model with which we struggle. The source of the exhilaration associated with computer programming is the continual unfolding within the mind and on the computer of mechanisms expressed as programs and the explosion of perception they generate. If art interprets our dreams, the computer executes them in the guise of programs! [Abelson 85]
Multi-agent simulation requires several models to be executed in parallel. We must therefore make sure that the activation procedure of these autonomous entities does not introduce a bias that would produce a global state for which the execution platform would be responsible: only the algorithmic procedures described in the agent behaviors must explain the model's global state.
Controlling the scheduling of agent behaviors – implemented in oRis as active objects – is therefore very important to our project. Experience shows that, as a general rule, very little is known or guaranteed about the multi-tasking services provided by the various programming environments. This lack of information leaves some doubt about the fairness of these procedures and the risks of bias they may introduce. Here, we present the scheduling procedure used in oRis, so that the user knows exactly what to expect when deciding to run agents in parallel.
4.3.1
Execution Flows
The first topic to be tackled with regard to multi-tasking is the nature of the tasks that we would like to execute in parallel. Although they are managed the same way internally, the oRis language provides three ways to express these tasks, which we will call execution flows.
In oRis, an active object has a main() method that represents the entry point of the behavior of the instance in question. When an instance equipped with a main() method is created, this method is immediately ready to be executed. When the end of the method is reached, it is automatically restarted from the beginning. This is therefore a simple way to implement an agent's autonomous behavior: a multi-agent simulation in oRis instantiates agents and lets them live.
Another way to start a new parallel process for active objects is to split the execution flow using the start primitive (Figure 4.4). This procedure, which generates several execution flows from a single one, is mainly used to assign several activities to one agent. It is quite conceivable that the main behavior of an agent (its main() method) will spawn other, supplementary activities. It seems reasonable that an agent could, for example, move around while communicating with others.
[Figure: inside a function or method, the start primitive splits the execution flow: code 1 and code 3 run in the original flow, while code 2 runs in a new, parallel flow]

   {
      /* code 1 */
      start
      {
         /* code 2 */
      }
      /* code 3 */
   }

Figure 4.4: Splitting an Execution Flow in oRis
The two previous types of activities are clearly intended for writing agent behaviors. The third type introduced here is intended more specifically for user intervention, although it can also be used in numerous other circumstances. We reuse the keyword start, but this time outside of any code block. It lets you write a code block whose execution starts as soon as the interpreter has analyzed it. An example of such a block has already been given in Figure 4.1. In particular, such a block can be used to create new instances or to launch processes that execute in parallel with all of the application's other activities. A start block can be considered as an anonymous function, in the sense that it hosts local variables and its code does not involve any instance in particular. It is possible, however, to turn it into an anonymous method by preceding it with the name of an instance; the code executed in the start block then concerns this particular object, as if it were one of its methods. This possibility, combined with the dynamic properties of the language, allows the user to launch a process during the application's execution while passing for a specific instance; in other words, this object temporarily becomes the user's avatar.
No matter which of these three methods is used to create execution flows, the scheduler manages them all in the same way. Each execution flow has a unique identifier and can be suspended, relaunched or destroyed. Particular care was taken with the processing of errors that may occur during execution. Because errors mainly involve unforeseen circumstances (division by zero, access to an instance that was destroyed, etc.), we did not try to provide a catch mechanism for anticipated errors, like try/catch, which signals an error but is not really able to correct it. In oRis, an error during the execution of an activity destroys only that flow (along with the display of an error message and of the execution stack). This solution, which stops only the activity that generated the error, makes it possible to design a universe that continues to run even when errors are produced locally (life goes on no matter what).
4.3.2
Activation Procedures
The oRis scheduler maintains several execution flows. Figure 4.5 diagrams the data structures used to manage these parallel processes. Each execution flow is represented by a data structure which, in addition to the activity's identifier, contains a context stack and a temporary-values stack. The context stack materializes nested function and method calls. The executable modules are represented by sequences of micro-instructions. Since micro-instructions are atomic, the activity can only switch from one flow to another between the execution of two micro-instructions. The temporary-values stack is shared by all of the contexts of an execution flow; it makes it possible to stack the parameters of an executable module before its call, and to retrieve the stacked result when the called module terminates.
[Figure: the oRis virtual machine holds all execution flows; each flow has a context stack whose contexts (one per nested call to an executable module – function, method, etc. – represented as a micro-instruction sequence) hold local variables and an instruction counter, plus a shared temporary-values stack]

Figure 4.5: Structure of the oRis Virtual Machine
Three Multi-Task Modes
We now know that the various execution flows can be interrupted between two of their micro-instructions. We will therefore explain what causes one execution flow to be interrupted in favor of another. This leads to three types of multi-tasking (cooperative, preemptive and deeply parallel) among which the user can choose freely in his program.
One possibility is to place the scheduler in cooperative mode. Under these conditions, the active flow cannot change spontaneously; switching relies exclusively on explicit instructions in the executed code (the yield() function, the end of main(), waiting for a resource). The scheduler can then interrupt the current process in order to continue the execution of another. An activity that hands control over to the others resumes its processing when all of the others have done the same. This type of multi-tasking requires that all of the application's activities play the game, in other words, that they regularly offer to make way for the others. If one of them monopolizes the processor, it freezes the rest of the application.
It is also possible to choose preemptive scheduling by specifying, in milliseconds, the interval at which a switch must take place. Under these conditions, long behaviors can be written without worrying about the minimum number of switch points an activity must contain. The programmer no longer needs to insert explicit switches (yield()), since the scheduler makes sure that they take place spontaneously.
An even finer control of switching is possible by specifying a number of micro-instructions as the preemption interval. The notions of parallelism and interleaving are pushed to their limits when this number equals 1, which justifies the term deeply parallel. This mode is significant because it provides a parallelism that goes well beyond what the other two modes offer. Even though the other two modes reduce the time slices, they are still based on code portions that are executed in generally long sequences. With this new scheduling mode we can greatly decrease the length of these sequences, which brings the execution closer to real parallelism (even if this is not actually the case). The flip side is that the scheduler can spend more time on switching than on the application's useful code. Notice that this type of multi-tasking (just like the cooperative mode) does not depend on any system clock and is purely the expression of an internal oRis algorithm. Consequently, it is fully under control.
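The cooperative mode can be sketched with Python generators (an illustrative sketch, not oRis code: the generator yield plays the role of the explicit yield() switch point, and one pass over all flows corresponds to one execution cycle):

```python
# Illustrative sketch (not oRis code) of cooperative multi-tasking: each
# activity is a generator that explicitly yields control; the scheduler
# interleaves them, one step per activity per execution cycle.
def activity(name, steps, trace):
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield                  # explicit switch point, like yield() in oRis

trace = []
flows = [activity("a", 2, trace), activity("b", 2, trace)]
while flows:
    for flow in list(flows):   # one execution cycle: each flow runs once
        try:
            next(flow)
        except StopIteration:  # this flow's behavior is finished
            flows.remove(flow)
# trace == ["a0", "b0", "a1", "b1"]
```

An activity that never yields would keep the loop inside next() forever, which is exactly the "freezing" risk the cooperative mode carries; the preemptive and deeply parallel modes remove that reliance on explicit switch points.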
Two Designation Modes
The last topic to be discussed regarding the activation process is how to choose which flow to activate after a switch. Introducing priorities would only push the problem back to flows of equal priority, and would be very difficult to interpret in terms of autonomous entities. It seemed better, in fact, to accept that various entities do not use time in the same way than to decree that some live more often than others⁵. In order to ensure that time is shared equally between the different activities, we introduce the notion of execution cycle, which adds the following property to the system: each execution flow runs exactly once per cycle. If new activities appear during a cycle, they are executed from the next cycle onwards, to make sure that cycles end. We offer two activation orders: a fixed order and a random one.

⁵ Does the hare live more often than the tortoise, or does he simply run faster?
The most immediate solution consists of running the activities in an unchanging order, but this leads to an undesirable precedence relationship, since the activity at rank n − 1 is always chosen before the activity at rank n. Thus, if two agents regularly compete for one resource, the agent whose behavior is at rank n − 1 systematically takes the resource, to the detriment of the agent at the higher rank n⁶. Notice that this parasitic priority only involves activities from a local perspective, not an overall one. Indeed, even the last activity of a cycle precedes the first activity of the following cycle (from this point of view, the change of cycle is not a remarkable date). This priority relationship only exists between activities that are close to one another in the designation order.
Since simulating parallelism requires that code portions of the different flows be executed sequentially, this local priority relationship cannot be completely eliminated. However, by introducing a random designation order, we avoid its systematic reproduction from cycle to cycle, since the advantage of one activity over another during one cycle may be reversed during another. We thus retain the notion of execution cycle, which provides a common logical time and prevents any temporal drift in favor of one activity, while eliminating the bias introduced by local precedence relationships.
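A minimal Python sketch (not oRis code; the names are hypothetical) shows why the random designation order removes the parasitic priority: with a fresh shuffle at each cycle, neither of two competing agents systematically obtains the shared resource.

```python
import random

# Illustrative sketch (not oRis code): shuffling the designation order at
# each execution cycle so that no activity systematically precedes another
# in the competition for a shared resource.
def run_cycles(n_cycles, agents, rng):
    wins = {a: 0 for a in agents}
    for _ in range(n_cycles):
        order = list(agents)
        rng.shuffle(order)       # random designation order for this cycle
        wins[order[0]] += 1      # the first designated agent takes the resource
    return wins

rng = random.Random(42)          # seeded for reproducibility
wins = run_cycles(10_000, ["agent1", "agent2"], rng)
# with a fixed order, one agent would win all 10,000 times; with a random
# order, each wins roughly half the time
```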
To sum up, besides the three available scheduling modes (cooperative, preemptive and deeply parallel), the user can choose between a fixed designation order (strongly discouraged) and a random one, which eliminates any local precedence relationship that could lead to bias. Notice that the process used is based solely on algorithms and data structures over which we have complete control, and which are explained in detail in [Harrouet 00]. It does not depend on any particular functionality of the underlying system (with the exception of the clock in preemptive mode).
Regarding concurrent access to shared resources, oRis offers quite standard mutual-exclusion semaphores. In particular, it lets you choose whether unlocking follows the usual queue order or a random order, the latter avoiding new local precedence relationships. Another way to ensure this type of service is to create low-level critical sections (an execute block, similar to start) within which the scheduler performs no switch.
One last detail about parallelism concerns the use of a system thread to encapsulate the invocation of blocking system calls. Indeed, the oRis scheduler is seen as a single process from the point of view of the operating system, and is thus likely to be suspended in its entirety. When a potentially blocking operation must be carried out, it is launched in a thread whose termination is regularly checked by the oRis scheduler.
⁶ What would the immunologist from Chapter 3 conclude if the antibodies always took the resource away from the antigen due to a bias in the simulator?
4.3.3
The Dynamicity of a Multi-Agent System
There seem to be many ways to introduce oRis code during execution (composing a character string, typing in an entry window, reading a file, reading from the network, etc.), but everything converges towards one single entry point: the parse() function. This function transmits its string argument to the interpreter, making the whole language available on-line. Indeed, the same process is used for initial loading and for on-line actions. It is therefore possible to create agents that, given a learning mechanism, could autonomously make their behavior evolve.
Launching Processes
The first way to change a program during execution is to launch new processes. These make it possible to create new agents, inspect the model, interact with agents or destroy them. Such lines of code are placed in an anonymous block like start or execute. As we have seen, these blocks of code are handed to the scheduler once they are analyzed by the interpreter. The start block begins a process that runs in parallel with the other activities, while the execute block is executed immediately and without interruption. The user can intervene while passing for an application object by specifying the name of this object in front of the start or execute block (MyClass.1::execute { ... }). The object then becomes an avatar of the user.
Changing Functions
Introducing source code makes it possible, at any time, to add new functions (those that
did not exist are created) and to modify existing ones (those that existed are replaced).
Declaring a function (its prototype) can even replace its definition; in this case, calling the
function is prohibited until a new definition is introduced. These modifications also concern
the compiled code, by making it possible to choose between several implementations
provided in C++ or to go back to a definition in oRis.
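The single entry point and the create-or-replace rule can be mimicked in a few lines (a Python analogy using exec(); the env dictionary stands in for the interpreter's global environment and is an assumption of this sketch, not the oRis mechanism):

```python
# All dynamic code goes through a single parse()-like entry point,
# by analogy with oRis's parse() function.
env = {}

def parse(source):
    """Compile and install a piece of code on-line into the shared environment."""
    exec(source, env)

parse("def greet(): return 'hello'")    # the function did not exist: it is created
parse("def greet(): return 'bonjour'")  # the function existed: it is replaced
```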
Changing Classes
In the same way that functions can be changed dynamically, oRis can add,
complete and modify classes during execution. A class can be added at any time by
defining it during a call to the parse() function. When the interpreter comes across a
class definition, it may be a completely new class, in which case it is created, or
an existing class, in which case it is completed or modified. Hence, it
is possible to add attributes and methods, as well as to rewrite existing methods. Adding
methods is therefore very similar to adding functions: the method that did not exist now
exists, and the method that existed is replaced. In general, the remarks about multiple
definitions and function declarations apply to methods as well. The situation, however, is a
little more delicate because the effects of new definitions combine with the effects of
overdefinitions. A modification that involves a method of a particular class can only
influence the derived classes if they do not overdefine the method in question. On the other hand,
75
Tools
when there is no overdefinition, the modification instantly updates the entire hierarchy
of both the derived classes and the instances. In the same way, adding an attribute is
reflected in the derived classes and instances.
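The propagation rule — a dynamically modified method reaches derived classes and instances only where it is not overdefined — behaves like ordinary method lookup, as this Python analogy shows (not oRis code; the class names are illustrative):

```python
class Base:
    def act(self):
        return "old"

class Derived(Base):
    pass                        # does not overdefine act()

class Special(Base):
    def act(self):              # overdefines act()
        return "special"

b, d, s = Base(), Derived(), Special()

# Dynamically replace the method on the class, as oRis does on parse():
Base.act = lambda self: "new"
```
Instances of Base and of the non-overdefining derived class immediately see the new definition, while the overdefinition in Special shields it from the change.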
Changing Instances
An even more refined operation involves dynamically specializing the behavior of an
instance; the class then only represents the default behavior of the instance. In this case,
exactly the same principles as for functions and classes are reused, in other
words: what does not already exist is added and what exists is replaced. These changes
are unique in that they are applied to entities that already exist. This is a
clear distinction from the anonymous classes of Java, which are created
and compiled well before the instances exist. Here, we provide a way to adjust the behavior
of an instance during its lifetime, in order to give it a behavior better suited to the case under study.
class Example                        // Definition of the Example class
{
  void new(void) {}
  void delete(void) {}
  void main(void)                    // Initial main() method
  { println(this," is an Example !"); }
};

execute                              // Code to be executed
{
  Example ex;
  for(int i=0;i<3;i++) ex=new Example;   // Instantiate three Example
  string program=
    format("int ",ex,"::i;               // Add an attribute to ‘ex’
            void ",ex,"::show(void)      // Add a method to ‘ex’
            { print(++i,\" --> \"); }
            void ",ex,"::main(void)      // Overdefine main() of ‘ex’
            { show(); Example::main(); }");  // to use what was added
  println(program);                  // Display the composed character string
  parse(program);                    // Interpret the composed character string
}

int Example.3::i;                    // Add an attribute to ‘ex’
void Example.3::show(void)           // Add a method to ‘ex’
{ print(++i," --> "); }
void Example.3::main(void)           // Overdefine main() of ‘ex’
{ show(); Example::main(); }
Example.3 is an Example !
Example.2 is an Example !
Example.1 is an Example !
Example.1 is an Example !
1 --> Example.3 is an Example !
Example.2 is an Example !
Example.1 is an Example !
Example.2 is an Example !
2 --> Example.3 is an Example !

Figure 4.6: Modifying an Instance
76
Applications
Figure 4.6 gives an example of using the parse() function to modify the behavior of
an instance. Composing a character string that involves one of the three instances makes it
possible to add an attribute and a method, as well as to overdefine an existing method.
The distinction between class and instance is made naturally by using the scope resolution
operator (::). Such modifications can of course be made several times during execution.
The dynamic functionalities presented here require no special programming technique
or device, in the sense that there is no difference between the form of the code written
off-line and that of the code created dynamically on-line. The rewriting rules
are the same in every case. This makes it possible, in particular, to develop a behavior
on an instance and afterwards to apply it to an entire class by simply changing
the scope of the established code. Access to object references through object-name
constants makes it easy to act on any instance. Indeed, if this lexical form did not exist,
it would be necessary to memorize a reference to each instance in order to be able to act
on it at the right moment. In oRis, the user does not have to worry about this kind of detail:
if by some means (graphics inspector, pointing in a window, etc.) we display the name of an
object, we can reuse it to make the object execute processes or to modify it.
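The convenience of object-name constants can be approximated with an instance registry (a hypothetical Python sketch; oRis provides this lexical form natively, whereas here the registry and naming scheme are built by hand):

```python
class Named:
    """Every instance gets a printable name "Class.n", which can later be
    used to retrieve the instance and act on it (sketch of oRis-style
    object names such as Example.3)."""
    _registry = {}
    _count = {}

    def __init__(self):
        cls = type(self).__name__
        n = Named._count.get(cls, 0) + 1
        Named._count[cls] = n
        self.name = "%s.%d" % (cls, n)   # e.g. "Example.3"
        Named._registry[self.name] = self

def by_name(name):
    """Act on any instance directly from its displayed name."""
    return Named._registry[name]

class Example(Named):
    pass

first, second = Example(), Example()
```
Displaying an object's name (in an inspector, a window, etc.) is then enough to reach the object itself, without having memorized a reference to it.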
4.4
Applications
Virtual worlds must be realized; in other words, one must endeavor to actualize what is virtually
present in them, to know the intelligible models that structure them and the ideas that develop
in them. The “fundamental virtue” of virtual worlds is to have been designed with an end in
mind. It is this end that must be realized, actualized, whether the application is industrial,
space-related, medical, artistic or philosophical. The images of the virtual must help us to reveal
the reality of the virtual, which is of an intelligible order, and of an intelligibility proportional to
the pursued end, theoretical or practical, utilitarian or contemplative. [Quéau 93b]
4.4.1
Participatory Multi-Agent Simulations
The development of software engineering has led to the definition of models, methods
and methodologies that are made operational by the various tools implementing them.
However, the expression software development environment is far too vague: no universal
toolset and, a fortiori, no single tool covers all of the needs. In the more
specialized field of multi-agent system development, the same kind of diversity can be
found in methods [Iglesias 98], models (for example, a model for coordinating actions
such as the contract net [Smith 80]), languages (for example, MetateM [Fisher 94],
ConGolog [De Giacomo 00]), application builders such as ZEUS [Nwana 99], component
libraries such as JATLite7, multi-agent system simulators like SMAS8, and toolkits, which are
generally class packages offering basic services (agent life cycle, communication, message
transfer, etc.) and classes implementing some components of a multi-agent system
([Boissier 99] describes a group of platforms developed by the French research community
7 JATLite: Java Agent Template Lite (java.stanford.edu/java agent/html)
8 SMAS: Simulation of Multiagent Asynchronous Systems (www.hds.utc.fr/~barthes/SMAS)
77
and [Parunak 99] makes reference to other tools of this type).
According to Shoham, an agent-oriented programming system is made up of three basic
elements [Shoham 93]:
1. A formal language for describing the mental states of agents which, according to
Shoham, are modalities such as beliefs or commitments. A modal logic is then
defined, as the author does for his Agent-0 language; the internal states of
a reactive agent can instead be described as a discrete-event system such as a Petri
net [Chevaillier 99a].
2. A language for programming agent behaviors that must respect the semantics of the
mental states; thus, Agent-0 defines an interpreter for updating mental states
– based on speech act theory – and executing commitments.
It would be conceivable in this framework to use KQML9 and, if heterogeneity or
standardization constraints were imposed, a language respecting the FIPA10 specifications.
In the case of a discrete-event system, the semantics of the state machine
would have to be defined (for example, an interpretation of a Petri net).
3. An “agentifier” that can convert a neutral component into a programmable agent;
Shoham admits to having very few ideas on this topic. Let us reformulate this
requirement by saying that a language is needed that makes agents operational,
i.e., implementable and executable. We prefer to think of this third element as an
implementation language.
The first two elements – which are closely related – are open research fields, and the
types of multi-agent systems that one wishes to develop must be chosen carefully.
Integrating these elements into a general-purpose platform inevitably leads to two pitfalls:
platform instability, as the field is still open, and the diktat of a single model, which is not
necessarily appropriate for all situations. As regards the third element, there are several
possible choices, conditioned by the type of application considered and by the
technological characteristics of the agents' execution environments. The two major types
of applications for multi-agent systems — problem solving and simulation —
result in the following typology (after [Nwana 96]).
. To solve problems:
embedded agents (e.g. physical agents (process control) and interface agents):
there are strong efficiency constraints on program execution and a dependency
on the technology of the target system (many applications are written
in C or in C++);
mobile agents: it is better to use a language that can be interpreted on a
wide range of operating systems; script languages such as Perl are a
possibility;
rational agents: logic programming with a language such as Prolog can be
perfectly suited to this type of need.
9 KQML: Knowledge Query and Manipulation Language (www.cs.umbc.edu/kqml)
10 FIPA: Foundation for Intelligent Physical Agents (www.fipa.org)
78
. For simulation: an environment is needed in which different agent behaviors can
be executed in parallel and which offers a wide range of components for interfacing
with users. The choice falls on languages executed on a virtual machine that allows
parallel programming: formerly Smalltalk-80 and, more and more, Java (the
development of the DIMA platform is evidence of this trend [Guessoum 99]).
We are well aware that this classification, like many others, is somewhat arbitrary,
that different languages can be applied to different fields (everything can be done in
assembler!) and that there are several solutions to the same problem. Its only aim is
to explain to the reader the positioning of oRis as a participatory multi-agent simulation
platform. oRis was designed to meet both the need for an implementation language
for multi-agent systems and these systems' participatory simulation requirements. The
entire oRis environment and language represent more than one hundred thousand lines
of code, mainly written in C++ but also in Flex++ and Bison++, and in oRis itself. oRis
is completely operational and stable and is already being used in many projects.
4.4.2
Application Fields
oRis is used as a teaching tool in several courses given at the Ecole Nationale d'Ingénieurs
de Brest (National Graduate Engineering School of Brest): for example, concurrent
object programming, multi-agent systems, distributed virtual reality and adaptive control. It
is also used by other educational institutions: the Military Schools of Saint-Cyr Coëtquidan,
the ENSI (National School of Information Sciences) of Bourges, the ENST de Bretagne
(National Graduate Engineering School of Telecommunications, Brittany), the IFSIC at
DEUG (Diploma of General University Studies) level at the University of Rennes 1, a DEA
(post-Master's degree) at the University of Toulouse (IRIT), the Institute of Technology
in Computing of Bordeaux, and a Master's degree at the University of Caen.
oRis is the platform on which the research works of the Laboratoire d'Informatique
Industrielle (Laboratory for Industrial Data Processing, LI2) are conducted, which has
resulted in the development of many class packages. Among these are packages for
coordinating actions according to the Contract Net Protocol, agent distribution [Rodin 99],
communication between agents using KQML [Nédélec 00], the use of fuzzy cognitive maps
[Maffre 01, Parenthoën 01], the declaration of collective action plans using an executable
extension of Allen's temporal logic [De Loor 00] and the definition of agent behaviors
as tendencies [Favier 01]. The platform is also used in image processing [Ballet 97b],
medical simulation [Ballet 98b] and in the simulation of vehicle production systems
[Chevaillier 99b].
Other research teams have also used oRis in their work: the CREC laboratory (Saint-Cyr
Coëtquidan) for simulating battlefields, conflicts and cyber warfare; the Naval Academy's
SIGMa team for constructing dynamic digital terrain models from probe points; the GREYC
laboratory (University of Caen) for simulating computer networks; the SMAC and GRIC
teams of the IRIT (Toulouse); and the UMR CNRS 6553 in Ecobiology (University of
Rennes) for simulations in ethology.
79
4.4.3
The ARéVi Platform
The Laboratoire d'Informatique Industrielle (Laboratory for Industrial Data Processing)
has developed an Atelier de Réalité Virtuelle (Virtual Reality Workshop), known
as ARéVi, a toolkit for creating distributed virtual reality applications. The first
three versions of ARéVi were built around the Open Inventor graphics library and did
not include the multi-agent approach. For version 4, ARéVi was redeveloped on top of oRis.
The kernel of ARéVi is none other than oRis, so all of the possibilities described
previously are available; it is extended by C++ code offering functionalities
specific to virtual reality [Duval 97, Reignier 98]. This platform offers a graphics
rendering completely independent from the one provided by oRis. Graphics
objects are loaded directly from files in VRML211 format (animations can be defined and
levels of detail managed). Graphics elements, such as transparent or animated textures,
light sources, lens flare (sunlight on a lens) and particle systems (water streams), are
available. ARéVi also handles notions of kinematics (linear and angular velocities and
accelerations), which adds to the possibilities for expressing agent behaviors in a
three-dimensional environment. In the area of sound, ARéVi offers spatialized sound
rendering and synthesis, as well as voice recognition functionalities. The platform
manages various peripherals, such as a dataglove, a joystick, a steering wheel,
localization sensors, a force-feedback arm and a stereoscopic head-mounted display,
which widen the options for users to immerse themselves in multi-agent systems.
It is important to note that the multiple tools offered by ARéVi for the multi-sensory
rendering of the virtual world are oRis objects like any others. They are instances that
the application can modify and make play an active role. It is therefore possible to adapt
them to the subject under consideration, and even to give them an evolved behavior that
meets the application's specific needs. Thus, whatever the application field, one can
always think of a way to customize the rendering tools to make them easier to use in the
chosen context. We can, for example, adapt a rendering window so that it automatically
degrades the quality of its display when the image refresh rate becomes too low. We can
also associate a geographical area with a scene and give it a behavior that tends to
integrate entities entering this area and to stop displaying entities that leave it.
As regards the distribution of graphics entities over different machines, the network
communication functionalities provided by oRis were used to establish communication
between remote entities, together with a dead-reckoning mechanism [Rodin 00]. When
an entity is created on a machine, copies with a degraded kinematic behavior are made
on the other machines, and their geometric and kinematic characteristics are updated
whenever the divergence between the real situation and the copies becomes too great.
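A minimal dead-reckoning sketch, assuming linear extrapolation and a fixed divergence threshold (both illustrative assumptions; [Rodin 00] describes the actual mechanism):

```python
class GhostCopy:
    """Degraded kinematic copy of a remote entity: between network
    updates it extrapolates its own position, and the owner sends a
    correction only when the divergence exceeds a threshold."""

    def __init__(self, pos=0.0, vel=0.0, threshold=1.0):
        self.pos = pos
        self.vel = vel
        self.threshold = threshold

    def extrapolate(self, dt):
        """Linear prediction between updates (degraded behavior)."""
        self.pos += self.vel * dt

    def needs_update(self, real_pos):
        """The owner checks the divergence against its real state."""
        return abs(real_pos - self.pos) > self.threshold

    def update(self, real_pos, real_vel):
        """Correction message received: resynchronize the copy."""
        self.pos, self.vel = real_pos, real_vel
```

Only the occasional correction crosses the network; most of the time each machine animates its copies locally.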
Currently, ARéVi has been used within the framework of the interactive prototyping of
a stamping cell [Chevaillier 00] and the development of a training platform for the French
Emergency Services [Querrec 01a] (Figure 4.7 [Querrec 01b]).
11 VRML: Virtual Reality Modeling Language (www.vrml.org)
80
Figure 4.7: An ARéVi Application: Training Platform for French Emergency Services
4.5
Conclusion
To choose a language is to choose a way of thinking [Tisseau 98a]. To illustrate this
point, one only has to turn to the famous tale from the Thousand and One Nights, Ali
Baba and the Forty Thieves (the Ali Baba metaphor, Figure 4.8). There are three possible
scenarios: procedural programming, object-oriented programming and agent programming.
1. In procedural programming (Figure 4.8a), the cave's door is passive data (Data
Sesame) manipulated by the agent AliBaba: it is AliBaba who intends to open the
door, and he is the one who knows how to open it (entrance procedure Open: I go
to the door, I grab the doorknob, I turn the doorknob, I push the door).
2. In object-oriented programming (Figure 4.8b), the cave's door is now an object
(Object Sesame) to which the know-how of opening has been delegated (example:
an elevator door): it is still AliBaba who intends to open the door, but now the
door knows how to do it (the Open method of the Door class). To open the door,
AliBaba must send it the right message12 (Open Sesame!: Sesame->Open()).
The door then opens in accordance with a master-slave schema.
3. In agent programming (Figure 4.8c), the cave's door is an agent whose goal is to open
when it detects a passer-by (example: an airport door equipped with cameras):
the door has both the intention and the know-how (Agent Sesame). Whether or
not AliBaba intends to go through the door, the door can open if it
12
Ali Baba already knew object-oriented programming! This analogy between sending a message in
object-oriented programming and Ali Baba's formula (Open Sesame!) is given in [Ferber 90].
81
Tools
a. Procedural Programming

execute
{
  Object Sesame  = new Data;
  Agent  AliBaba = new Avatar;
}
AliBaba::execute
{
  Open(Sesame);
}

b. Object-Oriented Programming

execute
{
  Object Sesame  = new Door;
  Agent  AliBaba = new Avatar;
}
AliBaba::execute
{
  Sesame->open();
}

c. Agent Programming

execute
{
  Agent Sesame  = new Door;
  Agent AliBaba = new Avatar;
}
void Door::main(void)
{
  if(view(anObject)) open();
  else close();
}
Figure 4.8: Metaphor of Ali Baba and Programming Paradigms
detects him; it may even negotiate its opening. The user AliBaba is thus
immersed in the universe of agents through an avatar, which the door can
detect.
Hence, choosing a language is important when building an application. Agent programming,
as we develop it, is by construction best suited for autonomizing the entities
that make up our virtual universes.
This is why we have chosen to make oRis a generic language for implementing multi-agent
systems, one that makes it possible to write programs based on the interaction of objects
and agents placed in a space-time environment and controlled by the user's on-line
actions. oRis is also a participatory multi-agent simulation environment, which makes
it possible to control agent scheduling and to model interactively through the language's
dynamicity. oRis is stable and operational13 and has already led to many applications
such as ARéVi, the virtual reality workshop.
13 oRis can be downloaded for free from Fabrice Harrouet's homepage
(www.enib.fr/~harrouet/oris.html). Documentation, examples, course material and the thesis
dissertation [Harrouet 00] are also available on this page.
82
Chapter 5
Conclusion
5.1
Summary
Virtual objects, just like the space in which they appear, are actors and agents. Equipped with
memory, they have functions for processing information and an autonomy, which is regulated
by their programs. A strange, intermediary artificial life continually moves through virtual
worlds. Each entity, each object, each agent can be considered as an expert system that has
its own behavior rules and applies or adapts them in response to changes in the environment,
modifications of the rules and metarules that govern the virtual world. [Quéau 93b]
For a dozen years, our work has been in the field of virtual reality, a new
discipline of the engineering sciences that involves the specification, design and creation
of realistic and participatory universes. We could not possibly cover all of the aspects
of a synthesis discipline like virtual reality, so we tried to answer the questions
what concepts?, what models? and what tools? for virtual reality. Today, we
have answered these questions through a principle of autonomy, a multi-agent approach
and a participatory multi-agent simulation environment.
What Concepts?
The way that we think about virtual reality epistemologically has made the concept of
autonomy the core of our research problem. Our approach is thus based on a principle of
autonomy according to which the autonomization of digital models that make up virtual
universes is vital to the reality of these universes.
Autonomizing models means equipping them with sensory-motor means of communication and ways to coordinate perceptions and actions. This autonomy by conviction
proved to be an autonomy by essence for organism models, an autonomy by necessity for
mechanism models and an autonomy by ignorance for models of complex systems, which
are characterized by a large variety of components, structures and interactions.
The human user is represented within the virtual universe by an avatar, a model
among models, through which he controls the coordination of perceptions and actions.
The human user is connected to his avatar through language and adapted behavioral
interfaces, which make it possible to have a triple mediation of senses, action and language.
Consequently, by a sort of digital empathy, he feels as if he were actually in the virtual
universe, whose multi-sensory rendering is that of realistic computer-generated images:
3D, audio, tactile, kinesthetic and proprioceptive, animated in real time, and shared on
83
computer networks.
Thus, according to the Pinocchio metaphor, a human user, more autonomous because
he is partially freed from controlling his models, will participate fully in these virtual
realities: he will go from being a simple spectator to an actor and even a creator of these
evolving virtual worlds.
What Models?
Based on this principle of autonomy, we modeled these virtual universes through
multi-agent systems that allow human users to participate in simulations.
We have highlighted the self-organizing properties of these systems based on autonomous
entities through the significant and original example of blood coagulation. We have thus
shown that the participatory multi-agent simulations of virtual reality give today's
biologists real virtual laboratories in which they can conduct a new type of experiment
that is cost-effective and safe: the in virtuo experiment. The in virtuo experiment
complements the traditional means of investigation: in vivo and in vitro experiments,
and in silico calculations.
To go beyond the realm of reactive behaviors, we described emotional behaviors
through fuzzy cognitive maps. We use these maps both to specify the behavior of an
agent and to control its movement. The agent can also evaluate its own behavior by
itself simulating its map: this simulation within the simulation is an outline of the
perception of self, which is necessary for real autonomy. Implementing these fuzzy
cognitive maps makes it possible to characterize credible agent roles in interactive fictions.
What Tools?
To implement these models, we developed a language — the oRis language —
which favors, by construction, the autonomy of the entities that make up our virtual universes.
Thus, oRis is a generic implementation language for multi-agent systems, which makes
it possible to write programs based on interacting objects and agents, placed in a space-time
environment and controlled by the user's on-line actions. oRis is also a participatory
multi-agent simulation environment that makes it possible to control agent scheduling
and to model interactively through the language's dynamicity.
We paid careful attention to the associated simulator in order to ensure that the
activation process for autonomous entities does not introduce into the simulation any
bias for which the execution platform alone would be responsible. In this framework,
robustness to execution errors proved to be a particularly sensitive point,
especially since the simulator gives on-line access to the entire language.
oRis is stable and operational: it has already led to many applications, among which
our virtual reality workshop ARéVi.
84
Perspectives
5.2
Perspectives
Microscope, telescope: these words evoke important scientific breakthroughs towards the infinitely small, and towards the infinitely large. [...] Today, we are confronted with another
infinite: the infinitely complex. But this time there is no more instrument. Nothing but the
mind, an intelligence and logic ill-equipped in front of the immense complexity of life and
society. [...] Therefore, we need a new tool. [...] This tool, I’ll call it the macroscope (macro,
big; and skopein, to observe). The macroscope is not a tool like all the others. It is a symbolic instrument, made from a collection of methods and techniques borrowed from a wide
range of disciplines. Evidently, it is useless to try and find it in the laboratories or research
centers. And yet, it is used by many in very different fields. Because, the macroscope can
be considered as the symbol of a new way to see, understand, and act. [De Rosnay 75]
We do not claim to have completely answered the questions what concepts?, what
models? and what tools? for virtual reality, which have fueled our research for the last
ten years; we intend to pursue this research vigorously, including in other directions.
What Tools?
The participatory multi-agent simulation of virtual universes made up of autonomous
entities allows the study of phenomena whose complexity calls for a diversity of models.
Virtual reality, which brings numerous models on-line, must therefore be equipped with
engineering tools for these models.
The oRis environment is only a middleware for virtual reality: it is certainly reliable
and robust, but its abstraction level is still too low for users. We must therefore
incorporate tools with the highest possible abstraction levels, to make it easier to
implement models, debug them, integrate them into existing universes and share them
over computer networks. These tools will automatically generate the corresponding oRis
code, which will be interpreted on-line by the simulator.
What Models?
We have successfully tested the oRis platform in the case of reactive and emotional
behaviors. We now need to tackle intentional, social and adaptive behaviors.
We will approach the modeling of intentional behaviors through the notion of objective,
which is neither a constraint in the sense of constraint programming nor a goal in the
sense of logic programming. In these two approaches, the resolution takes place under a
synchronous hypothesis of negligible execution time: the context is assumed unchanged
during the resolution. In reality, however, constraints or goals can change during the
resolution: time goes by, the context changes, and this must be taken into account. The
notion of objective thus takes into account the irreversibility of passing time.
Social behaviors will be studied through the notion of organization in multi-agent
systems. An organization defines roles and gives them objectives; the assignment
of roles is then negotiated between autonomous agents that decide to cooperate within
this organization.
By participating in multi-agent simulations, the human user can, according to a
principle of substitution, take control of an entity and identify himself with that entity.
The controlled entity can then learn behaviors that are better adapted to its environment from
the operator. It is this type of learning, by example or by imitation, that we will introduce
for modeling evolving behaviors.
What Concepts?
Virtual universes are open and heterogeneous; they are made up of atomic and composite
entities that are mobile, distributed in space and variable in number over time.
The components can be structured into organizations, imposed or emerging from the
multiple interactions between components. The interactions between entities are themselves
of different natures and operate on different scales of space and time.
Unfortunately, no formalism today is capable of capturing this complexity.
Only virtual reality makes it possible to live this complexity. We will need to strengthen
the relationships between virtual reality and the theories of complexity, in order to make
virtual reality a tool for investigating complexity, like the macroscope described in the
quote at the beginning of this section.
The human user becomes a part of these virtual worlds through an avatar. The
structural identity between an agent and an avatar allows the user to take the place of an
agent at any time by taking control of its decision-making module. Conversely, he can
at any time give control back to the agent whose place he took. An in virtuo autonomy
test will make it possible to evaluate the quality of the substitution, which will be positive
if a user interacting with an entity cannot tell whether he is interacting with an agent or
with another user; the agents, for their part, should react as if they were interacting with
another agent.
This principle of substitution completes our principle of autonomy, and the consequences will have to be evaluated on an epistemological level as well as an ethical level.
86
Bibliography
[Abelson 85]
Abelson H., Sussman G.J., Sussman J., Structure and interpretation of
computer programs (1985)
[Agarwal 95]
Agarwal P., The Cell programming Language, Artificial Life 2(1):37-77,
1995
[Akeley 89]
Akeley K., The Silicon Graphics 4D/240 GTX super-workstation, IEEE
Computer Graphics and Applications 9(4):239-246, 1989
[Ameisen 99]
Ameisen J.C., La sculpture du vivant : le suicide cellulaire ou la mort
créatrice, Editions du Seuil, Paris, 1999
[Ansaldi 85]
Ansaldi S., De Floriani L., Falcidieno B., Geometric modeling of solid
objects by using a face adjacency graph representation, Proceedings
Siggraph’85:131-139, 1985
[Arnaldi 89]
Arnaldi B., Dumont G., Hégron G., Dynamics and unification of animation control, The Visual Computer 4(5):22-31, 1989
[Arnaldi 94]
Arnaldi B., Modèles physiques pour l'animation, Accreditation to direct research dissertation, Université de Rennes 1 (France), 1994
[Austin 62]
Austin J.L., How to do things with words, Oxford University Press, Oxford, 1962
[Avouris 92]
Avouris N.M., Gasser L. (editors), Distributed Artificial Intelligence:
theory and praxis, Kluwer, Dordrecht, 1992
[Axelrod 76]
Axelrod R., Structure of Decision, Princeton University Press, Princeton, 1976
[Bachelard 38]
Bachelard G., La formation de l’esprit scientifique (1938), Vrin, Paris,
1972
[Badler 93]
Badler N.I., Phillips C.B., Webber B.L., Simulating humans : computer
graphics animation and control, Oxford University Press, New York,
1993
[Ballet 97a]
Ballet P., Rodin V., Tisseau J., A multiagent system to model an human
secondary immune response, Proceedings IEEE SMC’97:357-362, 1997
[Ballet 97b]
Ballet P., Rodin V., Tisseau J., A multiagent system to detect concentric
strias, Proceedings SPIE’97 3164:659-666, 1997
[Ballet 98a]
Ballet P., Pers J.O., Rodin V., Tisseau J., A multi-agent system to
simulate an apoptosis model of B-CD5 cell, Proceedings IEEE SMC’98
2:1-5, 1998
[Ballet 98b]
Ballet P., Rodin V., Tisseau J., Intérêts mutuels des systèmes multiagents et de l’immunologie, Proceedings CID’98:141-153, 1998
[Ballet 00a]
Ballet P., Abgrall J.F., Rodin V., Tisseau J., Simulation of thrombin
generation during plasmatic coagulation and primary hemostasis, Proceedings IEEE SMC’00 1:131-136, 2000
[Ballet 00b]
Ballet P., Intérêts mutuels des systèmes multi-agents et de l’immunologie. Application à l’immunologie, à l’hématologie et au traitement
d’images, Doctoral thesis, Université de Bretagne Occidentale, Brest
(France), 2000
[Bannon 91]
Bannon L.J., From human factors to human actors : the role of psychology and human-computer interaction studies in system design, in
[Greenbaum 91]:25-44, 1991
[Baraquin 95]
Baraquin N., Baudart A., Dugué J., Laffitte J., Ribes F., Wilfert J.,
Dictionnaire de philosophie, Armand Colin, Paris, 1995
[Barr 84]
Barr A.H., Global and local deformations of solid primitives, Computer
Graphics 18:21-30, 1984
[Barto 81]
Barto A.G., Sutton R.S., Landmark learning : an illustration of associative search, Biological Cybernetics 42:1-8, 1981
[Bates 92]
Bates J., Virtual reality, art, and entertainment, Presence 1(1):133-138,
1992
[Batter 71]
Batter J.J., Brooks F.P., GROPE-1: A computer display to the sense of
feel, Proceedings IFIP’71:759-763, 1971
[Baumgart 75]
Baumgart B., A polyhedral representation for computer vision, Proceedings AFIPS’75:589-596, 1975
[Beer 90]
Beer R.D., Intelligence as adaptive behavior : an experiment in computational neuroethology, Academic Press, San Diego, 1990
[Benford 95]
Benford S., Bowers J., Fahlen L.E., Greenhalgh C., Mariani J., Rodden
T., Networked virtual reality and cooperative work, Presence 4(4):364-386, 1995
[Bentley 79]
Bentley J.L., Ottmann T.A., Algorithms for reporting and counting geometric intersections, IEEE Transaction on Computers 28(9):643-647,
1979
[Berlekamp 82]
Berlekamp E.R., Conway J.H., Guy R.K., Winning ways for your mathematical plays, Academic Press, New York, 1982
[Berthoz 97]
Berthoz A., Le sens du mouvement, Editions Odile Jacob, Paris, 1997
[Bertin 78]
Bertin M., Faroux J-P., Renault J., Optique géométrique, Bordas, Paris,
1978
[Bézier 77]
Bézier P., Essai de définition numérique des courbes et des surfaces
expérimentales, State doctorate thesis, Université Paris 6 (France), 1977
[Blinn 76]
Blinn J.F., Newell M.E., Texture and reflection in computer generated
images, Communications of the ACM 19(10):542-547, 1976
[Blinn 82]
Blinn J.F., A generalization of algebraic surface drawing, ACM Transactions on Graphics 1(3):235-256, 1982
[Bliss 70]
Bliss J.C., Katcher M.H., Rogers C.H., Shepard R.P., Optical-to-tactile
image conversion for the blind, IEEE Transactions on Man-Machine
Systems 11(1):58-65, 1970
[Boissier 99]
Boissier O., Guessoum Z., Occello M. (editors), Plateformes de
développement de systèmes multi-agents, Groupe ASA, Bulletin de
l’AFIA no. 39, 1999
[Bouknight 70]
Bouknight W.J., A procedure for generation of three-dimensional halftoned computer graphics presentations, Communications of the ACM
13(9):527-536, 1970
[Boulic 90]
Boulic R., Magnenat-Thalmann N., Thalmann D., A global human walking model with real-time kinematic personification, Visual Computer
6(6):344-358, 1990
[Bresenham 65] Bresenham J., Algorithm for computer control of a digital plotter, IBM
System Journal 4:25-30, 1965
[Brooks 86a]
Brooks F.P., Walkthrough : a dynamic graphics system for simulating
virtual buildings, Proceedings Interactive 3D Graphics’86:9-21, 1986
[Brooks 90]
Brooks F.P., Ouh-young M., Batter J.J., Kilpatrick P.J., Project
GROPE : haptic displays for scientific visualization, Computer Graphics, 24(4):177-185, 1990
[Brooks 86]
Brooks R.A., A robust layered control system for a mobile robot, IEEE
Journal of Robotics and Automation 2(1):14-23, 1986
[Brooks 91]
Brooks R.A., Intelligence without representation, Artificial Intelligence
47:139-159, 1991
[Bryson 96]
Bryson S., Virtual reality in scientific visualization, Communications of
the ACM 39(5):62-71, 1996
[Burdea 93]
Burdea G., Coiffet P., La réalité virtuelle, Hermès, Paris, 1993
[Burtnyk 76]
Burtnyk N., Wein M., Interactive skeleton techniques for enhancing motion dynamics in key frame animation, Communications of the ACM
19(10):564-569, 1976
[Cadoz 84]
Cadoz C., Luciani A., Florens J.L., Responsive input devices and sound
synthesis by simulation of instrumental mechanisms : the Cordis system,
Computer Music Journal 8(3):60-73, 1984
[Cadoz 93]
Cadoz C., Luciani A., Florens J.L., CORDIS-ANIMA: a modeling and
simulation system for sound and image synthesis - the general formalism, Computer Music Journal 17(1):19-29, 1993
[Cadoz 94a]
Cadoz C., Les réalités virtuelles, Collection DOMINOS, Flammarion,
Paris, 1994
[Cadoz 94b]
Cadoz C., Le geste, canal de communication homme/machine : la communication instrumentale, Technique et Science Informatiques 13(1):31-61, 1994
[Calvert 82]
Calvert T.W., Chapman J., Patla A.E., Aspects of the kinematic simulation of human movement, IEEE Computer Graphics and Applications
2:41-50, 1982
[Capin 97]
Capin T.K., Pandzic I.S., Noser H., Magnenat-Thalmann N., Thalmann
D., Virtual human representation and communication in VLNET networked virtual environments, IEEE Computer Graphics and Applications 17(2):42-53, 1997
[Capin 99]
Capin T.K., Pandzic I.S., Magnenat-Thalmann N., Thalmann D.,
Avatars in networked virtual environments, John Wiley, New York, 1999
[Carlsson 93]
Carlsson C., Hagsand O., DIVE – A platform for multi-user virtual
environments, Computers and Graphics 17(6):663-669, 1993
[Catmull 74]
Catmull E., A subdivision algorithm for computer display of curved surfaces, PhD Thesis, University of Utah (USA), 1974
[Celada 92]
Celada F., Seiden P.E., A computer model of cellular interactions in the
immune system, Immunology Today 13(2):56-62, 1992
[Cerf 74]
Cerf V., Kahn R., A protocol for packet network interconnection, IEEE
Transactions on Communication 22:637-648, 1974
[Chadwick 89]
Chadwick J.E., Haumann D.R., Parent R.E., Layered construction
for deformable animated characters, Computer Graphics 23(3):243-252,
1989
[Chevaillier 99a] Chevaillier P., Harrouet F., DeLoor P., Application des réseaux de Petri
à la modélisation des systèmes multi-agents de contrôle, APII-JESA
33(4):413-437, 1999
[Chevaillier 99b] Chevaillier P., Harrouet F., Reignier P., Tisseau J., oRis : un environnement pour la simulation multi-agents des systèmes manufacturiers de
production, Proceedings MOSIN’99:225-230, 1999
[Chevaillier 00]
Chevaillier P., Harrouet F., Reignier P., Tisseau J., Virtual reality and
multi-agent systems for manufacturing system interactive prototyping,
International Journal of Design and Innovation Research 2(1):90-101,
2000
[Clark 76]
Clark J.H., Hierarchical geometric models for visible surface algorithm,
Communications of the ACM 19(10):547-554, 1976
[Cliff 93]
Cliff D., Harvey I., Husbands P., Explorations in evolutionary robotics,
Adaptive Behavior 2(1):73-110, 1993
[Cohen 85]
Cohen M.F., Greenberg D.P., The hemi-cube : a radiosity solution for
complex environment, Computer Graphics 19(3):26-35, 1985
[Cohen-Tannoudji 95] Cohen-Tannoudji G. (editor), Virtualité et réalité dans les sciences, Editions Frontières, Paris, 1995
[Cook 81]
Cook R.L., Torrance K.E., A reflectance model for computer graphics,
Computer Graphics 15(3):307-316, 1981
[Cosman 81]
Cosman M.A., Schumacker R.A., System strategies to optimize CIG image content, Proceedings Image II’81:463-480, 1981
[De Giacomo 00] De Giacomo G., Lespérance Y., Levesque H.J., ConGolog : a concurrent
programming language based on situation calculus, Artificial Intelligence
121(1-2):109-169, 2000
[Del Bimbo 95]
Del Bimbo A., Vicario E., Specification by example of virtual agents
behavior, IEEE Transactions on Visualization and Computer Graphics
1(4):350-360, 1995
[De Loor 00]
De Loor P., Chevaillier P., Generation of agent interactions from temporal logic specifications, Proceedings IMACS’00, 2000
[Demazeau 95]
Demazeau Y., From interactions to collective behaviour in agent-based
systems, Proceedings European Conference on Cognitive Science’95:117-132, 1995
[De Rosnay 75] De Rosnay J., Le macroscope : vers une vision globale, Editions du Seuil,
Paris, 1975
[D’Espagnat 94] D’Espagnat B., Le réel voilé, Fayard, Paris, 1994
[Dickerson 94]
Dickerson J.A., Kosko B., Virtual worlds as fuzzy cognitive maps, Presence 3(2):173-189, 1994
[Donikian 94]
Donikian S., Les modèles comportementaux pour la génération du mouvement d’objets dans une scène, Revue internationale de CFAO et
d’infographie 9(6):847-871, 1994
[Drogoul 93]
Drogoul A., De la simulation multi-agents à la résolution collective
de problèmes. Une étude de l’émergence de structures d’organisation
dans les systèmes multi-agents, Thèse de Doctorat, Université Paris 6
(France), 1993
[Duval 97]
Duval T., Morvan S., Reignier P., Harrouet F., Tisseau J., ARéVi : une
boîte à outils 3D pour des applications coopératives, Revue Calculateurs
Parallèles 9(2):239-250, 1997
[Ellis 91]
Ellis S.R., Nature and origin of virtual environments : a bibliographic
essay, Computing Systems in Engineering 2(4):321-347, 1991
[Endy 01]
Endy D., Brent R., Modelling cellular behaviour, Nature 409(18):391-395, 2001
[Favier 01]
Favier P.A., De Loor P., Tisseau J., Programming agent with purposes :
application to autonomous shooting in virtual environments, Lecture
Notes in Computer Science 2197:40-43, 2001
[Ferber 90]
Ferber J., Conception et programmation par objets, Hermès, Paris, 1990
[Ferber 95]
Ferber J., Les systèmes multi-agents : vers une intelligence collective,
InterEditions, Paris, 1995
[Ferber 97]
Ferber J., Les systèmes multi-agents : un aperçu général, Technique et
Science Informatique 16(8):979-1012, 1997
[Fikes 71]
Fikes R.E., Nilsson N., STRIPS : a new approach to the application of
theorem proving to problem solving, Artificial Intelligence 5(2):189-208,
1971
[Fisher 86]
Fisher S.S., MacGreevy M., Humphries J., Robinett W., Virtual environment display system, Proceedings Interactive 3D Graphics’86:77-87,
1986
[Fisher 94]
Fisher M., A survey of Concurrent Metatem : the language and its
applications, Lecture Notes in Artificial Intelligence 827:480-505, 1994
[Fuchs 80]
Fuchs H., Kedem Z., Naylor B., On visible surface generation by a priori
tree structures, Computer Graphics 14(3):124-133, 1980
[Fuchs 96]
Fuchs P., Les interfaces de la réalité virtuelle, AJIIMD, Presses de l’Ecole des Mines de Paris, 1996
[Fujimoto 98]
Fujimoto R.M., Time management in the High Level Architecture, Simulation 71(6):388-400, 1998
[Gasser 92]
Gasser L., Briot J.P., Object-based concurrent programming and distributed artificial intelligence, in [Avouris 92]:81-107, 1992
[Gaussier 98]
Gaussier P., Moga S., Quoy M., Banquet J.P., From perception-action
loops to imitation process : a bottom-up approach of learning by imitation, Applied Artificial Intelligence 12:701-727, 1998
[Georgeff 87]
Georgeff M.P., Lansky A.L., Reactive reasoning and planning, Proceedings AAAI’87:677-682, 1987
[Gerschel 95]
Gerschel A., Liaisons intermoléculaires, InterEditions/CNRS Editions,
Paris, 1995
[Gibson 84]
Gibson W., Neuromancer, Ace Books, New York, 1984
[Ginsberg 83]
Ginsberg C.M., Maxwell D., Graphical marionette, Proceedings ACM
SIGGRAPH/SIGART Workshop on Motion’83:172-179, 1983
[Girard 85]
Girard M., Maciejewski A.A., Computational modeling for the computer
animation of legged figures, Computer Graphics 19(3):39-51, 1985
[Göbel 93]
Göbel M., Neugebauer J., The virtual reality demonstration centre, Computer and Graphics 17(6):627-631, 1993
[Gouraud 71]
Gouraud H., Continuous shading of curved surface, IEEE Transactions
on Computers 20(6):623-629, 1971
[Gourret 89]
Gourret J.P., Magnenat-Thalmann N., Thalmann D., The use of finite
element theory for simulating object and human skin deformations and
contacts, Proceedings Eurographics’89:477-487, 1989
[Grand 98]
Grand S., Cliff D., Creatures : entertainment software agents with artificial
life, Autonomous Agents and Multi-Agent Systems 1(1):39-57, 1998
[Granger 86]
Granger G.G., Pour une épistémologie du travail scientifique, in
[Hamburger 86]:111-129, 1986
[Granger 95]
Granger G.G., Le probable, le possible et le virtuel, Editions Odile Jacob,
Paris, 1995
[Greenbaum 91] Greenbaum J., Kyng M. (editors), Design at work : cooperative design
of computer systems, Lawrence Erlbaum Associates, Hillsdale, 1991
[Guessoum 99]
Guessoum Z., Briot J.P., From active objects to autonomous agents,
IEEE Concurrency 7(3):68-76, 1999
[Guillot 00]
Guillot A., Meyer J.A., From SAB94 to SAB2000 : What’s New, Animat?, Proceedings From Animals to Animats’00 6:1-10, 2000
[Hamburger 86] Hamburger J. (editor), La philosophie des sciences aujourd’hui, Académie des Sciences, Gauthier-Villars, Paris, 1986
[Hamburger 95] Hamburger J., Concepts de réalité, in “Encyclopædia Universalis”
19:594-597, Encyclopædia Universalis, Paris, 1995
[Harrouet 00]
Harrouet F., oRis : s’immerger par le langage pour le prototypage
d’univers virtuels à base d’entités autonomes, Doctoral thesis, Université de Bretagne Occidentale, Brest (France), 2000
[Hayes-Roth 96] Hayes-Roth B., van Gent R., Story-making with improvisational puppets
and actors, Technical Report KSL-96-05, Knowledge Systems Laboratory, Stanford University, 1996
[Hemker 65]
Hemker H.C., Hemker P.W., Loeliger A., Kinetic aspects of the interaction of blood clotting enzymes, Thrombosis et diathesis haemorrhagica
13:155-175, 1965
[Herrmann 98]
Herrmann H.J., Luding S., Review article : Modeling granular media on
the computer, Continuum Mechanics and Thermodynamics 10(4):189-231, 1998
[Hertel 83]
Hertel S., Mehlhorn K., Fast triangulation of simple polygons, Lecture
Notes in Computer Science 158:207-218, 1983
[Hewitt 73]
Hewitt C., Bishop P., and Steiger R., A universal modular actor formalism for artificial intelligence, Proceedings IJCAI’73:235-245, 1973
[Hewitt 77]
Hewitt C., Viewing control structures as patterns of message passing,
Artificial Intelligence 8(3):323-364, 1977
[Hofstadter 79]
Hofstadter D., Gödel, Escher, Bach : an eternal golden braid (1979),
French translation: InterEditions, Paris, 1985
[Holloway 92]
Holloway R., Fuchs H., Robinett W., Virtual-worlds research at the University of North Carolina at Chapel Hill, Computer Graphics’92, 15:1-10,
1992
[Hopkins 96]
Hopkins J.C., Leipold R.J., On the dangers of adjusting the parameter
values of mechanism-based mathematical models, Journal of Theoretical
Biology 183:417-427, 1996
[IEEE 93]
IEEE 1278, Standard for Information Technology : protocols for Distributed Interactive Simulation applications, IEEE Computer Society
Press, 1993
[Iglesias 98]
Iglesias C.A., Garijo M., Gonzalez J.C., A survey of agent-oriented
methodologies, Proceedings ATAL’98:317-330, 1998
[Isaacs 87]
Isaacs P.M., Cohen M.F., Controlling dynamic simulation with kinematic constraints, behavior functions and inverse dynamics, Computer
Graphics 21(4):215-224, 1987
[Jamin 96]
Jamin C., Lydyard P.M., Le Corre R., Youinou P., Anti-CD5 extends
the proliferative response of human CD5+B cells activated with anti-IgM
and IL2, European Journal of Immunology 26:57-62, 1996
[Jones 71]
Jones C.B., A new approach to the hidden line problem, Computer Journal 14(3):232-237, 1971
[Jones 94]
Jones K.C., Mann K.G., A model for the Tissue Factor Pathway to
Thrombin: 2. A Mathematical Simulation, Journal of Biological Chemistry 269(37):23367-23373, 1994
[Kallmann 99]
Kallmann M., Thalmann D., A behavioral interface to simulate agent-object interactions in real time, Proceedings Computer Animation’99:138-146, 1999
[Klassen 87]
Klassen R., Modelling the effect of the atmosphere on light, ACM Transactions on Graphics 6(3):215-237, 1987
[Klesen 00]
Klesen M., Szatkowski J., Lehmann N., The black sheep : interactive
improvisation in a 3D virtual world, Proceedings I3’00:77-80, 2000
[Kodjabachian 98] Kodjabachian J., Meyer J.A., Evolution and development of neural controllers for locomotion, gradient-following, and obstacle-avoidance in artificial insects, IEEE Transactions on Neural Networks 9:796-812, 1998
[Kochanek 84]
Kochanek D.H., Bartels R.H., Interpolating splines with local tension,
continuity, and bias control, Computer Graphics 18(3):124-132, 1984
[Kolb 95]
Kolb C., Mitchell D., Hanrahan P., A realistic camera model for computer graphics, Computer Graphics 29(3):317-324, 1995
[Kosko 86]
Kosko B., Fuzzy Cognitive Maps, International Journal Man-Machine
Studies 24:65-75, 1986
[Kosko 92]
Kosko B., Neural networks and fuzzy systems : a dynamical systems
approach to machine intelligence, Prentice-Hall, Engelwood Cliffs, 1992
[Kripke 63]
Kripke S., Semantic analysis of modal logic, I : normal propositional
calculi, Zeitschrift für mathematische Logik und Grundlagen der Mathematik 9:67-96, 1963
[Krueger 77]
Krueger M.W., Responsive environments, Proceedings NCC’77:375-385,
1977
[Krueger 83]
Krueger M.W., Artificial Reality, Addison-Wesley, New York, 1983
[Langton 86]
Langton C.G., Studying artificial life with cellular automata, Physica
D22:120-149, 1986
[Lavery 98]
Lavery R., Simulation moléculaire, macromolécules biologiques et systèmes complexes, Actes Entretiens de la Physique 3:1-18, 1998
[Le Moigne 77]
Le Moigne J-L., La théorie du système général : théorie de la modélisation, Presses Universitaires de France, Paris, 1977
[Lestienne 95]
Lestienne F., Equilibration, in Encyclopædia Universalis 8:597-601,
Encyclopædia Universalis, Paris, 1995
[Lévy 95]
Lévy P., Qu’est-ce que le virtuel ?, Editions La Découverte, Paris, 1995
[Lévy-Leblond 96] Lévy-Leblond J-M., Aux contraires : l’exercice de la pensée et la pratique de la science, Gallimard, Paris, 1996
[Lienhardt 89]
Lienhardt P., Subdivisions of surfaces and generalized maps, Proceedings
Eurographics’89:439-452, 1989
[Luciani 85]
Luciani A., Un outil informatique de création d’images animées :
modèles d’objets, langage, contrôle gestuel en temps réel. Le système
ANIMA, Doctoral thesis, INPG Grenoble (France), 1985
[Luciani 00]
Luciani A., From granular avalanches to fluid turbulences through oozing pastes : a mesoscopic physically-based particle model, Proceedings
Graphicon’00 10:282-289, 2000
[Maes 95]
Maes P., Artificial life meets entertainment : interacting with lifelike
autonomous agents, Communications of the ACM 38(11):108-114, 1995
[Maffre 01]
Maffre E., Tisseau J., Parenthoën M., Virtual agents’ self-perception in
storytelling, Lecture Notes in Computer Science 2197:155-158, 2001
[Magnenat-Thalmann 88] Magnenat-Thalmann N., Laperriere R., Thalmann D., Joint-Dependent Local Deformations for Hand Animation and Object Grasping, Proceedings Graphics Interface’88:26-33, 1988
[Magnenat-Thalmann 91] Magnenat-Thalmann N., Thalmann D., Complex Models for
Animating Synthetic Actors, IEEE Computer Graphics and Applications
11(5):32-44, 1991
[Mendes 93]
Mendes P., GEPASI: a software package for modelling the dynamics,
steady states and control of biochemical and other systems, Computer
Applications in the Biosciences 9(5):563-571, 1993
[Meyer 91]
Meyer J.A., Guillot A., Simulation of adaptive behavior in animats :
review and prospect, Proceedings From Animals to Animats’91 1:2-14,
1991
[Meyer 94]
Meyer J.A., Guillot A., From SAB90 to SAB94 : four years of animat
research, Proceedings From Animals to Animats’94 3:2-11, 1994
[Miller 95]
Miller D.C., Thorpe J.A., SIMNET : the advent of simulator networking,
Proceedings of the IEEE 83(8):1114-1123, 1995
[Minsky 00]
Minsky M., The emotion machine : from pain to suffering, Proceedings
IUI’00:187-193, 2000
[Moore 88]
Moore M., Wilhelms J., Collision detection and response for computer
animation, Computer Graphics 22(4):289-298, 1988
[Morin 77]
Morin E., La méthode, Tome 1 : la nature de la nature, Editions du
Seuil, Paris, 1977
[Morin 86]
Morin E., La méthode, Tome 3 : la connaissance de la connaissance,
Editions du Seuil, Paris, 1986
[Morton-Firth 98] Morton-Firth C.J., Bray D., Predicting temporal fluctuations in an
intracellular signalling pathway, Journal of Theoretical Biology 192:117-128, 1998
[Mounts 97]
Mounts W.M., Liebman M.N., Qualitative modeling of normal blood coagulation and its pathological states using stochastic activity networks,
International Journal of Biological Macromolecules 20:265-281, 1997
[Nédélec 00]
Nédélec A., Reignier P., Rodin V., Collaborative prototyping in distributed virtual reality using an agent communication language, Proceedings IEEE SMC’00 2:1007-1012, 2000
[Noma 00]
Noma T., Zhao L., Badler N., Design of a Virtual Human Presenter,
Journal of Computer Graphics and Applications 20(4):79-85, 2000
[Noël 98]
Noël D., Virtuel possible, actuel probable (Combien sont les mo(n)des ?),
Proceedings GTRV’98:114-120, 1998
[Noser 95]
Noser H., Thalmann D., Synthetic vision and audition for digital actors,
Proceedings Eurographics’95:325-336, 1995.
[Nwana 96]
Nwana H.S., Software agents : an overview, Knowledge Engineering
Review 11(3):205-244, 1996
[Nwana 99]
Nwana H.S., Ndumu D.T., Lee L.C., Collis J.C., ZEUS : a toolkit for
building distributed multiagent systems, Applied Artificial Intelligence
13(1-2):129-185, 1999
[Omnès 95]
Omnès R., L’objet quantique et la réalité, in [Cohen-Tannoudji 95]:
139-168, 1995
[Parenthoën 01] Parenthoën M., Tisseau J., Reignier P., Dory F., Agent’s perception
and characters in virtual worlds : put Fuzzy Cognitive Maps to work,
Proceedings VRIC’01:11-18, 2001
[Parunak 99]
Parunak H.V.D., Industrial and practical applications of DAI, in
[Weiss 99]:377-421, 1999
[Perlin 95]
Perlin K., Goldberg A., Improv : a system for scripting interactive actors in virtual worlds, Computer Graphics 29(3):1-11, 1995
[Perrin 01]
Perrin J. (editor), Conception entre science et art : regards
multiples sur la conception, Presses Polytechniques et Universitaires Romandes, Lausanne, 2001
[Phong 75]
Phong B.T., Illumination for computer generated pictures, Communications of the ACM 18(8):311-317, 1975
[Pitteway 80]
Pitteway M., Watkinson D., Bresenham’s algorithm with grey scale,
Communications of the ACM 23(11):625-626, 1980
[Platt 81]
Platt S., Badler N., Animating facial expressions, Computer Graphics
15(3):245-252, 1981
[Pope 87]
Pope A.R., The SIMNET Network and Protocols, Report 6369, BBN
Systems and Technologies, 1987
[Quéau 93a]
Quéau Ph., Televirtuality: the merging of telecommunications and virtual reality, Computers and Graphics 17(6):691-693, 1993
[Quéau 93b]
Quéau Ph., Le virtuel, vertus et vertiges, Collection Milieux, Champ
vallon, Seyssel, 1993
[Quéau 95]
Quéau Ph., Le virtuel : un état du réel, in [Cohen-Tannoudji 95]:61-93, 1995
[Querrec 01a]
Querrec R., Reignier P., Chevaillier P., Humans and autonomous agents
interactions in a virtual environment for fire fighting training, Proceedings VRIC’01:57-64, 2001
[Querrec 01b]
Querrec R., Chevaillier P., Virtual storytelling for training : an application to fire fighting in industrial environment, Lecture Notes in Computer Science 2197:201-204, 2001
[Raab 79]
Raab F.H., Blood E.B., Steiner T.O., Jones R.J., Magnetic position
and orientation tracking system, IEEE Transaction on Aerospace and
Electronic Systems 15(5):709-718, 1979
[Reeves 83]
Reeves W.T., Particle systems : a technique for modelling a class of fuzzy
objects, Computer Graphics 17(3):359-376, 1983
[Renault 90]
Renault O., Magnenat-Thalmann N., Thalmann D., A vision-based approach to behavioural animation, Journal of Visualization and Computer
Animation 1(1):18-21, 1990
[Requicha 80]
Requicha A.A.G., Representations for rigid solids: theory, methods and
systems, ACM Computing Surveys 12(4):437-464, 1980
[Reignier 98]
Reignier P., Harrouet F., Morvan S., Tisseau J., Duval T., ARéVi : a
virtual reality multiagent platform, Lecture Notes in Artificial Intelligence 1434:229-240, 1998
[Reynolds 87]
Reynolds C.W., Flocks, Herds, and Schools: A Distributed Behavioral
Model, Computer Graphics 21(4):25-34, 1987
[Rickel 99]
Rickel J., Johnson W.L., Animated agents for procedural training in virtual reality : perception, cognition, and motor control, Applied Artificial
Intelligence 13:343-382, 1999
[Rodin 99]
Rodin V., Nédélec A., oRis : an agent communication language for
distributed virtual environment, Proceedings IEEE ROMAN’99:41-46,
1999
[Rodin 00]
Rodin V., Nédélec A., A multiagent architecture for distributed virtual
environments, Proceedings IEEE SMC’00 2:955-960, 2000
[Rosenblum 91] Rosenblum R.E., Carlson W.E., Tripp E., Simulating the structure and
dynamics of human hair : modelling, rendering and animation, Journal
of Visualization and Computer Animation 2(4):141-148, 1991
[Rumbaugh 99] Rumbaugh J., Jacobson I., Booch G., The Unified Modeling Language
Reference Manual, Addison-Wesley, New York, 1999
[Samet 84]
Samet H., The quadtree and related hierarchical data structures, ACM
Computing Surveys 16(2):187-260, 1984
[Schaff 97]
Schaff J., Fink C., Slepchenko B., Carson J., Loew L., A general computational framework for modeling cellular structure and function, Biophysical Journal 73:1135-1146, 1997
[Schuler 93]
Schuler D., Namioka A. (editors), Participatory Design : Principles
and Practices, Lawrence Erlbaum Associates, Hillsdale, 1993
[Schultz 96]
Schultz C., Grefenstette J.J., Adams W.L., RoboShepherd : learning a
complex behavior, Proceedings RoboLearn’96:105-113, 1996
[Schumacker 69] Schumacker R., Brand B., Gilliland M., Sharp W., Study for applying computer-generated images to visual simulation, Technical Report
AFHRL-TR-69-14, U.S. Air Force Human Resources Lab., 1969
[Sederberg 86]
Sederberg T.W., Parry S.R., Free-form deformation of solid geometric
models, Computer Graphics 20:151-160, 1986
[Sheng 92]
Sheng X., Hirsch B.E., Triangulation of trimmed surfaces in parametric
space, Computer Aided Design 24(8):437-444, 1992
[Sheridan 87]
Sheridan T.B., Teleoperation, telepresence, and telerobotics : research
needs, Proceedings Human Factors in Automated and Robotic Space
Systems’87:279-291, 1987
[Shoham 93]
Shoham Y., Agent-oriented programming, Artificial Intelligence 60(1):
51-92, 1993
[Sillion 89]
Sillion F., Puech C., A general two-pass method integrating specular and
diffuse reflection, Computer Graphics 23(3):335-344, 1989
[Sims 94]
Sims K., Evolving 3D morphology and behavior by competition, Artificial
Life 4:28-39, 1994
[Singh 94]
Singh G., Serra L., Ping W., Hern N., BrickNet: a software toolkit for
network-based virtual worlds, Presence 3(1):19-34, 1994
[Smith 80]
Smith R.G., The Contract Net protocol : high level communication and
control in a distributed problem solver, IEEE Transactions on Computers
29(12):1104-1113, 1980
[Smith 89]
Smith T.J., Smith K.U., The human factors of workstation telepresence, Proceedings Space Operations Automation and Robotics’89:235-250, 1989
[Smith 97]
Smith D.J., Forrest S., Ackley D.H., Perelson A.S., Using lazy evaluation
to simulate realistic-size repertories in models of the immune system,
Bulletin of Mathematical Biology 60:647-658, 1997
[Sorensen 99]
Sorensen E.N., Burgreen G.W., Wagner W.R., Antaki J.F., Computational simulation of platelet deposition and activation: I. Model development and properties, Annals of Biomedical Engineering 27(4):436-448,
1999
[Sowa 91]
Sowa J.F., Principles of semantic network : exploration in the representation of knowledge, Morgan Kaufmann Publisher Inc., San Mateo,
1991
[Steketee 85]
Steketee S.N., Badler N.I., Parametric keyframe interpolation incorporating kinetic adjustment and phrasing, Computer Graphics 19(3):255-262, 1985
[Stewart 89]
Stewart J., Varela F.J., Exploring the connectivity of the immune network, Immunology Review 110:37-61, 1989
[Sutherland 63] Sutherland I.E., Sketchpad, a man-machine graphical communication
system, Proceedings AFIPS’63:329-346, 1963
[Sutherland 68] Sutherland I.E., A head-mounted three-dimensional display, Proceedings
AFIPS’68, 33(1):757-764, 1968
[Sutherland 74] Sutherland I.E., Sproull R., Schumaker R., A characterization of ten
hidden-surface algorithms, Computing Surveys 6(1):1-55, 1974
[Tachi 89]
Tachi S., Arai H., Maeda T., Robotic tele-existence, Proceedings NASA
Space Telerobotics’89 3:171-180, 1989
[Terzopoulos 87] Terzopoulos D., Platt J., Barr A., Fleischer K., Elastically deformable
models, Computer Graphics 21(4):205-214, 1987
[Terzopoulos 88] Terzopoulos D., Fleischer K., Modeling inelastic deformation: viscoelasticity, plasticity, fracture, Computer Graphics 22(4):269-278, 1988
[Terzopoulos 94] Terzopoulos D., Tu X., Grzeszczuk R., Artificial fishes: autonomous
locomotion, perception, behavior, and learning in a simulated physical
world, Artificial Life 1(4):327-351, 1994
[Thalmann 96]
Thalmann D., A new generation of synthetic actors : the interactive
perceptive actors, Proceedings Pacific Graphics’96:200-219, 1996
[Thalmann 99]
Thalmann D., Noser H., Towards autonomous, perceptive and intelligent
virtual actors, Lecture Notes in Artificial Intelligence 1600:457-472, 1999
[Thomas 86]
Thomas S.N., Dispersive refraction in ray tracing, The Visual Computer
2(1):3-8, 1986
[Tisseau 98a]
Tisseau J., Chevaillier P., Harrouet F., Nédélec A., Des procédures aux
agents : application en oRis, Internal report of the LI2, ENIB, 1998
[Tisseau 98b]
Tisseau J., Reignier P., Harrouet F., Exploitation de modèles et Réalité
Virtuelle, Actes des 6èmes journées du groupe de travail Animation et
Simulation 6:13-22, 1998
[Tolman 48]
Tolman E.C., Cognitive maps in rats and men, Psychological Review
55(4):189-208, 1948
[Tomita 99]
Tomita M., Hashimoto K., Takahashi K., Shimizu T.S., Matsuzaki Y.,
Miyoshi F., Saito K., Tanida S., Yugi K., Venter J.C., Hutchison C.A.,
E-CELL: software environment for whole-cell simulation, Bioinformatics
15:72-84, 1999
[Turing 50]
Turing A., Computing machinery and intelligence, Mind 59:433-460,
1950
[Vacherand-Revel 01] Vacherand-Revel J., Tarpin-Bernard F., David B., Des modèles de
l’interaction à la conception participative des logiciels interactifs, dans
[Perrin 01]:239-256, 2001
[Vaughan 00]
Vaughan R., Sumpter N., Henderson J., Frost A., Cameron S., Experiments in automatic flock control, Journal of Robotics and Autonomous
Systems 31:109-117, 2000
[Vertut 85]
Vertut J., Coiffet P., Les robots, Tome 3 : téléopération, Hermès, Paris,
1985
[Von Neumann 66] Von Neumann J., Theory of self-reproducing automata, University of
Illinois Press, Chicago, 1966
[Weil 86]
Weil J., The synthesis of cloth objects, Computer Graphics 20(4):49-54,
1986
[Weiss 99]
Weiss G. (editor), Multiagent systems : a modern approach to distributed artificial intelligence, MIT Press, Cambridge, 1999
[Wenzel 88]
Wenzel E.M., Wightman F.L., Foster S.H., Development of a three dimensional auditory display system, Proceedings CHI’88:557-564, 1988
[Whitted 80]
Whitted T., An improved illumination model for shaded display, Communications of the ACM 23(6):343-349, 1980
[Williams 78]
Williams L., Casting curved shadows on curved surfaces, Computer
Graphics 12(3):270-274, 1978
[Wilson 85]
Wilson S.W., Knowledge growth in an artificial animal, Proceedings
Genetic Algorithms and their Applications’85:16-23, 1985
[Wolberg 90]
Wolberg G., Digital image warping, IEEE Computer Society Press, 1990
[Wolfram 83]
Wolfram S., Statistical mechanics of cellular automata, Review of Modern Physics 55(3):601-644, 1983
[Wooldridge 95] Wooldridge M., Jennings N.R., Intelligent agents : theory and practice,
The Knowledge Engineering Review 10(2):115-152, 1995
[Wooldridge 01] Wooldridge M., Ciancarini P., Agent-oriented software engineering : the
state of the art, to appear in Handbook of Software Engineering and
Knowledge Engineering, ISBN:981-02-4514-9, World Scientific Publishing Co., River Edge, 2001
[Zeltzer 82]
Zeltzer D., Motor control techniques for figure animation, IEEE Computer Graphics and Applications 2(9):53-59, 1982
[Zelter 92]
Zeltzer D., Autonomy, interaction and presence, Presence 1(1):127-132,
1992
[Zimmerman 87] Zimmerman T., Lanier J., Blanchard C., Bryson S., A hand gesture
interface device, Proceedings CHI’87:189-192, 1987
[Zinkiewicz 71]
Zinkiewicz O.C., The finite element method in engineering science, McGraw-Hill, New York, 1971