Real Time Animation and Illumination in Ancient Roman Sites
Nadia Magnenat-Thalmann, MIRALab, CUI, University of Geneva
Alessandro Enrico Foni, MIRALab, CUI, University of Geneva
Georgios Papagiannakis, MIRALab, CUI, University of Geneva
Nedjma Cadi-Yazli, MIRALab, CUI, University of Geneva
Abstract—The present article discusses and details the
methodological approaches and the reconstruction strategies
that have been employed to realize the 3D real-time virtual
simulations of the populated ancient sites of Aspendos and
Pompeii, respectively visualized using a virtual and an
augmented reality setup. More specifically, the two case
studies to which we refer concern the VR restitution of the
Roman theatre of Aspendos in Turkey, visualized as it was in
the 3rd century, and the on-site AR simulation of a digitally
restored Thermopolium situated at the archeological site of
Pompeii in Italy. In order to enhance both simulated 3D
environments, each case study includes real time animated
virtual humans re-enacting situations and activities that were
typically performed in such sites during ancient times.
Furthermore, the modeling and real time illumination
strategies implemented for both the textured static 3D models
of the sites and the simulated dynamic virtual humans are
presented and compared.
Keywords—Cultural Heritage, Illumination models, Mixed
Reality, Real time Animation, Virtual Humans, Virtual
Reality
1. INTRODUCTION
The activities that participate in the protection of our
common cultural heritage are not confined exclusively to
the physical preservation of ancient sites against the
threatening effects exerted by external factors and agents,
such as weather, climatic changes or the effects of the
passing of time. We believe that both the dissemination of
pertinent information that might promote a better
understanding of the contextualized social role that such
places occupied in ancient societies, and the scientific
assistance that 3D visualization technologies might offer to
further the comprehension of the historical hypotheses
related to such sites, are essential contributors to the
protection of such heritage. Consequently, in order to
achieve a coherent and historically meaningful virtual
visualization of any heritage site, it is crucial not only to
prepare a scientifically correct representation of its
architectural qualities, but equally essential to present the
relations that existed between the physical site, its social
function and the people that lived there at the time in which
it was still in use.
In this respect, it is hence essential to extend the
traditionally conceived virtual heritage experiences with
the inclusion of historically plausible virtual humans
reenacting activities typically performed in the time-dependent
context of the restituted site, thus adding a
human dimension to otherwise sterile simulations devoid of
any social dimension. In this presentation we will illustrate
two case studies featuring the inhabited real time restitution
of two heritage sites, visualized as they were during the
Roman period, and exhibiting the inclusion of contextualized
storytelling scenarios in order to further the immersion of
the user and promote a better understanding of the social
function that such structures occupied in their
contemporary societies. The two case studies that will be
employed to illustrate our methodology are based on the
results of the EU funded projects ERATO [1] and
LIFEPLUS [2]. Since the ERATO project features a full
VR simulation and LIFEPLUS an AR setup for an on-site
experience, we demonstrate two different technological
implementations of the same common methodological
approach, which constitutes both the backbone of our virtual
heritage application creation pipeline and the main topic of
this presentation.
1.1 Overview
After a brief introduction on the general subject of the
article and a quick overview of the previous work related to
the specific topic covered by our presentation, we will
detail the data gathering process that has been carried out in
the first stages of the development of our presented case
studies (section 2). Thus, we will present the collected
material and the historical sources that were employed to
assist the 3D modeling, mesh creation and texturing of all
the elements constituting the restituted 3D models of the
considered sites (section 3) and of the virtual humans
inhabiting such spaces (section 4). In section 5, we
illustrate the techniques associated with the preparation of
the interactive elements that participate in the real time
reenactment of the activities performed by the animated
virtual humans, such as body motion, facial animation,
sound, music and scenario based interaction. The details
regarding the rendering and lighting strategies implemented
in the real time simulations are introduced in section 6,
while the specific implementation of the prepared elements
in a VR or AR setup is discussed in section 7, where the
specificities, advantages and drawbacks of either approach
are presented. Finally, in section 8, we illustrate the results
of our VR-AR simulations accompanied by some final
considerations on the performances of both setups and on
the future work that might be required to further enhance
such experiences (section 9).
1.2 Previous work
Virtual and Augmented Reality-based heritage
experiences shall ideally give the visitor the opportunity to
feel that they are present at significant places and times in
the past, stimulating a variety of senses in order to allow
the experiencing of what it would have felt like to be there.
However, a review of the range of projects described as
Virtual Heritage [3] [4] [5] [6] [7] shows numerous
examples of virtual reconstructions, virtual museums and
immersive virtual reality systems built as reconstructions of
historic sites, but with limited inclusion of virtual life.
Engaging characters that are needed in an interactive
experience are now slowly coming into focus with recent
EU funded IST projects [8]. The main reasons for their slow
adoption are a) the inability of current VR-AR rendering
technology to deliver realistic, entertaining, interactive and
engaging synthetic characters, b) the lack of interesting
interaction paradigms for character-based installations, and
c) the lack of effective frameworks for multidisciplinary
synergies and common understanding between historians,
architects, archaeologists and computer scientists. In this
article we attempt to address such issues through the
presentation of a complete methodology that has been
applied to recreate 3D interactive real time virtual and
augmented reality simulations with the inclusion of realistic
animated virtual human actors exhibiting personality,
emotion, body and speech simulations, and re-enacting
staged storytelling scenarios. Although initially targeted at
Cultural Heritage Sites, the illustrated methodology is by
no means exclusively limited to such subjects, but can be
extended to encompass other types of applications. Hence,
we stress that, with the assistance of a modern real time VR-AR framework suited for the simulation and visualization
of interactive animated and compelling virtual characters,
such virtual experiences can be significantly enhanced and
that the presented content might greatly benefit from the
inclusion of a dramatic tension achieved through the
implementation of a scenario based interaction. The
abandonment of the traditional concepts usually associated
with the simulation of static virtual worlds, which mainly
feature limited interaction and simplified event
representations, for the adoption of an interactive
historically coherent character-based event representation is
the prime subject and main concern of this presentation.
2. HISTORICAL AND ARCHEOLOGICAL SOURCES
In order to achieve a scientifically valid virtual
restitution of a given heritage site, one of the most critical
aspects of the preliminary data collection phase is the
gathering of pertinent and reliable historical sources, both
concerning the studied edifices and the social aspects
structuring the daily life of the people that lived at such
times. Since in archeology-related virtual reconstruction
projects the amount of available architectural data is
generally limited, due to the fact that parts of the concerned
structures are often now missing, it is necessary to
formulate restitution hypotheses in order to constitute a
coherent simulation of the visual qualities of the sites
targeted by the visualization attempts. Consequently, a
close collaboration with external advisors providing
pertinent expertise, combined with the use of modern 3D
visualization tools and technologies as a means to test,
study and validate specific structural hypotheses, is an
essential requirement to achieve a plausible representation
of the appearance that the simulated sites might have had at
a specific time in the
past. In this section we will briefly introduce some
historical data concerning the two sites constituting our
case studies and we will present the different sources that
were employed as a base for the realization of our VR and
AR real time simulations.
2.1 The Theatre of Aspendos
The theatre of Aspendos (Fig. 1) is the best preserved
Roman theatre in Asia Minor, and it is estimated that it was
built around 161-180 A.D., during the reign of emperor
Marcus Aurelius. Certain remains in the theatre of
Aspendos and oral sources imply that the theatre was used
as a caravanserai (merchants' inn) after 1078, when
Pamphylia was claimed by the Seljuk Turks. In 1392 the
region came under Ottoman rule and, with the declaration
of the Republic of Turkey in 1923, Aspendos took its place
within the borders of the Turkish province of Antalya.
Fig. 1. The Aspendos Theatre in Antalya.
It was only after the rediscovery of the theatre by
European archaeologists in the 19th century, that an
architectural evaluation of the structure became an issue
and that the building caught the attention of scientists such
as Lanckoronski [9] and Texier [10].
2.2 Lucius Placidus Thermopolium in Pompeii
The ancient city of Pompeii, which is located in western
Italy in the Campania region near Naples, is one of the
world's best known Roman archaeological sites. Its
notoriety comes both from the tales related to its
spectacular and terrible destruction and from the
extraordinary conservation state of its remains, both
consequences of Mount Vesuvius' devastating eruption of
the 24th of August 79 AD. The Thermopolium building
that has been chosen for the on-site AR simulation
experiment in the frame of the LIFEPLUS project (Fig. 2)
is one of the best preserved and most decorated taverns
still existing at the Pompeii site.
Fig. 2. Lucius Vetutius Placidus Thermopolium in Pompeii.
According to the inscriptions painted on the front of the
building and the tituli picti of the amphorae found in the
garden, it has been established that the name of the
shopkeeper of the Thermopolium, and also owner of the
adjacent communicating house, was a certain Lucius
Vetutius Placidus.
2.3 Sources employed to restitute the sites
In order to restitute an accurate representation of the
studied sites in their historically time-dependent context,
several sources were used to assist and support the virtual
reconstruction effort. The employed base material extends
from architectural plans and sections used as reference for
the 3D reconstruction of the edifices, to detailed
topographical and geological data employed to represent at
a larger scale the environment in which the sites are
located, to hand drawn archeological restitution studies
(samples shown in Fig. 3) used to assist the formulation of
solid hypotheses regarding the visualization of the missing
structural elements that were present in the past and are
now lost. However, in the case of large sites, the necessity
to widen and cross-check the gathered paper-based
architectural and archeological data arises from the fact that
such documentation generally exhibits some precision-related
issues.
Fig. 3. Architectural plan (left) and hand drawn restitution (right) of the
Aspendos Theatre Building [9].
Thus, on-site surveys of the studied sites have been
conducted to collect complementary measurements and
high-definition digital video and photographic data to be
used as reference and support for the 3D modeling phase
and as a base for the extraction and digital restoration of the
diffuse textures to be applied to the modeled surfaces of the
restituted buildings. Finally, where no pertinent data
whatsoever could be found on specific missing structural
elements, after having sought external assessment and
archeological advice, comparable elements still existing on
similar sites have been used as main references to complete
our 3D virtual restitutions.
2.4 Sources used to create historically coherent virtual
humans
To achieve a convincing virtual representation of the
visual appearance and social behaviors that people were
exhibiting during a past historical period, it is necessary to
consider the different societal and functional aspects that
were defining their daily life. Such elements may range
from the identification of specific codified social conducts,
rules, or particular dress codes, such as the ones that could
be related to their clothing or hairstyling culture and
practices (samples shown in Fig. 4), to the creation of
virtual models and animated scenarios that correctly
represent the daily habits that were typical of the studied
site at a given historical period. Hence, a highly
interdisciplinary approach has been deployed in order to
gather all the significant data required to fully cover the
extent of the information necessary to achieve a
historically coherent 3D restitution of the virtually
inhabited heritage sites targeted by our case studies.
Fig. 4. Roman hairstyles and outfits of the 2nd century.
Thus, to assist the data collection process, any accessible
pertinent written or visual data available in literature,
published studies or public museums, such as drawings,
sketches, frescoes, engravings, and iconographic or
photographic material, has been consulted wherever
possible, and further complementary external historical,
archeological and sociological expertise has equally been
sought.
3. 3D GEOMETRICAL RESTITUTION OF THE SITES
The main technical challenge that the presented case
studies propose is to achieve a historically plausible and a
visually pleasing representation of the appearance of the
concerned sites in a 3D interactive real time environment,
whilst maintaining a sufficient degree of geometrical
precision to ensure a faithful restitution of the architectural
qualities of the studied heritage items. Since the amount of
geometrical detail, and thus the total number of polygons
that can be used to render the geometrical complexity of a
given model, ought to take into account the hardware
limitations of the hosting platforms and the inherent
constraints of any real time application, a careful
preparation and optimization of the modeled 3D meshes is
necessary in order to ensure real time animation and
interaction capabilities in the final simulation.
Consequently, it is of prime importance to find a suitable
compromise in the trade-off that exists between precision,
performance and visual impact of the simulations, in order
to guarantee an optimal balance among all the elements that
participate in the virtual restitutions. Hence, memory
overload issues or performance drops, due to an excessive
weight of either the loaded files or the visualized models,
need to be considered during the modeling phases of both
the static buildings and the virtual humans, as well as
during the texturing, illumination and animation stages. In
this section we will thus illustrate such concerns through
the presentation of the strategies that have been used in our
case studies. More specifically, we will exemplify
our design choices with the presentation of a few examples
pertaining to the Aspendos theatre simulation. Although the
implemented methodology extends to both case studies, the
Aspendos heritage site better illustrates our approach since
it presents an extreme geometrical complexity and a
notable amount of architectural detail that requires a more
pronounced application of such strategies.
3.1 3D model of the external environments
In order to model the Aspendos site environment,
namely the hill upon which the Theatre is built and its
immediate surroundings, accurate topographic data was
provided in the form of a 2D elevation map of the area
featuring elevation lines every 1 meter. Since such raw data
alone does not allow a straightforward creation of an
accurate textured 3D mesh of the terrain, the following
methodology has been implemented in order to produce the
final 3D model of the virtual environment. In a first stage
of the pipeline, the elevation lines present on the
topographic map of the area have been isolated from the
other structures, as for instance roads and buildings, and
have been imported as a sole block from the provided CAD
file into the 3D Studio MAX software package. Then, the
imported splines have been separated, regrouped and
placed in the 3D space by their height relative to the lowest
elevation line. The repositioned splines were then used to
build a high polygon 3D mesh (80,000 polygons), from
which a grayscale elevation map has been extracted and
used to procedurally generate a multilayered diffuse texture
with the Terragen terrain modeling software [33], by means
of distribution parameters such as the relative height of the
polygons, their orientation and their slope. In order to
meet the needs of a real time simulation, a simplified
version of the 3D model of the terrain (4,500 polygons) has
also been prepared, through automatic optimization and
manual refining, to be mapped with the previously prepared
procedural texture, which incorporates most of the
geometrical detail of the high polygon mesh (Fig. 5 top).
Fig. 5. Preparation of the procedurally textured low polygon mesh of the
Aspendos terrain (top), illumination baked textures of the site (bottom).
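The contour-to-heightmap step described above can be sketched as follows; this is a minimal illustration using numpy/scipy, in which the grid resolution, data layout and normalization are our assumptions rather than the project's actual tooling:

```python
import numpy as np
from scipy.interpolate import griddata

def contours_to_heightmap(contour_points, size=1024):
    """Rasterize (x, y, z) samples taken along the repositioned
    elevation splines into an 8-bit grayscale heightmap."""
    pts = np.asarray(contour_points, dtype=float)
    xi = np.linspace(pts[:, 0].min(), pts[:, 0].max(), size)
    yi = np.linspace(pts[:, 1].min(), pts[:, 1].max(), size)
    gx, gy = np.meshgrid(xi, yi)
    # Interpolate heights between contour lines; fill the area
    # outside the convex hull with the nearest sample.
    z = griddata(pts[:, :2], pts[:, 2], (gx, gy), method="linear")
    z_near = griddata(pts[:, :2], pts[:, 2], (gx, gy), method="nearest")
    z = np.where(np.isnan(z), z_near, z)
    # Normalize to 0..255 so the map can be fed to a terrain
    # texturing tool such as Terragen as a grayscale image.
    rng = z.max() - z.min()
    return ((z - z.min()) / (rng if rng else 1.0) * 255).astype(np.uint8)
```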
To enhance the visual quality of the 3D surface of the
terrain, in a later stage of the project the procedural texture
applied to the model has been replaced with a texture map
representing the current appearance of the modern site.
Such texture has been created using a combination of
several satellite images that have been manually edited,
color corrected and shaded using an occlusion map and a
rendered shadow pass of the direct light in the 3D scene
(i.e. the hard shadow cast by the virtual sun). Finally, the
illumination baked texture representing the modern
environment has been modified and edited to remove all
the modern elements, such as the parking lot, to produce
the Roman version of the texture to be used in the final
simulation (Fig. 5 bottom).
To further increase the visual impact of the restituted
environment, trees and vegetation have been added through
the implementation of simple billboard techniques, as
illustrated in Fig. 6. Such billboards,
representing a total of more than 30 different types of trees,
have been prepared using as starting point real images of
trees and bushes that have been edited to isolate their shape
from their original background, and to add an alpha
channel to allow their visualization with transparency
effects.
Fig. 6. Final model of the Aspendos environment with the inclusion of
billboards to render the trees.
However, the choice of the types of trees to include in the
simulation and their placement is only an extrapolation
based on the direct observation of the modern site, since
unfortunately no historical data on the vegetation present in
Roman times was available.
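A minimal sketch of the billboard texture preparation described above is given below, assuming a hand-painted cutout mask per tree photograph; the PIL-based handling is illustrative, not the tool chain actually used:

```python
from PIL import Image

def make_billboard(photo_path, mask_path, out_path):
    """Combine a tree photograph with a cutout mask (white = tree,
    black = background) into an RGBA billboard texture."""
    rgb = Image.open(photo_path).convert("RGB")
    alpha = Image.open(mask_path).convert("L").resize(rgb.size)
    rgb.putalpha(alpha)  # background pixels become fully transparent
    rgb.save(out_path)   # e.g. a PNG later mapped onto a camera-facing quad
```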
3.2 Modeling and texturing of the buildings
For the realization of the 3D model of the Aspendos site,
polygonal modeling techniques were preferred. Critical
parts of the model, such as the Cavea, were given special
attention due to their extremely complex geometrical
features, in order to keep an acceptable trade-off between
visual accuracy and the performance of the real time
simulation. Moreover, wherever possible, the model's
surfaces were defined as single sided objects in order not to
burden the final meshes with an excessive number of
hidden polygons.
Furthermore, in order not to compromise the speed of the
simulation, due to an overload caused by an excessively
dense mesh, all the carved decorations of the stage building
are rendered with the use of textures, since their
geometrical features are negligible compared to the scale of
the building and the scope of the simulation (samples
shown in Fig. 7).
Fig. 7. Detail of the 3D Aspendos scene building with and without
textures.
In order to create the texture tiles to be mapped onto the
3D geometry of the restituted theatre, a library of the main
diffuse material textures identified on the site has been
built, using as a base the high resolution digital photographs
taken on-site: 150 different color corrected textures that can
be tiled seamlessly on the objects of the 3D scene have
therefore been prepared and assigned to the model's
surfaces. During the color correction phase, the diffuse tiles
have undergone several modifications to adjust their
saturation, contrast and hue to similar levels in order to
prevent visual discontinuities between neighboring textured
surfaces. Furthermore, with the application of selective
color suppression methods, the color cast caused by the
varying illumination present during the data gathering
process has been removed and neutralized to allow a
consistent relighting of the virtual surfaces of the real time
animated model using colored lightmaps. However, unlike
the process applied in [8], virtual restoration was not
considered for the Aspendos site: the edifice in fact features
few, if any, colored decorative motifs, and its surface
materials generally consist of stone pattern variations.
3.3 Reconstruction hypothesis
In order to visualize the studied sites in their fully
restituted state, it is necessary to carefully consider the
possible reconstruction hypotheses concerning the elements
that were present in the past and that are now no longer
existing or available. In the case of the Aspendos site, for
instance, elements such as the Velum covering the theatre,
the roofing of the secondary buildings, as well as a
significant difference in the level of the ground, since part
of the building is still buried, had to be pondered before
proceeding to the constitution of the final 3D model of the
site. The preparation of the 3D model of the Velum
covering the theatre thus required special attention during
the modeling phase: in fact, two contradicting sources were
available (as illustrated by Fig. 8), namely the restitution
plans by Izenour [11] and the ones by de Bernardi [12].
Hence, to define a plausible virtual restitution, the
hypothesis presented by Izenour was kept as reference
because his assumption, based upon a parallel with the
Velum of the Coliseum in Rome, succeeds in explaining all
the rope attachment points still existing on the exterior of
the theatre. After having sought external archeological and
historical expertise, a modified version of Izenour's
interpretation has finally been implemented in order to
address a further issue concerning the positioning of the
lateral sections of the Velum: the original hypothesis would
in fact have implied a practical impossibility to operate the
moving ropes used to open and close such sections from
the roof of the secondary buildings.
Fig. 8. Aspendos Velum plan by Izenour (center left) and by de Bernardi
(center right).
Moreover, the formulation of a suitable restitution
hypothesis concerning the roof of the side structures of the
theatre, arising from a height discrepancy in the available
restitution studies between such structures and the adjacent
Cavea roof, has equally been addressed with the creation of
a test set of more than 12 different 3D virtual
reconstructions illustrating the more plausible and viable
hypotheses.
Fig. 9. Samples from the Aspendos roof testing set: lower scene roof and
steep height change between lateral sections (left), higher scene roof and
progressive height change in the roof of the lateral buildings (right).
Through direct experimentation and inspection of the 3D
models, a subset of propositions deemed the most likely
from the structural and practical point of view has been
compared to restrict the final choice to only a few
possibilities (samples from the test set shown in Fig. 9).
Subsequently, the chosen propositions have been presented
to external advisors, and a final version of the 3D
restitution of this structural element of the theatre has thus
been implemented.
3.4 Illumination strategies
To further enhance the visual impact of the real time
simulation of the concerned sites, the addition of diffuse
and hard shadows cast by the sun plays a central role. At an
earlier stage of the virtual restitutions a full texture baking
approach of a pre-computed radiosity solution has therefore
been used. This approach, however, implies the creation of
one texture per object: the bigger the object, the higher the
resolution of the generated texture has to be in order to
avoid visual discontinuities; hence the risk of overloading
the video memory, either with excessively high resolution
textures or with too many generated textures. To overcome
these restrictions, a light-map and diffuse-map real time
multi-texturing approach has been adopted: first, the diffuse
textures can be tiled, and thus their resolution becomes
independent from the size of the object upon which they
are mapped; secondly, the light maps can be downsized to
very low resolutions without compromising the overall
visual impact of the simulation, therefore eliminating the
video memory overload issue. In order to visually simulate a more
convincing virtual illumination of the Aspendos site, the
use of High Dynamic Range Image Based Lighting has
been implemented. However, since the Aspendos virtual
model had to be simulated under specific lighting
conditions and at specific dates and hours of the day, the
use of virtually generated light probes, allowing an
arbitrary positioning of the direct sunlight, by means of
parameters such as location, date and time, has been
implemented to compute a global illumination solution for
the creation of the lightmaps to be used for the real time
simulation (Fig. 10). Such approach allows, in fact, the
dissociation of the lighting information from the material
textures, and consequently allows for an overall better
visual quality of the real time rendered surfaces and a
reduction of the total weight of all the textures loaded into
memory to perform the simulation (Results shown in Fig.
11).
Fig. 10. Aspendos theatre rendered using a virtual light probe as input for
the global illumination computation.
Fig. 11. Real time simulated model of the Aspendos theatre illuminated
using a light map-diffuse map approach.
In order to further optimize the performance of the real
time application, all the lightmaps have also been
down-sampled to a size of 256x256 pixels and blurred to remove
possible compression artifacts due to their small size in
comparison to the spatial extent of the surfaces to which
they are assigned.
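A sketch of this lightmap post-processing step is shown below, assuming a simple PIL batch tool; the 256x256 target size comes from the text above, while the blur radius is an illustrative value:

```python
from PIL import Image, ImageFilter

def prepare_lightmap(path, out_path, size=(256, 256), blur_radius=1.5):
    """Down-sample a baked lightmap and blur it slightly to hide
    the artifacts caused by its small size relative to the surface."""
    lm = Image.open(path).convert("RGB")
    lm = lm.resize(size, Image.LANCZOS)
    lm = lm.filter(ImageFilter.GaussianBlur(blur_radius))
    lm.save(out_path)
```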
4. 3D VIRTUAL ACTORS
While attempting a comprehensive virtual restitution of
an archaeological or historical site, the range of relevant
dimensions that ought to be considered should not be
exclusively limited to the representation of its mere
physical structure, but shall extend as well to social and
local historical aspects. Thus, the addition of historically
consistent virtual humans into virtual reconstructions both
allows for a better understanding of the use of the
architectural structures present on the site, and permits the
creation of more realistic simulations with the inclusion of
specific ambiences, atmospheres or dramatic elements.
Hence, the choice to stage a real time virtual reenactment
of a Roman play in the simulated Aspendos Theatre, and a
daily life scenario to be performed at the augmented
Thermopolium in Pompeii, has been made to enhance our
simulations with the inclusion of such dimensions.
having gathered all the cultural and historical information
and sources, the preparation of the 3D virtual actors, with
their corresponding musical instruments, masks, garments
and props, has been carried out according to the specific
restrictions inherent to any real time simulation. The
following section gives an overview of the different phases
that were necessary to complete the modeling and
animation of virtual human models that have been
implemented into our virtual and augmented reality case
studies.
4.1 Geometrical modeling of the actor’s 3D meshes
In order to create the animated virtual humans, the
definition of the 3D meshes and the design of the skin
surfaces of their body models have been conducted
employing automatic generation methods, using real world
measurement data, coupled with manual editing and
refining. The main approach that was implemented in our
case studies utilizes an in-house system based on examples
[13]. The benefits of such a method are threefold: first, the
resolutions of the generated models are adapted for real
time animation purposes; second, different bodies can be
generated easily and rapidly by modifying the base parameters
of the template models (basic setup presented in Fig. 12);
third, since the implemented system uses real world data as
reference by conforming a template model onto a scanned
model, the generated meshes visually provide realistic
results.
Fig. 12. Skeleton hierarchy (a), template model (b), skinning setup (c).
The main assumption underlying such a methodology is
that any body geometry can be either obtained or
approximated by deforming a template model. In order to
carry out the necessary fitting process, a number of existing
methods, such as [14], could be used effectively. However,
the approach adopted in the present case is feature based,
as presented in [13], where a set of pre-selected landmarks
and feature points is used to measure the fitting accuracy
and guide the automatic conformation. The implemented
algorithm that drives the process presents two main phases:
the skeleton fitting and the fine refinement (depicted in Fig.
13). The first stage linearly approximates the posture and
proportions of the scanned body model by conforming the
template mesh to the scanned data through skeleton-driven
deformation and, employing feature points, defines the
joint parameters that minimize the distance between
corresponding feature locations. The fine refinement phase
then iteratively improves the fitting accuracy by minimizing
the shape difference between the template and the scanned
data.
Fig. 13. Skeleton fitting and fine skin refinement.
Since the implemented system employs a template body
which is based on a 3D model that has been specifically
designed for real time animation purposes, the final model
inherits and encapsulates all the required structural
components, such as bone and skin attachment data, thus
allowing the generated virtual human to be animated at any
time during the process, through skin attachment
recalculation, and to exhibit real time animation ready
capabilities.
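For illustration only, the linear skeleton-fitting stage can be approximated as a least-squares similarity transform over corresponding landmarks (a standard Umeyama/Kabsch solution); the actual system [13] conforms the full template through skeleton-driven deformation rather than a single rigid transform:

```python
import numpy as np

def fit_landmarks(template_pts, scan_pts):
    """Similarity transform (scale, rot, trans) minimizing the distance
    between template landmarks and the corresponding scanned feature
    points, so that scan ~= scale * rot @ template + trans."""
    t_mean, s_mean = template_pts.mean(0), scan_pts.mean(0)
    tc, sc = template_pts - t_mean, scan_pts - s_mean
    u, sv, vt = np.linalg.svd(tc.T @ sc)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    flip = np.array([1.0, 1.0, d])
    rot = vt.T @ np.diag(flip) @ u.T
    scale = (sv * flip).sum() / (tc ** 2).sum()
    trans = s_mean - scale * rot @ t_mean
    return scale, rot, trans
```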
4.2 Creation of the 3D garments
To re-create the visual appearance exhibited by ancient
clothes, source data such as frescoes, mosaics and statues,
coupled with written descriptions and complementary
historical information concerning the garments, fabrics,
colors and accessories, has been used as a base to dress the
virtual humans (Fig. 14). The virtual garment assembly is
executed using an in-house platform that employs general
mechanical and collision detection schemes to allow the
rapid 3D virtual prototyping of multiple interacting
garments [15][16].
Fig. 14. Creation of virtual garment.
The implemented system integrates the design of 2D
patterns, the physical simulation of fabrics around animated
virtual human bodies, and features comfortability
evaluation and real time animation and visualization
capabilities of the simulated results. The collected data,
such as the patterns of different cloths and the gathered
historical visual references and descriptions, have been
used to restitute the different types of outfits that have been
applied to the simulated virtual humans. The creation of a
3D geometrical representation of an entire garment suited
for real time animation purposes requires several stages.
First, the creation of the basic patterns constituting the
main elements of the garment is carried out in 2D space,
employing as modeling reference the silhouette of the
target 3D body. Second, the obtained patterns are placed
around the target virtual body and seamed together in 3D space.
Fig. 15. Cloth simulation of a Roman tunic (left) and Chiton (right).
Third, once the texturing and garment physical
properties definition phases are complete, the interactive
fitting process of the virtual garment is started, employing a
mechanical simulation featuring advanced collision
detection and reparation capabilities [16] (Fig. 15), which
forces the modeled surfaces to approach along the seam
lines and to deform according to the 3D shape of the
underlying body. The final position displayed by the
dressed virtual human after the completion of the
simulation process then constitutes a suitable starting point
for applying any animation file, such as motion capture
data, in order to calculate the animation key-frames for the
simulated cloth.
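The mechanical simulation of [16] is considerably more advanced, but the principle of pattern surfaces deforming under internal forces can be conveyed by a toy mass-spring integration step; everything below is an illustrative simplification, not the platform's solver:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])

def cloth_step(pos, vel, springs, rest_len, dt=0.005, k=80.0, damping=0.02):
    """One explicit step for a toy mass-spring cloth. pos/vel: (n,3)
    particle states; springs: (m,2) index pairs along pattern edges and
    seams; rest_len: (m,) rest lengths from the 2D pattern geometry."""
    force = np.tile(GRAVITY, (len(pos), 1))
    d = pos[springs[:, 1]] - pos[springs[:, 0]]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke force pulls stretched edges (and seam lines) back together.
    f = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, springs[:, 0], f)
    np.add.at(force, springs[:, 1], -f)
    vel = (vel + dt * force) * (1.0 - damping)
    return pos + dt * vel, vel
```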
4.3 Virtual Human’s accessories
Since the activities reenacted by the virtual humans in
the selected case studies required the inclusion of various
auxiliary objects in the virtual environments, 3D models of
such accessories have been prepared. Among the prepared
additional 3D objects, we may cite the ones that are
manipulated by the virtual characters, as for instance the
amphorae in the Thermopolium simulation, the musical
instruments in the Aspendos simulation, or some elements
of the outfits, such as the masks of the virtual actors
performing in the virtual Aspendos restitution.
Fig. 16. Roman theatre actor’s outfits [17].
Thus, historical data, such as engravings and frescoes, as
well as iconographic and photographic material (Fig. 16)
such as [17], has been used to support the completion of the
3D polygonal models of the considered accessories. While
the modeling of the basic accessories required in-depth
research to define a plausible representation of their
restituted appearance, real-world references were employed
to create the 3D meshes of the ancient musical instruments;
these reference instruments were built in the frame of an
experimental archeology experiment carried out by the
musicology department of YTU.
Fig. 17. 3D Roman Musicians with accessories.
The finished 3D models of the instruments, masks and
accessories (Fig. 17), have then been linked to the bodies of
the 3D virtual humans to allow their manipulation and
animation during the real time simulation. These objects
have been connected to the skeleton of the virtual humans,
and their position and orientation have been adjusted
according to their movements. To ensure a maximum
accuracy in the postures and animations reproduced by the
virtual humans while interacting with such accessories,
world scale representations of such elements, featuring
comparable weight and size, were used during the motion
capture sessions to mimic the effect they might exert on the
recorded actions.
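Conceptually, linking a prop to the skeleton amounts to composing a constant local offset with the animated joint's world matrix at every frame; the sketch below illustrates this, with the joint naming and matrix layout being assumptions rather than the actual engine API:

```python
import numpy as np

def prop_world_matrix(joint_world, local_offset):
    """World matrix of an accessory rigidly linked to a joint.
    joint_world: 4x4 world matrix of, e.g., the right hand joint;
    local_offset: constant 4x4 matrix tuned once so that the flute,
    mask or amphora sits correctly in the captured postures."""
    return joint_world @ local_offset

# Each frame the prop simply follows the animated joint, e.g.:
# flute.world = prop_world_matrix(skeleton["r_hand"].world, flute_offset)
```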
5. 3D INTERACTIVE REAL TIME VIRTUAL
VISUALIZATION OF ANCIENT INHABITED PLACES
The case studies described in this paper present the
inclusion of real time animated virtual humans re-enacting
situations and activities that were typically performed in
Roman sites during ancient times. The simulated scenes are
composed of different elements, both visual and non-visual,
such as the 3D models of the actors, the 3D environment,
sounds, music, voices and behavioral
rules. In order to assist the creation of such complex
scenarios, which involve different actions and require a
precise control of the simulation events, a scripting
language has been implemented to handle and drive the
interactions between all the simulated elements. Hence, for
each particular scenario, a dedicated system configuration
data file specifying the necessary system parameters, the
required physical environment parameters and the
parameters concerning the VR and AR devices used to
output the visual results, has been prepared to control the
simulations. Moreover, several scripts have equally been
prepared in order to define and time in detail the atomic
behavior of complex elements, such as the actions to be
performed by the animated virtual humans.
5.1 Activities reenacted by the virtual humans in the
selected case studies
Five different scenario based simulations have been
prepared for the two case studies: two scenarios describing
daily life situations in Pompeii, respectively taking place in
the Thermopolium of Vetutius Placidus and in a garden of
a villa located nearby, and three excerpts from Greek
dramas to be reenacted in the virtually restituted Theatre of
Aspendos, including the Agamemnon of Aeschylus, the
Antigone of Sophocles and a choral song from the Iliad.
Although the implemented approach extends to both case
studies, we will detail in this section the scenario prepared
for the Thermopolium site in Pompeii, since it involves
several interactions between many virtual actors at the
same time thus requiring an increased complexity in the
scripts driving the simulation.
Taverns were common and characteristic social
gathering places in ancient Pompeii. They were public
establishments where hot food and drinks were served, and
where many people used to meet in order to eat, drink, play
or simply talk. As introduced beforehand, the
Thermopolium of Vetutius Placidus, located halfway along
Via dell’Abbondanza, has been chosen for the realization
of an AR real time simulation. The structure of the
considered building is simple and typical of the Roman
period, and is one of the best preserved on the
archeological site, since most of its original furniture was
found during the site's excavation. The preparation of the main
scenario has been entrusted to a professional team of
screenwriters assisted by archaeologists and specialists in
ancient Pompeian life, whose role was to supervise and
guide the process, thus ensuring that the final result would
be historically valid. Hence, the final implemented
scenario has been structured as taking place inside the
tavern main entrance room, whilst including acoustical
clues and additional background noises suggesting the
presence of several other people in the adjacent rooms.
Secondly, the virtual characters performing the simulation
were defined as acting as they normally would have in a
real tavern and as exhibiting historically consistent outfits
in order to mimic and virtually recreate specific scenes that
can still be observed on the frescos on-site and reproduce
dialogues taken from inscriptions found there.
In order to succeed in transposing the scripted scenario
into the virtual simulation, and to assist the character
modeling and animation phases, several references to real
world examples were employed. To support the creation of
the virtual dresses, for instance, and to provide tangible
physical references for their simulation, real prototype
counterparts were designed, tested and commissioned to a
tailor using the available historical data. Moreover, to assist
the motion capture sessions, and to provide the character
animators with solid visual references and clear guidelines
to prepare and time the actions performed by the virtual
humans, all the events and dialogues of the scenario have
been filmed and recorded while being reenacted by
professional actors (Fig. 18).
Fig. 18. Excerpt from the on-site filmed live action reference footage.
5.2 Animating the Virtual actors
To create the animation files to be applied to the virtual
humans (Fig. 19), a VICON Optical Motion Capture
system based on markers [18] has been used. As previously
introduced, in order to assist such a process, several video
references were employed as support for the motion
capture sessions: in both case studies, digitally recorded
video sequences featuring real actors playing the scenarios
to be virtually reproduced have been prepared with the
assistance of the historical advisors.
Then, a projector has been used to screen the real actor’s
performances in order to provide the motion captured
subject with an appropriate reference to time his
movements and synchronize his actions with the recorded
sounds and dialogues. After the completion of a post-processing phase, the captured movements corresponding
to the various parts of the scenarios to be reenacted have
been converted to separate keyframed animations ready to
be applied to our H-Anim skeleton hierarchy compliant
virtual characters. Since the final animation resulting from
the application of motion captured data generally exhibits a
realistic motion behavior but a limited flexibility, the
different recorded tracks have been connected to each other
to produce complex smooth sequences featuring no
interruption during their execution.
Fig. 19. Motion captured data loaded on top of a Virtual Human 3D model
employing an H-Anim skeleton template.
To this end, a specialized engine that can blend several
different types of motions, including real time idle motions
and key-frame animations [19], has been implemented as a
service into our in-house VR-AR framework [27] to control
both face and body animation. A large set of blending tools
has been implemented, ranging from time warping, to
splitting and fading operations. The advantage of using
such a generic approach is that once a blending tool has
been defined, it can be used again for any type of
animation, regardless of its structure. In order to be able to
apply and employ the defined blending tools, only a simple
interface is required between the data structure used for
blending, and the original animation structure. An overview
of the blending engine is provided in Fig. 20.
Fig. 20. Overview of the blending engine data structure.
To link specific actions and animation files to the virtual
humans, as well as to allow a transparent access to the
parameters of the blending engine, an XML file has been
prepared for each virtual character loaded into a given
scene. The implemented service also includes an integrated
player that plays and blends in real time all the scheduled
animations in a separate thread for all the humans in the
scene during the 3D simulation [19].
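As an illustration of one such blending tool, the sketch below cross-fades two keyframed tracks with a linear weight ramp; the actual engine [19] blends richer structures (idle motions, facial tracks) and interpolates joint rotations properly (e.g. via quaternion slerp):

```python
import numpy as np

def crossfade(track_a, track_b, fade_frames):
    """Join two clips (frames x channels arrays of pose parameters)
    with a linear fade so consecutive motion-captured sequences play
    without interruption. A linear mix of pose parameters is only
    adequate when the two poses are close."""
    w = np.linspace(0.0, 1.0, fade_frames)[:, None]
    mixed = (1.0 - w) * track_a[-fade_frames:] + w * track_b[:fade_frames]
    return np.concatenate([track_a[:-fade_frames], mixed, track_b[fade_frames:]])
```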
5.3 Integration of sound and music
To further the user’s immersion, and to enhance the
overall believability of the simulated spaces featuring
reenacted scenario-based events performed by animated 3D
virtual humans, the inclusion of sounds, voices and musical
elements constitutes a fundamental contribution. In the
frame of our case studies, different strategies have been
implemented depending on the type, precision and final
scope of the simulated sound sources. In such respect we
can therefore consider as separate cases the voices of the
virtual actors, requiring an accurate synchronization with
their movements, a proper spatial positioning and a suitable
acoustical intelligibility, the background noises produced
outside the direct focus of the simulation, and any
accompanying music serving dramatic or narrative
purposes. Since in the presented case studies the
implementation of the background noises and the
accompanying music was a mostly straightforward process,
not requiring the deployment of any particular technique,
we will focus on the first case, constituted by the rendering
of the actors' voices.
Since the restituted virtual actors shall embody people that
used to live at a specific site during a given historical
period, the choice of the language to be used for the
preparation of the sound files to be played during the final
simulation ought to be pondered. While it might be
considered an acceptable compromise to simulate the
spoken interactions between the different virtual humans
using modern spoken languages, such as for instance
modern English, in the Aspendos simulation it was
preferred to provide the simulated virtual humans with
English-captioned ancient Greek and Latin spoken lines.
Such dialogues and lines have been prepared with the
supervision of historical consultants and have been
recorded in an anechoic chamber while being performed by
real actors. Subsequent to the recording phase, the anechoic
files have undergone an auralization process based on ray
tracing techniques [20]; the geometrical 3D model of the
site, available surface material data and impulse-response
measurements, realized both on-site with a dummy head
and on a scale model to validate the simulation [34], were
then employed to create position dependent acoustical
physical simulations of the sound sources. In the real time
simulation the generated binaural sound files have been
linked to interactive area triggers, thus allowing the user to
accurately perceive and experience the simulated voices
from specific positions in the restituted virtual site as he
would have on the real one, while preserving his capability
to freely explore in real time the simulated 3D model.
Finally, the prerecorded sounds have been manipulated in a
Python scripting environment to constitute the final
scenario, which details the temporal concatenation of the
sound files, and have been used as reference to create the
facial animation and lip-sync effects applied to the 3D real
time virtual humans. An example of the python based
storytelling scripts is given in [22].
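A minimal sketch of such an interactive area trigger is given below; the class layout and the playback callback are illustrative assumptions, not the framework's actual sound service:

```python
import math

class SoundTrigger:
    """Links a binaural clip, auralized for one listening position in
    the theatre, to a spherical region of the virtual site: the clip is
    started when the user's camera enters the region."""

    def __init__(self, center, radius, clip, play_fn):
        self.center, self.radius = center, radius
        self.clip, self.play_fn = clip, play_fn
        self.active = False

    def update(self, cam_pos):
        inside = math.dist(cam_pos, self.center) < self.radius
        if inside and not self.active:
            self.play_fn(self.clip)  # start the position-specific audio
        self.active = inside
```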
6. RENDERING REAL TIME VIRTUAL CHARACTERS
WITH A ‘KEY-FILL’ ILLUMINATION MODEL
6.1 Real HDR light probe acquisition
One method that can be used to capture a greater
dynamic range in still scenes is to shoot multiple
exposures, which appropriately capture the tonal detail in
dark and bright regions in turn.
Fig. 21. Capturing process of a light probe via a mirrored chrome ball
at different exposures and the resulting HDR light probe and cube map.
These images can then be combined to create a high
dynamic range (HDR) image [21]. Applying this method,
we then use the pixel values from all available photographs
as a spherical area light to construct an accurate map of the
incident radiance in the scene, up to a scale factor, as
defined and visualized via a GPU fragment shader based on
the RGBE file format.
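A common way to perform this merge is sketched below with an illustrative hat-shaped weighting; the full recovery procedure of [21] also estimates the camera response curve, which is omitted here by assuming already-linearized images:

```python
import numpy as np

def combine_exposures(images, times):
    """Merge bracketed exposures of the mirrored ball into one HDR
    radiance map. images: list of float arrays in 0..1 (linearized);
    times: exposure times in seconds. Pixels near 0 or 1 carry little
    tonal information and are therefore down-weighted."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weighting
        num += w * img / t                 # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)
```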
6.2 Dynamic Precomputed Radiance Transfer (PRT)
Our main contribution with respect to previous PRT
methods [19][24][25][26] is the identification of the
elements of an illumination model that allow for
precomputing the radiance response of a modeled human
surface and for dynamically transforming this transfer in
the local deformed frame. Previous basic research efforts
from the areas of computational physics and chemistry in
the calculation of atomic orbitals have uncovered a wealth
of tools and identified Spherical Harmonics as the most
appropriate basis for approximating spherical functions on
the sphere. Finally, in the area of AR, we extend current
Image Based Lighting techniques, which have produced the
most accurate and seamless, yet offline, augmentations, and
substitute the costly offline Global Illumination Radiosity
part with our real time Dynamic Precomputed Radiance
Transfer model (dPRT) [19], based on the real-time
precomputed radiance transfer algorithm by [25], which
approximates the rendering equation in real time.
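For illustration, projecting the captured environment radiance onto the first nine SH basis functions (the low-frequency representation such PRT methods rely on) can be sketched as follows, assuming the environment map has been resampled into per-direction radiance samples with known solid angles:

```python
import numpy as np

def project_sh9(directions, radiance, solid_angles):
    """SH projection up to band l=2. directions: (n,3) unit vectors;
    radiance: (n,3) RGB samples; solid_angles: (n,) quadrature weights."""
    x, y, z = directions.T
    basis = np.stack([
        0.282095 * np.ones_like(x),                  # Y(0,0)
        0.488603 * y, 0.488603 * z, 0.488603 * x,    # band 1
        1.092548 * x * y, 1.092548 * y * z,          # band 2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ], axis=1)                                       # (n, 9)
    # Weighted sum over the sphere: 9 coefficients per color channel.
    return np.einsum('n,nk,nc->kc', solid_angles, basis, radiance)
```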
6.3 ‘Key-fill’ illumination model
On a cinematographic set, the "key light" is the principal
source of light, while the "fill light" refers to the source that
creates the ambient lighting: the type of even lighting one
gets in a room where the sun bounces off the walls and
covers almost the whole area. For this reason, the fill light
is usually created with indirect illumination, such as a
spotlight reflected on a white piece of cardboard. Industrial
Light&Magic™ has relied on gray and chrome spheres in
order to identify the key and fill lights in a real scene and
match them in an ad-hoc manner with the virtual lights
illuminating the virtual augmentations.
Following this cinematographic practice, we employ a
similar analogy for virtual humans in augmented reality,
where we assume that the reflected radiance at each vertex
is the result of a 'key' and a 'fill' light, responsible for
local direct and global indirect lighting respectively. Thus
the fill light (F) is approximated via the dynamic PRT
method for low-frequency GI, and the key light (K) by a
local direct light (with an example Phong-Blinn BRDF fr)
and a depth map V(l) visibility test that allows for high
frequency shadowing. The following equation (10) gives
the final outgoing radiance $L_o$:

$$L_o(x,\vec{\omega}) \;=\; \underbrace{f_r(x,\vec{\omega},\vec{l})\,\frac{I_j}{r^2}\,V(\vec{l})\,\max(\vec{n}\cdot\vec{l},\,0)}_{K:\ \text{local direct key light}} \;+\; \underbrace{\sum_k \big(T^{DS}_k\big)'\,L_{env,k}}_{F:\ \text{dPRT fill light}}$$

where the notation index of the above equation is given in
the table below (prime variables, e.g. $x'$, are defined in
local space and non-prime ones, e.g. $x$, in global space):

Notation Index
$\vec{\omega}$ : incident viewing direction (global)
$\vec{l}$ : incident light direction (global)
$x$ : position (global)
$\vec{n}$ : normal (global)
$L_o$ : outgoing radiance
$f_r$ : BRDF
$V$ : self-shadowing function
$\vec{s}$ : hemisphere direction
$I_j / r^2$ : intensity of point light $j$ / squared distance $r$ of the light source from point $x$
$T^{DS}$ : diffuse shadowed merged transfer [22]
$L_{env}$ : environment map radiance
dPRT : Dynamic Precomputed Radiance Transfer
VH : virtual human
SH : spherical harmonics
GI : global illumination
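The per-vertex evaluation of this split can be sketched as follows; in the actual system it runs in a GPU vertex shader on every deformed vertex each frame, and the monochrome 9-component SH vectors and the pre-rotated environment coefficients used here are simplifying assumptions:

```python
import numpy as np

def key_fill_radiance(pos, normal, transfer_sh, env_sh_local,
                      light_pos, light_intensity, in_shadow, albedo):
    """Outgoing radiance = key (local direct light with depth-map
    visibility) + fill (dPRT dot product with the area light).
    transfer_sh: 9-vector diffuse shadowed transfer, precomputed in the
    skin-binding pose; env_sh_local: environment SH radiance already
    rotated into the vertex's joint-local frame."""
    l = light_pos - pos
    r2 = float(l @ l)
    l = l / np.sqrt(r2)
    visibility = 0.0 if in_shadow else 1.0             # depth-map test V(l)
    key = albedo * (light_intensity / r2) \
          * max(float(normal @ l), 0.0) * visibility   # sharp shadowing
    fill = albedo * float(transfer_sh @ env_sh_local)  # low-frequency GI
    return key + fill
```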
However, such an approximation is not new: Blinn and
Phong also partitioned their reflectance models into
ambient, diffuse and specular components, and even [23]
followed a similar approach for real time static scenes.
Nonetheless, our combination of low-frequency dPRT and
high-frequency direct lighting with depth shadow mapping
has been proven unique [26], allowing for a simple, robust,
all-frequency lighting-shadowing model for real time,
skeleton-based deformable VHs. With our key-fill model
(Fig. 22), the fixed self-shadowing of the VH in the
skin-binding pose is given by dPRT, together with the response
to the area light, obtained by rotating the SH area light
according to each joint's world matrix. Hence, as shown in
Fig. 22, one point light is used to calculate a depth map and
create a sharp shadow, while the dPRT area light
(environment light) accounts for GI.
Subsequently, the sharp, high-frequency shadowing is
provided by the key light(s), according to the first product
term in the equation presented above. Another recent
approach from [24] allows for dynamic low-frequency PRT
via spherical harmonics exponentiation; however, it is not
clear whether this approach could be applied in an AR
setup for virtual character simulation. In our approach we
have utilized GPU vertex shader programs to evaluate the
above equation for each vertex of the deformed VH mesh,
executed at each frame. The following figure presents a
comparison between our 'key-fill' illumination model and
an offline rendering in AR.
We have captured the area light (only bouncing sunlight
coming from an office window) in the real scene and have
projected the HDR environment map in spherical
harmonics according to precomputed radiance transfer [25].
Then, applying our “key-fill” algorithm, the correct
response to the area light has been calculated every frame
for the deformable virtual character. In Fig. 23 we have
been interactively altering both the exposure of the real
camera as well as the one of the virtual camera: in the first
frame both real and virtual camera share the same low
exposure setting, in the middle frame the virtual camera has
higher exposure and in the final third frame both virtual
and real cameras have the same exposure setting.
Fig. 22. Our 'key-fill' illumination model for multi-segmented virtual
actors (left) as opposed to offline sky-light ray-tracing (right). Different
video cameras and tracking algorithms were used in the offline case study.
6.4 AR Exposure matching process
Our simple HDRI illumination registration algorithm is
based on the following observation: although geometrical
registration is achieved by transferring localization
information from the real camera to the virtual one, the rich
illumination controls of the real camera, such as exposure,
shutter speed and f-stop, have rarely been simulated on the
virtual camera. This shortcoming stems from the lack of a
common shared space between the real and virtual
illumination models. HDRI images provide an ideal
common illumination space. By extending the photographic
Exposure = Irradiance × Time relation to the virtual world,
with the HDRI providing the common irradiance, we can
match the exposure of the real and virtual cameras. In Fig.
23, we illustrate such an interactive exposure matching,
based on our "key-fill" algorithm.
Fig. 23. Interactive Exposure Matching with Key-Fill.
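A minimal sketch of the exposure matching step, assuming linear HDR frames and a single scalar exposure per camera (shutter speed and f-stop folded into one value):

```python
import numpy as np

def match_exposure(hdr_render, real_exposure, virtual_exposure):
    """Bring a virtual HDR frame into the real camera's tonal range by
    applying the Exposure = Irradiance * Time relation: scale the
    virtual irradiance by the exposure ratio, then clip to LDR."""
    scaled = hdr_render * (real_exposure / virtual_exposure)
    return np.clip(scaled, 0.0, 1.0)
```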
Fig. 24. Our key-fill illumination model.
In order to ameliorate the lack of high frequency sharp
shadowing projected by the virtual objects into the real
scene, we thus employ the key-fill illumination model. This
algorithm complements the previous dPRT low-frequency
lighting with high-frequency point lights in the Rendering
Equation, which are responsible for sharp shadowing
between virtual and real objects. Although various PRT
algorithms do exist for complete all-frequency lighting
[26], it has not been clear before how these can be applied
to dynamic, skeleton-based, deformable VHs. Also, as this
involves a mobile, wearable AR system, real time
performance and robustness are of crucial importance.
7. REAL TIME IMPLEMENTATION
In order to proceed with the rapid development of such
demanding high performance interactive immersive VR
and AR applications, featuring advanced virtual human
simulation technologies and advanced lighting effects, we
adopted the VHD++ real time framework (overview
depicted in Fig. 25), as described in [27], which addresses
the most common problems related to the development,
extension and integration of heterogeneous simulation
technologies under a single system roof while combining
both framework (complexity curbing) and component
(complexity hiding) based software engineering
methodologies.
Fig. 25. VHD++ MR Framework Overview.
As a result, large scale architecture and code reuse has
been achieved, which has allowed us to radically increase
the development efficiency and robustness of the final VR
and AR applications targeting the virtually restituted
archeological sites of Aspendos and Pompeii. The
Aspendos simulation features a complete VR inhabited
interactive reconstruction of the archeological site, using a
standard keyboard and mouse as input devices while
employing a stereo projector and a polarized screen to offer
the user a 3D perception of the simulated space. While
the Aspendos simulation features an extremely complex
geometry and a huge polygonal model to be animated in
real time, the Pompeii AR application exhibits a simpler
scene which, however, requires real time markerless camera
tracking [28][29] and perspective registration capabilities
on a mobile setup [2][22] to allow the user to experience
on-site the restituted augmentations of the real space
through an HMD. Finally, in order to manage the
simulation events on both the VR and AR setups, the
implemented software architecture allows the control of the
interactive scenario states through the selective application
of Python scripts at run-time. Such a mechanism therefore
offers the ability to interactively and efficiently describe
complex sequences of timed events and articulated
scenarios as simple scripts which can control all the
different elements participating in the real time simulations.
Furthermore, such approach enables the user to modify in
real time any data in use by the current simulation, such as
the behaviors of the digital actors or the position and
animation of the virtual cameras, therefore allowing the
simulation to continue running whilst modifications are
performed.
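An illustrative fragment in the spirit of such run-time scripts is shown below; every service handle and method name is hypothetical, as the actual VHD++ Python API differs:

```python
def thermopolium_scene(vhd):
    """Hypothetical scenario fragment: schedule timed, blended actions
    for the loaded virtual humans while the simulation keeps running."""
    placidus = vhd.get_human("placidus")   # the shopkeeper
    client = vhd.get_human("client_1")
    placidus.play_animation("pour_wine", fade_in=0.4)
    vhd.play_sound("amphora_pour.wav", position=placidus.position)
    vhd.wait(3.0)                          # non-blocking scenario timer
    client.play_animation("raise_cup", fade_in=0.3)
    client.say("greeting.wav")             # voice clip drives the lip-sync
```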
8. RESULTS
The basic scenegraph OpenGL renderer implemented in
our VHD++ framework is OpenSceneGraph, which has
been successfully integrated and tested in both the ERATO
and LIFEPLUS platforms.
aggressive view frustum culling allows the final scene of
Aspendos site, featuring a model constituted by more than
300.000 polygons, and requiring 1GB of total textures size,
to be rendered on a stereo projector at ~20fps, according to
the view frustum and part of the visible scene within the
virtual camera frustum. Such scene includes the full 3D
model of the Aspendos theatre, featuring radiosity
processed lightmaps applied at material level to simulate
global illumination through a multi-texturing approach, and
3D real time virtual actors illuminated through PRT
(precomputed radiance transfer) [22]. Regarding the
reconstructed theatre, offline light-mapping methods were utilized to allow viewing the theatre under the static area light created using the
Aspendos virtual light probe. A static lightmap solution was chosen because the reconstructed model exhibits a very high polygon count and requires a large amount of texture memory, making any fully dynamic real time global illumination algorithm difficult to apply. Fig. 26 illustrates a final example featuring 3D
Virtual Humans rendered via our extended precomputed
radiance transfer algorithm for illumination registration
[19] in the virtual inhabited world of the Aspendos site,
employing the same area light used to statically illuminate
the mesh of the theatre of Aspendos. Finally the virtual
actors and their clothes were modeled based on the work
described in [30]. The verses are spoken in ancient Greek
and the excerpt has been chosen from the tragedy
“Antigone” of Sophocles. To meet the hardware
requirements for the Pompeii AR simulation, a single P4
3.20 GHz Mobile Workstation with a Geforce FX5600Go
NVIDIA graphics card and an IEEE1394 Unibrain Camera
[31], for fast image acquisition in a video-see-through iglasses SVGAPro [32] monoscopic HMD setup, were used
to create the basic setup for an advanced immersive
simulation.
Fig. 26. Screenshots from the final VR simulation of the theatre of Aspendos, featuring real time illumination through PRT and lightmaps (right) vs. a simulation exhibiting standard illumination methods only (left).
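For reference, the sketch below shows the kind of per-node bounding-sphere versus frustum-plane test that scenegraph culling performs; it mirrors what OpenSceneGraph does internally rather than reproducing its code, and the plane representation is an assumption.

    def sphere_outside_frustum(center, radius, planes):
        """planes: iterable of (nx, ny, nz, d) with inward-pointing,
        unit-length normals; returns True when the node's bounding
        sphere can safely be culled."""
        for nx, ny, nz, d in planes:
            # Signed distance of the sphere centre to the plane; if the
            # whole sphere lies behind any single plane, it cannot be
            # visible and the entire subtree is skipped.
            if nx * center[0] + ny * center[1] + nz * center[2] + d < -radius:
                return True
        return False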
Our AR-VR framework applied to the final Thermopolium AR restitution, featuring 5 fully simulated virtual humans, 20 smart interactive objects, one Python script and one occluder geometry, allows the simulation to run at 20 fps for the camera tracker and 17 fps for the main MR simulation in a controlled environment, and at 13 fps and 12 fps respectively during the on-site trial. Results are
depicted in Fig. 27.
Fig. 27. AR Setup used on-site for the real time simulation (top), virtual
scene calculated by the computer (bottom left), real environment
augmented with virtual elements as seen through the HMD (bottom right).
The following figure illustrates examples of our “key-fill” illumination model compared with previous methods, in both AR and VR contexts.
Fig. 28. Our real time “key-fill” illumination model (middle and right columns) as opposed to standard Phong shading (left column).
9. FUTURE WORK AND LIMITATIONS
Through the application of the same basic methodology presented in this article, we were able to prepare all the elements required by both our case studies. Furthermore, in both cases we achieved real time interaction capabilities with the support of our VR-AR framework, which handles real time virtual and augmented reality full character simulations alike, exhibiting body, face and cloth animation.
However, limitations remain with respect to: a) the real-time animation of clothes (which is not physically correct in real time); b) real-time rendering (as dPRT does not account for sharp shadows cast, e.g., from limbs onto the trunk); and c) the presence of the user in the virtual environment, as the virtual humans do not notice real users and thus cannot interact with them in a natural way. Moreover, there is still considerable room for
improvement, specifically concerning the illumination
registration method, which applies real lighting to the virtual
scene. Currently, certain issues regarding this particular
point have been addressed with the introduction of High
Dynamic Range Image Based Lighting for virtual character
simulations [22], combined with Dynamic Precomputed
Radiance Transfer, to yield real time results featuring
Global Illumination effects. Nevertheless, mainly due to
current hardware limitations, such a method does not yet allow the simultaneous inclusion of more than three fully animated and lit virtual characters in the simulated environments without noticeable drops in the frame rate. Therefore, optimizing the rendering
speed and the overall performance of the simulations,
whilst keeping such complex illumination effects and
extending the maximum number of high fidelity virtual
humans included in the real time environments, will be one
of our priorities for further development. Secondly, we will
seek to extend the set of features exhibited by our
simulated virtual humans with the inclusion of real time
hair simulation, dynamic level-of-detail and advanced
interaction capabilities with the user, including voice
recognition and speech synthesis.
10. CONCLUSIONS
As a concluding remark we would like to stress that any
virtual restitution targeting highly complex heritage sites
inherently requires accurate choices for each phase of the
modeling, texturing, lighting and implementation
processes; special attention must always be paid when
the models have to be prepared for a real time simulation
exhibiting the inclusion of animated virtual historical
characters. Furthermore, precise and reliable source data is
critical to achieve scientifically correct and accurate
restitutions. Finally, we would like to mention as well that
performing interpretative and comparative studies is also a
necessary step when the intended restitutions are targeting
lost architectural elements of the considered heritage sites.
Therefore, modern 3D visualization technologies could be
employed to perform virtual experimental archeology tests,
thus offering new possibilities to validate, compare and
explore reconstruction theories and hypotheses within a 3D environment.
ACKNOWLEDGEMENTS
The presented work has been supported by the Swiss Federal Office for Education and Science in the frame of the EU INCO-MED ERATO project and of the EU FP5 IST LIFEPLUS project. The authors would like to thank Arjan Egges for his work on the idle motion service and animation blending engine. Moreover, the authors would also like to thank Christiane Luible and Marlène
Arevalo-Poizat for their design work on the animated
virtual humans.
REFERENCES
[1] ERATO Project, http://www.at.oersted.dtu.dk/~erato/index.htm
[2] LIFEPLUS Project, http://www.miralab.unige.ch/subpages/lifeplus/
[3] Virtual Heritage Network 2004, http://www.virtualheritage.net
[4] Y. Kim, T. Kesavadas, S. M. Paley and D. H. Sanders, “Real-time Animation of King Ashur-nasir-pal II (883-859 BC) in the Virtual Recreated Northwest Palace”, VSMM2001 Proceedings, IEEE Computer Society, Los Alamitos, pp. 128-136, 2001.
[5] V. De Leon and R. Berry, “Bringing VR to the Desktop: Are you Game?”, IEEE Multimedia, April-June, pp. 68-72, 2000.
[6] R. M. Levy, “Temple Site at Phimai: Modelling for the Scholar and the Tourist”, VSMM2001 Proceedings, IEEE Computer Society, Los Alamitos, pp. 1247-158, 2001.
[7] G. Papagiannakis, G. L’Hoste, A. Foni and N. Magnenat-Thalmann, “Real-Time Photo Realistic Simulation of Complex Heritage Edifices”, VSMM2001 Proceedings, pp. 218-227, 2001.
[8] A. Foni, G. Papagiannakis and N. Magnenat-Thalmann, “Virtual Hagia Sophia: Restitution, Visualization and Virtual Life Simulation”, presented at the UNESCO World Heritage Congress, 2002.
[9] K. Lanckoronski, “Städte Pamphyliens und Pisidiens II”, Prag, Wien, Leipzig, 1892.
[10] C. F. M. Texier, “Asie Mineure”, Paris, 1862.
[11] G. Izenour, “Roofed Theatres in Classical Antiquity”, Yale University Press, 1992.
[12] De Bernardi, “Teatri in Asia Minore III”, 1970.
[13] H. Seo and N. Magnenat-Thalmann, “An Automatic Modeling of Human Bodies from Sizing Parameters”, ACM SIGGRAPH Symposium on Interactive 3D Graphics, pp. 19-26, 234, 2003.
[14] Digimation, Bones Pro 2, available from http://www.digimation.com.
[15] P. Volino and N. Magnenat-Thalmann, “Implicit Midpoint Integration and Adaptive Damping for Efficient Cloth Simulation”, Computer Animation and Virtual Worlds, Wiley, 16(3-4), pp. 163-175, 2005.
[16] P. Volino and N. Magnenat-Thalmann, “Resolving Surface Collisions through Intersection Contour Minimization”, ACM Transactions on Graphics (SIGGRAPH 2006 Proceedings), 25(3), pp. 1154-1159, 2006.
[17] P. Monaghan, “The Masks of Pseudolus”, Didaskalia, Vol. 5, No. 1, University of Warwick, edited by H. Denard and C. W. Marshall, ISSN 1321-4853, 2001.
[18] Oxford Metrics, Real-Time Vicon8 Rt, 2000, retrieved 10-04-01; MotionAnalysis, Real-Time HiRES 3D Motion Capture System, 2000, retrieved 17-04-01 from http://www.motionanalysis.com/applications/movement/research/real3d.hml.
[19] A. Egges, G. Papagiannakis and N. Magnenat-Thalmann, “An Interactive Mixed-Reality Framework for Virtual Humans”, Cyberworlds 2006, EPFL, Switzerland, November 28-29, IEEE Computer Society, 2006.
[20] M. Lisa, J. H. Rindel and C. L. Christensen, “Predicting the Acoustics of Ancient Open-Air Theatres: The Importance of Calculation Methods and Geometrical Details”, Proceedings of the Joint Baltic-Nordic Acoustics Meeting 2004, Mariehamn, Åland, 2004.
[21] P. Debevec, “Rendering Synthetic Objects into Real Scenes:
Bridging Traditional and Image-based Graphics with Global
Illumination and High Dynamic Range Photography”, Proc. of ACM
SIGGRAPH98, ACM Press, 1998.
[22] G. Papagiannakis, S. Schertenleib, B. O’Kennedy, M. Poizat, N.
Magnenat-Thalmann, A. Stoddart and D. Thalmann, “Mixing Virtual
and Real scenes in the site of ancient Pompeii”, Journal of Computer
Animation and Virtual Worlds, pp. 11-24, vol. 16, issue 1, Wiley
Publishers, February 2005.
[23] L. Wang, W. Wang, J. Dorsey, X. Yang, B. Guo and H. Shum,
“Real-Time Rendering of Plant Leaves”, Proc. of ACM
SIGGRAPH05, 2005.
[24] Z. Ren, R. Wang, J. Snyder, K. Zhou, X. Liu, B. Sun, P. P. Sloan, H.
Bao, Q. Peng and B. Guo, “Real-time Soft Shadows in Dynamic
Scenes using Spherical Harmonic Exponentiation”, Proc. of ACM
SIGGRAPH06, 2006.
[25] P.P. Sloan, , J. Kautz, and J. Snyder, “Precomputed Radiance
Transfer for Real-Time Rendering in Dynamic, Low-Frequency
Lighting Environments”, Proc. of ACM SIGGRAPH02, pp. 527-536,
ACM Press, 2002.
[26] J. Kautz, J. Lehtinen and P.P. Sloan, “Precomputed Radiance
Transfer: Theory and Practice”, ACM SIGGRAPH 2005 Course
Notes, 2005.
[27] M. Ponder, G. Papagiannakis, T. Molet, N. Magnenat-Thalmann and
D. Thalmann, “VHD++ Development Framework: Towards
Extendible, Component Based VR/AR Simulation Engine Featuring
Advanced Virtual Character Technologies”. In proceedings of
Computer Graphics International (CGI), IEEE Computer Society Press, pp. 96-104, 2003.
[28] M. Lourakis, “Efficient 3D Camera Matchmoving Using Markerless,
Segmentation-Free Plane Tracking”. In Technical Report 324,
FORTH Inst. of Computer Science, 2003.
[29] 2D3, creator of the Boujou Software Tracker: http://www.2d3.com/
[30] N. Magnenat-Thalmann, F. Cordier, H. Seo and G. Papagiannakis,
“Modeling of Bodies and Clothes for Virtual Environments”. In
Proceedings of Cyberworlds04, pp. 201 – 208, IEEE Computer
Society, 2004.
[31] Unibrain, firewire camera, http://www.unibrain.com
[32] I-Glasses HMD, http://www.i-glassesstore.com/
[33] http://www.planetside.co.uk/terragen/
[34] N. Prodi and R. Pompoli, “Guidelines for acoustical measurements
inside historical opera houses: procedures and validation”. In
Journal of Sound and Vibration, Vol. 232-1, 2000.
Nadia Magnenat-Thalmann has pioneered
research into virtual humans over the last 20
years. She studied at the University of Geneva
and obtained several bachelor degrees
including Psychology, Biology, and Computer
Science. She received a MSc in Biochemistry
in 1972 and a PhD in Quantum Physics in 1977. From 1977 to 1989, she was a Professor at the University of Montreal in Canada. In 1989, she
founded MIRALab, an interdisciplinary
creative research laboratory at the University of Geneva. She may be
contacted at MIRALab, CUI, University of Geneva, 24, Rue General
Dufour, CH-1211, Geneve 4, SWITZERLAND.
Email: [email protected]
Alessandro E. Foni is a research assistant and
a 3D designer at MIRALab, University of
Geneva. He holds degrees in Political Sciences
and Information Systems from the University
of Geneva. In the field of Computer Science, his main interests are directed towards database design, 3D animation and modeling
techniques, Virtual Cultural Heritage and
architectural design.
Email: [email protected]
Georgios Papagiannakis is a computer scientist and senior researcher at MIRALab, University of Geneva.
He obtained his PhD in Computer Science at
the University of Geneva, his B.Eng. (Hons) in
Computer Systems Engineering, at the
University of Manchester Institute of Science
and Technology (UMIST) and his M.Sc (Hons)
in Advanced Computing, at the University of
Bristol. His research interests lie mostly in the areas of mixed reality illumination models, real-time
rendering, virtual cultural heritage and programmable graphics.
Email: [email protected]
Nedjma Cadi-Yazli is a research assistant and
a 3D designer at MIRALab, University of
Geneva. She graduated from the School of Architecture, University of Geneva, in 1998. Her main interests in the field of Computer Science are directed towards 3D modeling and
animation of virtual humans, Cultural Heritage
and architectural design.
Email: [email protected]