Documenting a Virtual World - A Case Study in Preserving Scenes from Second Life

Mircea-Dan Antonescu, Mark Guttenbrunner, Andreas Rauber
Vienna University of Technology, Vienna, Austria
e9627289@student.tuwien.ac.at, guttenbrunner@ifs.tuwien.ac.at, rauber@ifs.tuwien.ac.at
ABSTRACT
In addition to the World Wide Web, other platforms have
emerged on the Internet that are being used by communities
to provide and exchange information. One family of such
platforms are virtual worlds, which - like the Web - offer
themselves for information exchange, e-commerce transactions and social interaction. While web data is successfully
being collected and archived, the challenges met when trying to archive aspects from virtual worlds are significant.
This paper describes an approach to archive information
from virtual worlds, presenting a case study based on Second Life. We present a sociologically based alternative to
the more technically-minded solutions aiming at preserving
the very artifacts existing within virtual worlds. Our solution is based on creating an extension to the user viewport
that will be used to record visual information, either in captured still images or movies, the same way a cinematographer would record a documentary movie. Using a combination of scripting commands within Second Life and third
party image recording tools, we can give visual coverage of
specified areas of interest within Second Life.
Keywords
Second Life, digital preservation, virtual worlds, game preservation, web archiving
1. INTRODUCTION
Beginning with the early days of the Internet, even before the World Wide Web, virtual worlds and user-generated content have been part of both the future vision
and the technological development surrounding it. Visualization through modern rendering engines has made virtual
worlds both accessible and popular. Names like “World of
Warcraft” or “Second Life” have become synonymous with
this form of entertainment. This places them along a continuum of interaction forms on the Internet, ranging from
static Web pages via social web applications and interactive
streaming portals to on-line games and dedicated virtual
worlds running within a closed environment, but potentially
cross-linking to other aspects of the Web. Given the participative nature of these environments, the amount of information created there is directly proportional to the number of participants and depends on the ultimate goal of the world.
To give a rough estimate of the amount of potential content
created in Second Life (SL), the Q1 2009 statistical figures
published on a regular basis by Linden Lab, the company
operating SL, exhibit a total of 124 million user hours spent
in SL during the first quarter of 2009, together with a peak
user number of 88,000 [5]. Both of these figures show growth
compared to the respective numbers during the first quarter
of 2008, so both the community interest and the challenges
to archival approaches are still growing.
The information we observe in virtual worlds is, by the nature of the Internet, short-lived: it often ceases to exist for such trivial reasons as users canceling accounts or moving on, or it simply fades away after being deemed out-of-fashion - a scenario very similar to the challenges met in
archiving data from the Web. Multiple proposals for archiving strategies regarding virtual worlds exist already, most of
them based on a technical understanding of protocols underlying the virtual worlds and information extraction. While
this may preserve the content, i.e. the technical infrastructure, the objects and avatars existing in the virtual worlds,
they fail to grasp another crucial aspect of virtual worlds,
namely the type and means of interaction. Unlike static
pages, virtual worlds in many aspects are more like social
games, where the interactive nature and the actual (type of)
interaction going on at a specific period of time, is as important as the actual world itself. Traditional web archiving approaches, as well as standard approaches to digital
preservation, such as migration and emulation, thus cannot
be used to conserve all aspects of virtual worlds such as Second Life. The amount of dynamic and transient information is very high, and it should not be excluded when attempting to preserve the essence of what made these worlds unique.
A method that would catalog only the topology and static
elements of a virtual world, lacking records of user activities,
is missing characteristic elements of day-to-day life in these
worlds, where the interactive aspect has a higher priority
than in the conventional Web. In the same manner that a
contemporary witness will also leave records of people and
their activities as part of their report, our information gathering process needs to snapshot users and their interactions.
This paper thus proposes a complementary approach to the
challenges of archiving virtual worlds, one that focuses on
a natural documentation technique to produce information
that is similar to modern TV and picture documentary products. We propose a solution which, in the particular case of
Second Life, allows us to gather information the same way
regular users perceive the world around them. We create
and manipulate an object within the engine of SL to have it
move through the virtual world and detect points of interest
based on user activity and we perform a visual recording of
such points of interest.
The remainder of this document is structured as follows:
Section 2 gives an overview of related work that we found
relevant to our experiment or used as a starting point for
further research. Section 3 describes how we perform our
experiment including the insights that we gathered while
encountering various caveats. Section 4 delves deeper into
the ethical implications of our work and finally, Section 5
proposes a few possible continuations and gives an outlook
on future work.
2. RELATED WORK
Following in the footsteps of an increasing number of digital
preservation initiatives focused on media other than the classic web page, there are various approaches to the preservation topics that influenced and motivated our work. Several
approaches exist to deal with user generated video content,
like the one presented in [8], based on the automated generation of metadata and tagging systems paired with video
feature extraction. With the availability of these and more
advanced types of analysis of video content, a video recording of interaction may serve as a suitable surrogate for preserving complex interactive digital content. This applies to
session filming, a viable approach to capture the interactive
nature of web pages. It has also been evaluated as a viable alternative to preserve interactive content such as video
games [4].
The “Virtual World Initiative” [3] aims at constructing a
dedicated Second Life experimentation environment for both
archivists and digital preservation students. Also in focus
are issues such as long-term preservation aspects, data architecture and portability, ethical aspects and, last but not
least, providing an in-game environment inside SL which is
encouraging to digital preservers. Our approach constitutes
in some sense a diversification and continuation of some of
the topics outlined there.
A number of projects have been started addressing the preservation of virtual worlds. One valuable source of documentation about past activity in virtual worlds is the Internet
Archive [2], which contains significant collections of video
footage based on voluntary contribution of manually gathered footage. This, of course, does not scale well and requires
much attention, but offers a higher degree of accuracy and
content filtering than an automated approach. The same
duality can be observed in standard web archiving initiatives, where we also find both the manual collection as well
as the automated harvesting approach complementing each
other, with priority being given according to an institution's
mission statement. Examples of the footage stored in the
Internet Archive include moments of cataclysmic impact in the existence of a given virtual world, often events related to its closure or shutdown.

Figure 1: Recording camera, integrated with the Second Life HUD

A proposal for automatically filming scenes in Second Life, which is similar to the approach presented in
this paper, was presented by Mohr [6]. Academic projects
have also been started in collaboration with Linden Lab,
aiming at a standardized and expandable process in virtual
world archiving. An important deliverable of the “Preserving Virtual Worlds” project [1] is the definition of a metadata
standard aimed specifically at archiving virtual worlds, allowing for the creation of a specialized ontology which would
greatly help with accurate representation of preserved information, especially with regard to later evaluation and querying.
Ethical and legal implications need to be considered in all
endeavors of collecting user-generated content. This may
seem even more important when using a kind of “filming”
approach, which is probably due to the wide-spread use and
resulting concerns that video surveillance causes in the real
world, even though that metaphor may have an entirely different function in virtual settings. Still, concerns similar
to those voiced for web archives [7] apply to other (media)
archives as well. The same problems regarding privacy, confidentiality and legality of the archival effort need to be addressed; in particular, we feel that future work is required on opt-out mechanisms for users and on the economics of digitally preserved material. There are also many spontaneous discussions relating to this topic on popular web sites,
usually sparked by controversial issues, like for example privacy concerns that archiving a virtual world would raise [9].
3. PROTOTYPE IMPLEMENTATION
In order to complement approaches focusing on data preservation, i.e. preserving the virtual world and its artifacts,
our approach aims at preserving the interactive nature of
a virtual world, capturing its rendering characteristics and
the interaction and animation going on, very much in the
spirit of [6]. To this end, we basically rely on the viewing
engine of a virtual world as the starting point, recording all
interaction as perceived by a user. Recording subsequently
can take the form of image snapshots or include full video
coverage including sound. The experiments reported below
are based on Second Life as one of the most prominent virtual worlds that is used for a range of different purposes,
including pure entertainment, education, collaboration and
electronic commerce.
All our experiments are done using the regular SL Viewer
and are thus subject to the restrictions implemented there.
Since Linden Labs made the viewer available under the GNU
GPL license, future work on this subject may benefit from
modifying the source code rather than working around known limitations as we have done. Yet, not modifying and not depending on the internal structure of the viewer leaves the
approach more generic, more stable and more easily portable
to other worlds.
Figure 2: A sensor array set-up for coverage of a Second Life sim
The original goal of the experiment was to build a camera drone inside the Second Life world, using the tools for
content creation provided by the regular SL Viewer, and to
automate the drone to traverse the world and record still
images or video footage while it does so. Technical limitations forced some changes to this general goal, as described
in further detail below, yet the basic principle still applies.
We start by creating a ’prim’ - a term for any basic object
within SL - and experimenting with scripting its behavior
via the proprietary Linden Scripting Language (LSL). Every object in Second Life can have an LSL script associated
with it. The LSL program model is based on an extensible event model, and we aim to incorporate our own control
logic in the LSL script associated with our prim. Originally,
we had intended to have the camera object move independently from the avatar of the user logged in to perform the
archival operation. However, camera control is only possible for objects attached to an avatar or objects an avatar is sitting on (e.g. vehicles moving around), and is thus not suitable for our requirement.
We thus devise a solution relying on integrating a scripted
camera object in the SL Viewer (“HUD” in the SL terminology), merging it with the viewport of the current user,
which allows the camera object to take control of the viewport and manoeuvre it around. Using this approach, we
are able to remote control our camera object in the virtual
world. While we move the camera, the avatar that the camera is attached to will remain stationary. To other participants in Second Life, the camera object will be invisible.
The two main reasons we decided on this seemingly complicated solution as opposed to simply recording an avatar
walking around with manual control are the need for remote manipulation of the camera on one hand - to provide
automated archival capabilities that scale better than any
manual method would, especially with scenarios over longer
periods of time - and the non-intrusive nature of the virtual camera. The camera will be capturing SL content the
same way a user avatar standing at the current viewport
(not avatar) location would perceive it.
It is important to note that our solution, as it is based on
scripting within the SL engine, requires scripting to be enabled in the respective game area. Since land owners in SL
have the option to disable prim scripting ability for their
areas - usually done in order to prevent abuse, irritation or
harassment by malicious users - or “griefing” (http://en.wikipedia.org/wiki/Griefing), as the Internet slang term labels these kinds of actions - this is something that needs to be determined before setting the archival targets.
Figure 1 illustrates how our system captures SL visuals. The
only difference to a regular SL viewport is the red circle
in the upper right corner, which is the placeholder object
that we scripted and anchored to the HUD. This way, the
information that will be recorded will be as close to what a
SL user would be perceiving in a first-person perspective as
possible. There will be no other visible cue of the camera operator - specifically, no avatar.
With recording capabilities in place, we have to solve the
path finding and navigation issue within the SL environment. To avoid complex coverage planning, we decided on
creating the camera object as non-physical, which allowed
us to move it through obstacles and avoid having to implement collision detection algorithms. Our first attempts
were hindered by the fact that camera control reverts to the
user avatar when script repositioning is used, thus rendering
footage obtained that way very jerky and unusable. While
there are quite a few development requests to the SL makers
to incorporate camera control and attachment to non-avatar
objects, and some third-party viewers for SL already support
this feature, we take a different approach: We turn our camera into a so-called “HUD attachment”, a special mechanic
which allows the SL Viewer to integrate scripted objects
in the visual display of the avatar. Originally intended for
game play elements like radars or life gauges, it allows us to
take control of the user viewport and manoeuvre it around
like a floating camera, without the need to move the avatar.
With virtual worlds being huge in terms of space to cover, we
want to ensure that recording of interaction is concentrated
on those areas where actual interaction is happening. We
thus need to identify areas where there is some user activity, referred to as “hotspots” in our system. Hotspot activity
detection is achieved by using the sensor functions from LSL
and having them set up to scan for avatar/user instances.
If our prim detects more than 10 avatars within a 10 meter
radius, we classify it as a hotspot and slow down movement
speed in order to record more images. When there are no
avatars within 10 meters of the camera we warp the camera
a fixed distance ahead. Our sensor data suffers from one
fundamental flaw, though: since it is incorporated in an object attached to the avatar HUD, its scan is based at the
avatar position. For future work, we propose to use an array of fixed sensors positioned in a grid across the SL area
that shall be archived. The sensors will be positioned inside
phantom objects that the SL viewer will not render, thus
being invisible to the users, and will continuously monitor
user activity around them. The sensors will then communicate their findings to the HUD attached camera, and the
camera will evaluate the results and navigate towards the
one hotspot that it computes as the most interesting based
on metrics such as avatar numbers in the proximity. Using
such a phantom sensor array requires set-up in advance and
visible information to the area visitors about the archival
project, of course. Assuming we want to cover a regular
SL ’sim’, which is a land area of 65,536 m² contained within a square with a 256 m edge, an exemplary archiving setup
could look like the one depicted in Figure 2.
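The sensor placement and hotspot-selection logic described above can be sketched outside of LSL. The following Python sketch is illustrative only: the 32 m per-sensor scan radius is an assumption (the actual LSL sensor range would be a tuning parameter), while the hotspot thresholds mirror the policy of our prototype.

```python
import math

SIM_EDGE_M = 256          # a regular SL sim is a 256 m x 256 m square
SENSOR_RADIUS_M = 32      # assumed scan radius per phantom sensor (hypothetical)
HOTSPOT_THRESHOLD = 10    # more than 10 avatars nearby classifies a hotspot

def sensor_grid(edge=SIM_EDGE_M, radius=SENSOR_RADIUS_M):
    """Place phantom sensors on a square grid dense enough that every
    point of the sim falls within some sensor's scan circle."""
    spacing = radius * math.sqrt(2)       # circle covers a spacing x spacing cell
    n = math.ceil(edge / spacing)         # sensors per row/column
    offset = edge / (2 * n)               # center sensors within their cells
    return [(offset + i * edge / n, offset + j * edge / n)
            for i in range(n) for j in range(n)]

def classify(avatar_count):
    """Mirror the camera movement policy described in the text."""
    if avatar_count > HOTSPOT_THRESHOLD:
        return "hotspot: slow down, record more images"
    if avatar_count == 0:
        return "empty: warp camera a fixed distance ahead"
    return "normal: continue at cruise speed"

def most_interesting(sensor_reports):
    """Pick the (position, avatar_count) report with the highest count."""
    return max(sensor_reports, key=lambda r: r[1])

grid = sensor_grid()
print(len(grid), "sensors for one sim")    # 36 sensors under these assumptions
print(classify(12))
print(classify(0))
reports = [((64, 64), 3), ((192, 128), 14), ((32, 224), 0)]
print(most_interesting(reports))           # → ((192, 128), 14)
```

With a 32 m radius this yields a 6 x 6 grid of 36 sensors; a different assumed range changes the grid density accordingly.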
SL does not allow screenshot or video capture via the built-in scripting language. For that purpose, we use a third-party program, in our case FRAPS (http://www.fraps.com), a utility for video and image
capture. FRAPS can be configured to take screenshots at
specific intervals. Alternatively, video footage can be produced by recording the respective viewer output on screen.
Again, combinatory approaches can be chosen, or more advanced, adaptive settings can be constructed, flexibly configuring and automatically adapting recording behavior similar
to the strategies employed in continuous web archiving.
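As an illustration of such an adaptive setting, a capture scheduler could key the screenshot interval to observed activity. The following policy sketch is purely illustrative and not part of our prototype; the thresholds and intervals are assumptions.

```python
def capture_interval_seconds(avatars_nearby):
    """Hypothetical adaptive policy: capture more densely where more
    avatars (and hence more interaction) are present. Thresholds and
    intervals are assumptions, not values from our prototype."""
    if avatars_nearby > 10:      # hotspot: near-continuous coverage
        return 1
    if avatars_nearby > 0:       # some activity: moderate coverage
        return 10
    return 60                    # empty area: sparse keyframes only

for count in (0, 3, 25):
    print(count, "avatars ->", capture_interval_seconds(count), "s interval")
```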
With the current static settings, the amount of data produced scales linearly with recording time. A couple of experimental recordings in uncompressed .avi format at a resolution of 1680x1050 resulted in storage demands of about
1-1.3GB per minute of video stream at 30FPS. Compressing
the video with the XVid codec yields a very good result,
resulting in about 60MB per minute of video footage. Using
XVid compression, one hour of video footage will amount to
3.6GB. Video capture has the advantage over still images of also capturing ambient sounds and animations.
Basing the archival strategy on screenshots instead requires
about 150-160KB of storage per JPG image at settings of
1680x1050 resolution and 32-bit color depth with no further compression. Assuming the recurring screenshot interval is set to 10 seconds, that would amount to roughly 55MB of screenshots per hour (360 images at about 155KB each). Furthermore, given the nature of the graphics captured, significant levels of compression can be achieved
as lossless image storage is probably not required for this
type of data.
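These storage estimates can be reproduced with simple arithmetic; the following back-of-the-envelope sketch uses the average figures quoted above with decimal MB/GB units.

```python
# Back-of-the-envelope storage estimates for one hour of archiving,
# using the average figures quoted in the text (decimal MB/GB).

# Video: XVid-compressed stream at ~60 MB per minute of footage.
video_gb_per_hour = 60 * 60 / 1000
print(f"XVid video: {video_gb_per_hour} GB/hour")         # 3.6 GB/hour

# Screenshots: ~155 KB average per 1680x1050 JPG, one every 10 seconds.
shots_per_hour = 3600 // 10                                # 360 images
screenshot_mb_per_hour = 155 * shots_per_hour / 1000
print(f"Screenshots: {screenshot_mb_per_hour} MB/hour")    # 55.8 MB/hour
```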
4. LEGAL ASPECTS AND ETHICS
When recording image data we need to consider both ethical and legal aspects of recording content and user-generated
content within SL. According to the SL Terms of Service (http://secondlife.com/corporate/tos.php),
Section 1.3: Content available in the Service may be provided by users of the Service, rather than by Linden Lab.
Linden Lab and other parties have rights in their respective
content, which you agree to respect.[...] You acknowledge
that: (i) by using the Service you may have access to graphics, sound effects, music, video, audio, computer programs,
animation, text and other creative output (collectively, “Content”). From a legal point of view, using a regular copyright
disclaimer stating that the copyrights are owned by their respective owners should be sufficient, given that the content
is archived exclusively for non-commercial and non-profit
utilization.
The ethical aspect turns out to be more complicated. The
Linden Lab Privacy Statement states: You may choose
to disclose personal information in our online forums, via
your Second Life profile, directly to other users in chat or
otherwise while using the Second Life service. Please be
aware that such information is public information and you
should not expect privacy or confidentiality in these settings.
This, in principle, renders all interaction and information
provided via Second Life public, which may be interpreted
as allowing others to monitor and collect information. In
spite of this, user attitudes are often prominently and vocally against the disclosure of chat logs and personal information outside of the SL world, for reasons similar to those discussed in [7] for the Web in general. There
is even a Community Standards policy by Linden Lab that
states: Remotely monitoring conversations, posting conversation logs, or sharing conversation logs without consent are
all prohibited in Second Life and on the Second Life Forums.
Even though the latter has held little legal weight in the past
when it came to usage of SL chat logs acquired without the
consent of the participating users, it still illustrates an aspect that should be examined thoroughly before deploying
archival systems for Second Life and other virtual worlds.
Our legal and ethics research has been conducted without professional legal counsel, applying the legislation currently valid in Austria. We are well aware that, should archival
scenarios like the one we propose become deployed, further
research in this area is required.
5. CONCLUSIONS AND FUTURE WORK
This paper proposed an alternative approach to preserving virtual worlds, focusing on interaction rather than the
data structures and objects themselves. Similar to current
archival recordings of virtual worlds produced manually, and
reflecting documentary approaches in the social sciences,
video footage is produced automatically by a scripted camera moving through a virtual world, focusing on areas with higher activity. Recording can be done either as continuous video footage
or with sequences of image snapshots.
We find the method viable as a prototype, though it is tied to the fundamental restriction of requiring the “run
scripts” permission within the game world. During initial
studies we found a large number of areas where scripting
was prohibited. Given the implications for the privacy and
intimacy of the involved users, and taking into account the
restrictions we encountered, we propose our method as the
method of choice for obtaining automated coverage of events
of interest, in the same way a camera crew would record
those events in the real world. The main benefit of our
method lies in the automated operation once set-up is complete. It further supports monitoring heuristics with differing archival priority settings if a proper sensor network is set up.
Both video and image capture can be enhanced by automatically combining them with a form of geotagging, i.e. by adding information about the location where the material was acquired to the image/movie data. This can be achieved either via metadata tags or via naming conventions. Since
filming/screen-shooting has to be done via third-party utilities when using the official SL viewer, tagging has to be
performed by the capturing application, at least in a basic
form that allows for post processing. A possible approach
is to devise a controller application which feeds data to the
SL viewer in the form of typed commands, the same way a
human player would add comments.
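A minimal sketch of such a naming convention might encode the region name, in-region coordinates and a UTC timestamp into each filename, so that post-processing can parse them back. The exact scheme below is hypothetical; any convention works as long as it is unambiguous and machine-parseable.

```python
import re
from datetime import datetime, timezone

def tag_filename(region, x, y, z, when=None, ext="jpg"):
    """Encode a capture location and time into a filename, e.g.
    'Help_Island__128_064_023__20090416T120000Z.jpg'.
    The convention itself is hypothetical."""
    when = when or datetime.now(timezone.utc)
    safe_region = re.sub(r"[^A-Za-z0-9]+", "_", region)   # filesystem-safe name
    stamp = when.strftime("%Y%m%dT%H%M%SZ")               # UTC timestamp
    return f"{safe_region}__{x:03d}_{y:03d}_{z:03d}__{stamp}.{ext}"

def parse_filename(name):
    """Recover region, coordinates and timestamp for later querying."""
    m = re.match(r"(.+)__(\d{3})_(\d{3})_(\d{3})__(\d{8}T\d{6}Z)\.\w+$", name)
    region, x, y, z, stamp = m.groups()
    return region, (int(x), int(y), int(z)), stamp

name = tag_filename("Help Island", 128, 64, 23,
                    datetime(2009, 4, 16, 12, 0, 0, tzinfo=timezone.utc))
print(name)                  # Help_Island__128_064_023__20090416T120000Z.jpg
print(parse_filename(name))  # ('Help_Island', (128, 64, 23), '20090416T120000Z')
```

Since the capturing application writes the files, it is the natural place to apply such a convention.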
There is also the perspective of using a specially compiled
version of the Second Life client, adjusted to archival
needs, given that the source code is available under the GNU
GPL license. Such an approach would solve many of the inconveniences we found while trying to work within the current official version, which has obviously not been targeted
to archival needs. Most notably, this would allow direct processing of SL environment data and enhancements to avatar
or vehicle control.
Another issue that needs to be considered when deploying
such archiving activities is post-processing of information
gathered, with a focus on privacy and security issues. Tasks
include user anonymization, conversation and name filtering,
to name just a few. On the other hand, post-processing can
also be used to mine structural information about the data
gathered and make it available in the form of semantic maps.
6. REFERENCES
[1] Preserving Virtual Worlds Project. Available from:
http://pvw.illinois.edu/pvw/.
[2] The Internet Archive. Available from:
http://www.archive.org/details/virtual_worlds.
[3] E. A. Fox, S. J. Lee, J. Velasco-Martin, G. Marchionini,
S. Yang, and U. Murthy. Curatorial Work and Learning
in Virtual Environments: A Virtual World Project to
Support the NDIIPP Community. In First
International Workshop on Innovation in Digital
Preservation (InDP), 2009.
[4] M. Guttenbrunner, C. Becker, A. Rauber, and
C. Kehrberg. Evaluating strategies for the preservation
of console video games. pages 115–121, London, UK,
2008.
[5] Linden Lab. The Second Life Economy - First Quarter 2009
in Detail [online]. 2009. Available from:
https://blogs.secondlife.com/community/
features/blog/2009/04/16/
the-second-life-economy--first-quarter-2009-in-detail.
[6] G. Mohr. Challenges on the horizon. Technical report,
Presentation given at Innovative Ideas Forum
(IIF2008), Canberra, Australia, 2008.
[7] A. Rauber, M. Kaiser, and B. Wachter. Ethical Issues
in Web Archive Creation and Usage - Towards a
Research Agenda. In 8th International Web Archiving
Workshop (IWAW08), 2008.
[8] C. Shah and G. Marchionini. Preserving 2008 US
Presidential Election Videos. In 7th International Web
Archiving Workshop (IWAW07), 2007.
[9] M. Wilson. Kotaku: Could Preserving Second Life
Create Big Brother?, 2007. Available from:
http://kotaku.com/gaming/virtual-worlds/
could-preserving-second-life-create-big-brother-313986.
php.