Sonic Perceptual Crossings: A Tic-Tac-Toe Audio Game

Andreas Floros
Dept. of Audiovisual Arts
Ionian University
49100 Corfu, Greece
+30 26610 87725
[email protected]

Nicolas-Alexander Tatlas
Dept. of Electronics
TEI of Piraeus
GR-12244 Aigaleo, Greece
+30 210 5381513
[email protected]

Stylianos Potirakis
Dept. of Electronics
TEI of Piraeus
GR-12244 Aigaleo, Greece
+30 210 5381550
[email protected]
ABSTRACT
The development of audio-only computer games imposes a number of challenges on the sound designer, as well as on the human-machine interface design approach. Modern sonification methods, including earcons and auditory icons, can be used for the effective representation of data and game-environment conditions through sound. In this work we take advantage of fundamental earcon characteristics, such as the spatialization usually employed for concurrent/parallel reproduction, in order to implement a tic-tac-toe audio game prototype. The proposed sonic design is transparently integrated with a novel user control/interaction mechanism that can be easily implemented on state-of-the-art mobile devices incorporating movement sensors (i.e. accelerometers and gyroscopes). The overall prototype design efficiency is assessed in terms of the employed sonification accuracy, while the playability achieved through the integration of the sonic design and the employed auditory user interface is assessed under real game-play conditions.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: User interfaces – auditory, theory and methods, user-centred design. H.5.5 [Sound and Music Computing]: Methodologies and techniques – auditory, binaural. I.2.1 [Applications and Expert Systems]: Games – audio, user-centred design.

General Terms
Algorithms, Design, Human Factors.

Keywords
Audio games, earcons, sonic interaction design, eye-free interaction.

1. INTRODUCTION
Computer games represent a significant means of everyday entertainment using electronic equipment. Despite their large commercial/market share and the users' established familiarity with the existing, alternative game categories, players tend to call them video-games, a fact that originates from the strong visual component employed to convey the game's virtual atmosphere and support the overall game-play. Sound represents a secondary component of a typical computer game, aiming mainly to enhance the ambient environment and to represent information that is non-essential to the evolution of the game scenario but supports user immersion.

Due to the essential involvement of the visual component, playing a video-game is nearly impossible for specific user target groups, such as visually impaired people. Moreover, a number of game applications target non-entertainment scopes, such as serious games [1] used for educational purposes in many fields. In these application environments, sound represents a prominent component for realizing the required human-machine interaction interfaces. The exclusive employment of sound as a means of realizing game interfaces has led to the development of a new type of game: the audio game [2]. Audio-games are computer game applications that employ appropriately synthesized auditory displays for developing the game-play scenario and establishing the user-computer interaction. Thus, eye-free information communication can be achieved. Clearly, sonic design is a key aspect of developing perceptually efficient auditory interfaces for audio-games. Towards this aim, many existing and on-going research studies [3] – [5] focus on investigating and assessing the fundamental guidelines that must be followed for effective sonification strategies. Sonification is an alternative means of representing data through the auditory channel and has been employed in a wide range of applications, such as data analysis and representation [6] – [7] and drawing [8].

The large variety of sonification techniques has allowed the development of multiple audio game genres. We here discriminate two general categories: a) audio games evolved from existing (video) game scenarios and b) audio game scenarios developed from scratch, targeted to be realized using auditory displays only. For example, focusing on the second category, a typical representative is iSpooks [9], an audio-based adventure game available for iOS platforms. On the other hand, the first audio game category is also of great interest, since it contains game titles derived from the adaptation of well-known (video) game scenarios. In these cases, scenario adaptation is a process that must take into account the narrative distinctiveness of an auditory environment, a task that has also been applied to audio film production [10].

Grid-based games like sudoku, chess and noughts and crosses (or tic-tac-toe) represent a special case of legacy game-titles that can be converted to audio games, since the game-play can be algorithmically described, allowing the employment of a sound design mechanism that takes into account the applicable deterministic game rules. Additionally, these rules are usually known in advance to the players, eliminating any requirement for describing the game-scenario details through audio means only. For example, a recent study [11] investigated the concept of interactive sonification of grid-based games and applied a rhythmic sonification approach to implement an alternative to sudoku as a case study.

Under these considerations, in this work we focus on the design and realization of the tic-tac-toe game using a novel combination of auditory and gestural interfaces, suitable for execution on mobile devices and platforms. For the demonstration and evaluation purposes of this work, the game prototype was developed using the Arduino [12] hardware platform; however, the application architecture can be easily ported to any available mobile operating system. In the literature, one can find a previous realization of a tic-tac-toe audio-only environment [13], but that work mainly considered playability issues, while the results obtained also covered the possible impact of playing such audio games on associated memory and concentration skills. Within the scope of this work we focus on sonic design issues; in particular, we investigate the performance of an advanced earcon scheme for optimized playability. Moreover, the complete audio game application incorporates a gestural/movement-tracking user interface that handles all user movements and allows a completely eye-free game implementation.

The rest of the paper is organized as follows: Section 2 provides an overview of the evolution of audio games from the perspective of their particular design principles. A detailed analysis of the proposed tic-tac-toe audio game implementation is presented in Section 3, focusing mainly on the sonic design process employed, as well as on the eye-free gestural user interface realized. Section 4 presents the results obtained through a sequence of subjective tests organized during the demonstration of the developed audio game application and assesses a number of parameters and issues related to playability as a result of the applied sonic design. Finally, Section 5 concludes this work and identifies potential enhancements that may be applied for optimizing the measured auditory perceptual efficiency.
2. THE AUDIO-GAMES CONCEPT: AN OVERVIEW
The fundamental path for human-machine interaction in audio games is established through sound and music signals conveyed within the acoustic channel. The complete set of signals employed constructs the auditory display. The ultimate goal of a perceptually efficient auditory display is to achieve a high degree of user immersion in the game's virtual auditory world [14], equivalent to the immersion achieved when visual means are used to construct game virtual environments. Towards this aim, sonification techniques should be employed for representing the data context that corresponds to the game world.

By default, sonification employs non-speech audio for conveying the desired information [15]. Multiple sonification techniques are available, suitable for different application fields. For example, simple telephone rings and e-mail sonic notifications represent common everyday auditory user interfaces. The concept of auditory icons [16] is widely employed in computer operating system environments. An auditory icon relies on everyday, recognizable sounds, providing an accurate, context-aware sonification mapping. On the other hand, earcons [17] are synthesized by combining "fundamental" sonic building motives created using variable sound parameter values (i.e. rhythm, pitch, timbre, etc.). This construction approach allows the representation of concurrent (or parallel) earcons, provided that specific design rules are applied [18]. Since earcons do not relate to their referent information in terms of the targeted context, user training is required in order to render them recognizable. Apart from auditory icons and earcons, there are many alternative sonification techniques (such as parameter mapping, model-based and musical sonification, etc.); however, these are out of the scope of the current work.

Earcon concurrency can be made more efficient when spatial characteristics in three dimensions (3D) are incorporated. This approach takes advantage of the human ability to localize a specific sound within a set of concurrently reproduced audio signals, with the sound sources placed at discrete, different positions in the auditory environment. Binaural technology [19] is frequently used for sound source spatialization, due to the simple reproduction setup imposed by the limited number (two) of audio channels required. In audio-game environments, binaural rendering techniques can also be employed for supporting user immersion in 3D. For example, the concept of augmented reality audio (ARA) was recently employed for developing an audio game [20] with a complex single-player interaction scenario. In [21], the same ARA scenario was extended to multiple, concurrent user participation, introducing more complex interaction paths implemented exclusively through a binaural auditory display.

Game audio is also considered an attractive means for entertainment on mobile platforms. The latter offer a number of integrated technologies (i.e. positioning systems, accelerometers and gyroscopes) for tracking human movements and creating multimodal user interfaces. These interfaces are highly desirable, since the absence of visual information representation renders the employment of the (touch) screen useless. A typical example is The Songs of North [22], a multiplayer, location-aware audio game prototype that is playable within a large physical area with distances of more than 10 kilometers.

Similar to audio-books and radio plays, adventure games offer the capability of effective narration through sound only. But the audio-game genres are not limited to these categories. They follow the genre classification applied in the case of video games, providing a number of titles characterized as action or puzzle games.

In all the above audio-game categories, sound design is obviously a critical task for achieving perceptually meaningful auditory events that are mapped to specific game-play conditions. The prospective sound designer should carefully consider the temporal characteristics of sounds, as well as the number of concurrently activated sound sources. The user interface should also be accurate and simple, not cluttered by large amounts of parallel information. Since the majority of audio games are developed by individual programmers or small development groups with little or no funding, the above guidelines are frequently difficult to adopt. Hence, most of the existing, non-prototype titles are mainly based on simplistic scenarios. This is one of the key factors explaining the increased effort to implement audio-games based on well-known, legacy grid-based games [11]. Moreover, as mentioned previously, due to the algorithmic nature of the applied rules, these games represent efficient test platforms for evaluating and re-defining the design principles that should be followed. This evaluation tends to include not only design issues, but extends to sound aesthetics as well [14], considering them an equally significant component of the overall sonic design.
For the above reasons, in this work we consider a very common grid-based game: tic-tac-toe (also known as Os and Xs). The game implementation was based on a design that considered combined sound and user-interface issues, aiming to particularize existing sonification guidelines under an integrated implementation framework. A detailed analysis of the application design and implementation is provided in the next Section.

3. IMPLEMENTATION
3.1 Game architecture

Figure 1. The tic-tac-toe audio game prototype architecture (two players, Logic Module, Grid Controller, Auditory Controller with its earcons database and binaural audio playback, and the rolling-pellet interface with 2-axis gyroscope and accelerometer, built on the Processing software platform and the Arduino physical computing interface)
The game prototype was implemented using a combination of the Processing software and Arduino hardware platforms. Processing is an open-source platform [23], a powerful software sketchbook and professional production tool used in many fields of audio and image signal processing, science/technology and the arts. Processing code can be easily exported as a Java applet, while it can also be ported to mobile platforms through the Mobile Processing environment. Arduino, on the other hand, is a well-known microcontroller-based board, suitable for a wide range of physical computing sensing applications.
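The paper does not specify how sensor readings travel from the Arduino board to the Processing sketch. A common pattern, sketched below purely as an illustrative assumption (plain Java, matching Processing's underlying language; the comma-separated message format and all names are hypothetical, not the authors' protocol), is to stream the readings over the serial link as text lines and parse them on the host side.

/**
 * Illustrative parser for a hypothetical Arduino serial message of the
 * form "gx,gy,ax,ay,az" (two gyroscope axes, three accelerometer axes).
 * This message format is our assumption; the paper does not specify it.
 */
public class SensorFrame {
    final double gx, gy;        // 2-axis gyroscope readings
    final double ax, ay, az;    // 3-axis accelerometer readings

    SensorFrame(double gx, double gy, double ax, double ay, double az) {
        this.gx = gx; this.gy = gy;
        this.ax = ax; this.ay = ay; this.az = az;
    }

    /** Parse one comma-separated line streamed from the board. */
    static SensorFrame parse(String line) {
        String[] f = line.trim().split(",");
        return new SensorFrame(
            Double.parseDouble(f[0]), Double.parseDouble(f[1]),
            Double.parseDouble(f[2]), Double.parseDouble(f[3]),
            Double.parseDouble(f[4]));
    }
}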
As mentioned previously, particular care was taken in designing a control interface that enhances the eye-free character of the game prototype. The key concept in realizing it is what the authors here name the "auditory rolling pellet". Let us assume that the game grid is a two-axis revolving flat surface, with a pellet rolling on top of it. A rotation of the grid surface forces the pellet to roll towards the direction of the specific rotation. If we also assume that the rolling velocity of the pellet is constant and independent of the rotation angle in both control axes, then the user can control the pellet's instantaneous position simply by adjusting the two rotation angles.
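As a concrete illustration of this control scheme, the following minimal sketch (Java; all names and constants are our own assumptions, not the prototype's actual code) integrates the two tilt angles into a pellet position that moves at constant speed and stops at the grid boundaries, anticipating the virtual-wall behaviour described later in this Section.

/**
 * Minimal sketch of the "auditory rolling pellet" control idea:
 * two tilt angles steer a pellet that rolls at constant speed.
 * All names and constants are illustrative assumptions.
 */
public class RollingPellet {
    static final double SPEED = 1.0;      // constant rolling speed (cells/s)
    static final double GRID_MAX = 3.0;   // 3x3 grid, coordinates in [0, 3)

    double x = 1.5, y = 1.5;              // start at the grid centre, cell (1,1)

    /** Advance the pellet by dt seconds given the two tilt angles (radians). */
    void update(double tiltX, double tiltY, double dt) {
        // The rolling speed is constant and independent of the tilt
        // magnitude; the angles only select the rolling direction.
        x += Math.signum(tiltX) * SPEED * dt;
        y += Math.signum(tiltY) * SPEED * dt;
        // Stop at the virtual walls surrounding the grid.
        x = Math.min(Math.max(x, 0.0), GRID_MAX - 1e-9);
        y = Math.min(Math.max(y, 0.0), GRID_MAX - 1e-9);
    }

    /** Grid cell indices (i, j) currently under the pellet. */
    int cellI() { return (int) y; }
    int cellJ() { return (int) x; }
}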
The overall architecture of the realized audio-game prototype is illustrated in Figure 1. The core of the complete system consists of the Grid and Auditory Controllers. The former is responsible for a) mapping player actions to allowed game-play events (i.e. filling a cell with an "X" or "O" symbol), b) keeping track of the grid status (i.e. knowledge of filled or non-filled cells) and c) triggering the Auditory Controller. The latter is responsible for audio playback, performed by selecting the appropriate earcon depending on the information provided by the Grid Controller. All earcons derived from the design process described in the following Section are organized into a multimedia database, accessed exclusively by the Auditory Controller. In order to ensure that all required earcons are reproduced, this module also employs a First-In-First-Out (FIFO) buffer storing the playback queue. Each buffer position corresponds to a time length of 1 second, a value marginally greater than the maximum duration of all earcons employed.
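A minimal sketch of such a queued playback loop is given below (Java; the identifiers and the console placeholder standing in for the actual binaural playback call are our assumptions, not the prototype's code).

import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Sketch of the Auditory Controller's playback queue: earcons are
 * queued first-in-first-out so that none is dropped, and each queue
 * slot is budgeted 1 second, marginally above the longest earcon.
 */
public class EarconQueue {
    static final long SLOT_MS = 1000;          // 1 s per buffer position
    private final Queue<String> queue = new ArrayDeque<>();

    /** Called by the Grid Controller when a game event occurs. */
    public synchronized void enqueue(String earconId) {
        queue.add(earconId);
    }

    /** Playback loop: drains the queue, one earcon per 1 s slot. */
    public void run() throws InterruptedException {
        while (true) {
            String next;
            synchronized (this) { next = queue.poll(); }
            if (next != null) {
                play(next);                    // trigger binaural playback
            }
            Thread.sleep(SLOT_MS);             // one slot per earcon
        }
    }

    private void play(String earconId) {
        System.out.println("playing earcon: " + earconId); // placeholder
    }
}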
An earcon is also attached to the auditory rolling pellet, depending on the grid cell it stands on at a specific time instant (as well as on the type of the cell: filled or non-filled). This earcon has spatial characteristics (explained in detail in Section 3.2) and is reproduced only once (at the moment the pellet enters the specific cell); moreover, it is replaced by another earcon when the pellet enters a different cell. Motion tracking of the auditory rolling pellet is also performed by the Grid Controller, taking into account the input data provided by a connected 2-axis gyroscope. Additionally, a similar approach was followed for placing an "X" or "O" symbol on an empty grid cell: the user rapidly shakes the control equipment, causing a) the auditory rolling pellet to fill the corresponding cell with the appropriate mark assigned to the specific player and b) the Auditory Controller to select the corresponding selection earcon. This shaking movement is identified by an accelerometer communicating with the rolling-pellet interface.
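The paper does not state how the shake gesture is classified; a simple, commonly used heuristic is to count high-magnitude accelerometer samples within a short window, sketched below (Java) with threshold and window values that are purely illustrative assumptions.

/**
 * Sketch of shake detection for mark placement: a shake is assumed to
 * be a burst of high-magnitude acceleration samples. The threshold and
 * window values are illustrative assumptions, not measured parameters.
 */
public class ShakeDetector {
    static final double THRESHOLD_G = 2.5; // acceleration magnitude threshold
    static final int    WINDOW      = 10;  // consecutive samples inspected
    static final int    MIN_PEAKS   = 3;   // peaks required to call it a shake

    private final double[] window = new double[WINDOW];
    private int idx = 0;

    /** Feed one accelerometer sample (in g); returns true on a shake. */
    public boolean onSample(double ax, double ay, double az) {
        window[idx] = Math.sqrt(ax * ax + ay * ay + az * az);
        idx = (idx + 1) % WINDOW;
        int peaks = 0;
        for (double m : window) {
            if (m > THRESHOLD_G) peaks++;
        }
        return peaks >= MIN_PEAKS;
    }
}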
The game prototype supports both single- and two-player modes. When set to single-player mode, the Logic Module is additionally responsible for acting as a player, following specific programmable tic-tac-toe rules. On the other hand, when two human players are active, the Logic Module is simply responsible for checking whether a winning triplet has occurred or whether the game has ended without a winner, while it also keeps track of the total score. This score is vocally announced to the players at the end of every game.
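The winning-triplet test itself is straightforward; a compact sketch is shown below (Java; representing the grid as a nine-element character array is our assumption, not the prototype's actual data structure).

/**
 * Sketch of the Logic Module's end-of-game test: scan the eight
 * possible triplets for a win, or report a draw when the grid is full.
 * Cell values: 'X', 'O' or ' ' (empty).
 */
public class TicTacToeLogic {
    // Row, column and diagonal index triplets of a 3x3 grid.
    static final int[][] LINES = {
        {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   // rows
        {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   // columns
        {0, 4, 8}, {2, 4, 6}               // diagonals
    };

    /** Returns 'X' or 'O' for a winner, 'D' for a draw, ' ' otherwise. */
    static char evaluate(char[] grid) {
        for (int[] line : LINES) {
            char c = grid[line[0]];
            if (c != ' ' && c == grid[line[1]] && c == grid[line[2]]) {
                return c;                  // winning triplet found
            }
        }
        for (char c : grid) {
            if (c == ' ') return ' ';      // still playable
        }
        return 'D';                        // full grid, no winner
    }
}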
When the auditory rolling pellet reaches the grid boundaries, it "crashes" into a virtual wall and stops rolling. A corresponding spatial roll-stop auditory icon is then reproduced, informing the user about the current navigation conditions.
Finally, it should also be noted that, for monitoring purposes, the prototype additionally offered visual output (see the screenshot shown in Figure 2). However, this visual output was used only during implementation and was not activated during the demonstration/assessment period described in Section 4.
Figure 2. The application-monitoring client

3.2 Sonic design details
During the sonic/earcon creation phase, we followed an iterative design procedure. Following the guidelines proposed in [18], we determined the grid conditions that should be mapped to specific earcons. The first obvious cases were the presence of "X" and "O" symbols in the grid cells. Assuming a grid-indexing scheme as illustrated in Figure 3, the corresponding earcons were denoted as xij and oij respectively, where i and j are the grid coordinate indices. These earcons were constructed as simple note motives with variable pitch and rhythm parameters, using the following two rules:

(a) For consecutively increasing j-index values, the fundamental pitch frequency is doubled.

(b) The rhythm of the overall note structure is proportional to the i-index value. That is, for i = 2, the constructed earcon has a rhythm value twice that measured for i = 1.

As mentioned previously, these rules are well in accordance with the guidelines provided in [18]. Moreover, as explained later in this Section, these earcons were additionally placed spatially using binaural processing. They were intended to be reproduced once, when the auditory rolling pellet was located over the corresponding (i,j) grid cell. However, it became evident during the very initial design phase that extra earcons had to be defined for representing the placement of a tic-tac-toe mark in a grid cell. Hence, the xsij and osij earcons were additionally defined, constructed by mixing the original xij and oij earcons with a very short impulsive click sound, which was contextually mapped to the successful filling of a grid cell.

Using the above set of earcons, some initial evaluative tests were performed regarding the audio-game playability. It was found that the absence of a mapping of empty cells to a specific earcon caused significant perceptual confusion to the majority of the test players, as they could not form an accurate impression of the current grid status. Hence, an empty-cell earcon was also defined, constructed as a smoothed, low-intensity and very short "tapping" sound.

The empty-cell earcon was further processed in order to produce eight spatially distributed replicas (denoted here as emptyij), following the angular setup presented in Figure 4. Spatial characteristics were introduced through binaural filtering of the original earcon for each grid cell, using the horizontal angle φ values shown in that Figure. The same procedure was also applied to each of the xij, oij, xsij and osij earcons, producing the final earcon waveforms stored in the earcons database. It should also be noted that, since the center of the (1,1) grid cell coincides with the player's virtual position, the earcons for this cell were not binaurally processed. Instead, they were produced by directly rendering the original mono earcon signals to a stereo waveform with equalized loudness.

Figure 3. Employed earcons map (each cell (i,j) carries its xij, xsij, oij, osij and emptyij earcons; boundary01 to boundary04 mark the four grid edges)

Figure 4. Sonic design grid identification and spatialization parameters (φ = 315°, 0°, 45° for cells (0,0), (0,1), (0,2); φ = 270° and 90° for cells (1,0) and (1,2); φ = 225°, 180°, 135° for cells (2,0), (2,1), (2,2); cell (1,1) coincides with the listener position)
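Taken together, rules (a) and (b) and the Figure 4 angles define a complete per-cell parameter mapping. The sketch below renders one plausible reading of that mapping (Java; the 220 Hz base pitch and the unit rhythm base are our assumptions, and the doubling interpretation of rule (b) is chosen because it matches the stated i = 1 to i = 2 example while remaining well-defined for i = 0).

/**
 * Sketch of the earcon parameter mapping described above: the pitch
 * doubles with the column index j, the rhythm scales with the row
 * index i, and each cell is assigned the binaural azimuth of Figure 4.
 * Base values are illustrative assumptions, not the authors' settings.
 */
public class EarconMap {
    static final double BASE_PITCH_HZ = 220.0; // assumed base frequency
    static final double BASE_RHYTHM   = 1.0;   // assumed base rhythm factor

    // Azimuth (degrees) per cell (i, j), as in Figure 4; the centre
    // cell (1,1) coincides with the listener and is not spatialized.
    static final int[][] PHI = {
        {315,   0,  45},
        {270,  -1,  90},   // -1 marks the non-spatialized centre cell
        {225, 180, 135}
    };

    /** Rule (a): the fundamental pitch doubles with increasing j. */
    static double pitchHz(int j) {
        return BASE_PITCH_HZ * Math.pow(2, j);
    }

    /** Rule (b), read as doubling per row: i = 2 is twice i = 1. */
    static double rhythm(int i) {
        return BASE_RHYTHM * Math.pow(2, i);
    }

    /** Binaural azimuth for cell (i, j), or -1 when none applies. */
    static int azimuthDeg(int i, int j) {
        return PHI[i][j];
    }
}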
Another significant issue addressed prior to the systematic evaluation of the overall sonic design and game-play was that the rolling pellet's movement had to be limited to the game grid dimensions. If this restriction were not applied, a constant rotation of the grid's flat surface would cause the pellet to roll outside the grid's physical dimensions. Hence, the concept of the "rolling-stop" virtual wall was introduced, and a smoothed, crashing auditory icon was created. This audio sample was again processed using binaural technology, aiming to locate it spatially towards the four grid edges (see Figure 3).

4. ASSESSMENT AND RESULTS
In order to assess the efficiency of the tic-tac-toe audio-game prototype realization, we organized a two-level subjective test. The first level mainly considered the sonic design efficiency, taking into account the particular requirements imposed by the game scenario and rules. Twenty adult subjects, none of them audio experts, participated in these tests. Specifically, the subjects were first given a demonstration of the system, including a detailed explanation of the correlation of the different earcons with the normal visual application state and an analytic description of the interaction/navigation possibilities. At the second test level, this non-interactive demonstration session was followed by a limited five-minute period during which the subjects were allowed to play several sets of tic-tac-toe.

In order to investigate our earcon design efficiency during the first test level, two additional software applications were developed and utilized. The first one aimed to measure the spatialization accuracy for each set of earcons designed. Each subject used the graphical interface shown in Figure 5, which provides buttons for playing back the sound and proceeding to the next sample, as well as nine buttons corresponding to the nine tic-tac-toe grid-cell positions. Each sample could be played back repeatedly; however, once the "Next" button was pressed, the user could not repeat the test for the previous sound. Moreover, the user was prompted to select the perceived position for each sample before proceeding to the next one. Each audible sample corresponding to a different cell state or action (i.e. "X" or "O" mark placement) was presented twice, in a fully randomized order.

Figure 5. Application interface, reception of earcon localization

The second test application was designed to examine the users' ability to reconstruct a given grid state from a specific auditory display state. This is a very significant assessment, since it defines the playability of the game in terms of the earcon design followed. During these tests, the subjects used the interface shown in Figure 6, similar to the previous one. In this case the user was requested to select a state, namely "X", "O" or "empty", for each grid position, following the scenario played back. Five test scenarios were created: scenarios one to three consist of the earcons for occupied positions ("X" or "O") only, while scenarios four and five additionally include the "empty" earcons. The first three scenarios present four, two and six occupied grid cells respectively. The scenarios are summarized in Table 1.

Figure 6. Application interface, reception of auditory scenario

Table 1. Test scenario summary

Scenario   "X" Occurrences   "O" Occurrences   "Empty" Earcons
   1              2                 2                NO
   2              1                 1                NO
   3              3                 3                NO
   4              2                 2                YES
   5              1                 1                YES
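As a small illustration of the first test application's stimulus schedule (two presentations per earcon, fully randomized, with a forced response before advancing), the following sketch builds such a playlist (Java; the earcon identifiers follow the naming scheme of Section 3.2, everything else is our assumption about how such a schedule could be generated).

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Sketch of the localization test's stimulus schedule: every earcon
 * is presented twice, in a fully randomized order.
 */
public class LocalizationTest {
    static List<String> buildPlaylist(List<String> earconIds) {
        List<String> playlist = new ArrayList<>();
        playlist.addAll(earconIds);    // first presentation
        playlist.addAll(earconIds);    // second presentation
        Collections.shuffle(playlist); // fully randomized order
        return playlist;
    }

    public static void main(String[] args) {
        List<String> ids = List.of("x00", "o01", "empty02", "xs10");
        for (String id : buildPlaylist(ids)) {
            // Play the sample, then record the subject's cell choice
            // before allowing the "Next" button to advance.
            System.out.println("play " + id);
        }
    }
}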
Figure 7 summarizes the results obtained from the first test application. The percentages shown represent the correct grid placement for the "empty", "O" and "X" groups of earcons. Although slightly different earcons are used to confirm a placement action and to announce an occupied block, these are aggregated under their respective symbol. Figure 8 illustrates the results obtained from the second test application. The red set of columns shows the percentage of audio scenarios that were accurately mapped, while the blue set shows the percentage of correct grid placements per block within each scenario.

Figure 7. Average correct placement for the earcons employed, nine positions per set

Figure 8. Average correct placement for the five test scenarios

From these test results, the following conclusions can be briefly drawn:

(a) Less than 30% of the "empty" earcons are correctly placed; however, more than 70% of the "O" and "X" cues are accurately positioned, indicating the strong association between the actual earcon design and its spatial perception.

(b) The results obtained for scenarios 1 and 2 differ minimally from those for scenarios 4 and 5, respectively; therefore, the presence of "empty" earcons does not seem to affect the subjects' mapping capability.

(c) Simple auditory display scenarios, including a small number of occupied blocks, are easily mapped, with an accuracy of up to 80%.

(d) The minimum mapping accuracy on a per-scenario basis is 45%, whereas on a per-grid-block basis the minimum is 75%. Thus, a limited number of erroneous block choices leads to otherwise largely correct scenario mappings being counted as failures.

Finally, apart from the above two fully controlled test cases, during the second level of tests the users were allowed to play multiple tic-tac-toe sessions within a limited time interval. While at the beginning this was a relatively difficult task (it mainly resulted in random user actions and symbol placements on the tic-tac-toe grid), it turned out that after a maximum of three repetitions the game-play was natural and feasible. Moreover, all participating subjects responded positively to the question regarding the ease of game control through the auditory rolling pellet mechanism, while they also verified that the proposed earcon sonic design conceptually fits the employed navigation mechanism.

5. CONCLUSIONS
In this work we demonstrated and assessed a tic-tac-toe game adaptation to an audio-only environment, suitable for any mobile platform equipped with appropriate movement-tracking accelerometer and gyroscopic sensors. Towards this aim, earcons were employed as the fundamental means of sonifying the information required to construct the necessary auditory display. Earcon design was an iterative process, leading to a robust and efficient set of spatialized earcons. In detail, although concurrent/parallel earcon representation was not required, the spatial characteristics of the sonic motives enhance the degree of user immersion. Moreover, the final sonic design also took into account the user control mechanism developed and employed, providing an integrated, multimodal interface for playing the game.

The efficiency of the audio-game prototype was assessed through a sequence of subjective tests. Initially, the human subjects were briefly interviewed regarding the tic-tac-toe game interface employed. Most pointed out that considerable effort was required to confirm a possible position selection through the relevant audio playback, which is also established by the assessment results. Moreover, according to the majority of users, considerable skill was necessary in order to visualize the game state and provide the next input choice; obviously, the complexity increases with each game step. Additionally, while the discrimination between the earcons for the "X" and "O" symbols was apparent, some subjects found it difficult to specifically identify the "X" and "O" symbols during the testing phase. Finally, almost all users acknowledged that game-play is plausible and enjoyable, although it requires focused attention.

Future goals include further testing of the application's usability, by tracking intended and actual game inputs, as well as correlating the time required for visual and audio game-play as a metric of the effort required. Moreover, testing with visually impaired subjects is expected to substantially differentiate the assessment results, given their increased abilities in visualizing the game grid and actions. Finally, the possibility of letting users define their own earcon mappings from a pre-defined, limited set can be investigated, aiming at an enhanced usability approach.

6. ACKNOWLEDGMENTS
In addition to the authors, Mr. Nicolaos Grigoriou was also involved in the sonic design realization. The authors wish to thank him for his contribution.

7. REFERENCES
[1] Stapleton, A. 2004. Serious Games: Serious Opportunities. In Proceedings of the Australian Game Developers' Conference (Melbourne, VIC).
[2] Friberg, J. and Gärdenfors, D. 2004. Audio games: new perspectives on game audio. In Proceedings of the International Conference on Advances in Computer Entertainment Technology. 148-154. DOI=10.1145/1067343.1067361.
[3] Ben-Tal, O., Berger, J., Cook, B., Daniels, M. and Scavone, G. 2002. SonART: The Sonification Application Research Toolbox. In Proceedings of the 8th International Conference on Auditory Display (Kyoto, Japan, 2002). ICAD'02.
[4] Lodha, S.K., Beahan, J., Heppe, T., Joseph, A. and Zane-Ulman, B. 1997. MUSE: A Musical Data Sonification Toolkit. In Proceedings of the 4th International Conference on Auditory Display (Palo Alto, California, November 2-5, 1997). ICAD'97.
[5] Dingler, T., Lindsay, J. and Walker, B.N. 2008. Learnability of Sound Cues for Environmental Features: Auditory Icons, Earcons, Spearcons and Speech. In Proceedings of the 14th International Conference on Auditory Display (Paris, France, June 24-28, 2008). ICAD'08.
[6] Hermann, T. and Ritter, H. 1999. Listen to your Data: Model-Based Sonification for Data Analysis. In Advances in Intelligent Computing and Multimedia Systems. 189-194.
[7] Stockman, T. 2005. Interactive sonification of spreadsheets. In Proceedings of the International Conference on Auditory Display (Limerick, Ireland, July 6-9, 2005). ICAD'05.
[8] Brown, L. and Brewster, S.A. 2003. Drawing by ear: Interpreting sonified line graphs. In Proceedings of the International Conference on Auditory Display (Boston, Massachusetts, July 6-9, 2003). ICAD'03. 152-156.
[9] Papworth, N. 2010. iSpooks: an Audio Focused Game Design. In Proceedings of the 5th AudioMostly Conference (Piteå, Sweden, September 15-17, 2010). AM'10. 80-87.
[10] Lopez, M. and Pauletto, S. 2010. The Sound Machine: A Study in Storytelling Through Sound Design. In Proceedings of the 5th AudioMostly Conference (Piteå, Sweden, September 15-17, 2010). AM'10. 63-70.
[11] Nickerson, L.V. and Hermann, T. 2008. Interactive Sonification of Grid-based Games. In Proceedings of the 3rd AudioMostly Conference (Piteå, Sweden, October 22-23, 2008). AM'08. 27-34.
[12] The Arduino physical computing interface: http://www.arduino.cc
[13] Targett, S. and Fernström, M. 2003. Audio Games: Fun for All? All for Fun? In Proceedings of the International Conference on Auditory Display (Boston, Massachusetts, July 6-9, 2003). ICAD'03. 216-219.
[14] Röber, N. and Masuch, M. 2005. Leaving the Screen: New Perspectives in Audio-only Gaming. In Proceedings of the International Conference on Auditory Display (Limerick, Ireland, July 6-9, 2005). ICAD'05. 92-98.
[15] Kramer, G. 1998. Sonification Report: Status of the Field and Research Agenda. NSF Sonification White Paper.
[16] Gaver, W. 1986. Auditory Icons: Using Sound in Computer Interfaces. Human-Computer Interaction 2, 2, 167-177.
[17] Blattner, M.M., Sumikawa, D.A. and Greenberg, R.M. 1989. Earcons and Icons: Their Structure and Common Design Principles. ACM SIGCHI Bulletin 21, 1 (July 1989), 123-124. DOI=http://dx.doi.org/10.1145/67880.1046599.
[18] McGookin, D.K. and Brewster, S.A. 2004. Empirically Derived Guidelines for the Presentation of Concurrent Earcons. In Proceedings of HCI 2004 (Leeds, UK, September 6-10, 2004).
[19] Blauert, J. 1997. Spatial Hearing (revised edition). The MIT Press, Cambridge, Massachusetts.
[20] Moustakas, N., Floros, A. and Kanellopoulos, N. 2009. Eidola: An Interactive Augmented Reality Audio-Game Prototype. In Proceedings of the Audio Engineering Society 127th Convention (New York, October 9-12, 2009). AES127. Preprint 7872.
[21] Moustakas, N., Floros, A. and Grigoriou, N. 2011. Interactive Audio Realities: An Augmented/Mixed Reality Audio Game Prototype. In Proceedings of the Audio Engineering Society 130th Convention (London, May 12-16, 2011). AES130.
[22] Ekman, I. 2007. Sound-based Gaming for Sighted Audiences – Experiences from a Mobile Multiplayer Location Aware Game. In Proceedings of the 2nd AudioMostly Conference (Ilmenau, Germany, September 27-28, 2007). AM'07. 149-153.
[23] The Processing open source platform: http://www.processing.org