Midterm Report - Music Impro Application

The Music Impro App
Midterm Report
INF5261 - Development of mobile information systems and services
Spring 2011 - The University of Oslo : Department of Informatics
In connection with the RHYME group
Student group:
Jan Ole Skotterud
Tommy Madsen
Geirr Sethre
Kjartan Vestvik
Joakim Bording
Content
The Music Impro App
Midterm Report
Content
Introduction
Problem Space
Goals
Users
Psychographics and demographics
Personas
Review of literature
Review of existing technologies
Reactable (iPhone, iPad, multi-touch table)
Beatmaker (iPhone, iPad)
Gyrosynth (iPhone)
RjDj (iPhone, iPad, Android under development)
Aura flux (iPhone, iPad)
Seline/Seline HD (iPhone, iPad)
Bloom (iPhone, iPod touch)
Conceptual Model
Design process
Expert opinions
User Centered Design
Prototypes
Max/MSP
Technical Future Work
Conceptual Future Work
References
Introduction
“Without music, life would be a mistake” (Friedrich Nietzsche)
In alignment with the ongoing Rhyme project, our idea will contribute to the objective
in the project description that states: “Develop knowledge through design of new concepts of
Internet of Things. Things that facilitate play, communication and co-creation, between people
with and without severe disabilities, on equal terms, in different communicative situations.”
(About RHYME, 2011). The intention of the Music Impro App project is to build an application
related to these goals. This includes challenges such as facilitating co-creation of music in settings
that have a network connection, using common devices like regular phones and pads. We
believe this will be possible by using built-in technology like the accelerometer, gyroscope,
compass etc. With the help of sensor technology, we aim to turn the user's mobile device into
an instrument that is easy to play with and intuitive to use. There are various technical
challenges that lie ahead, but if we overcome these, the application will offer a variety of
musical experiences by facilitating co-creation in a meaningful way through multimodal ICT.
Problem Space
Traditional music creation requires practice and skill. To play an instrument like, for instance,
a flute, you first need to know and understand some theory behind how the instrument is
supposed to be used: place your fingers over the holes and release them in order from the bottom
up to produce a normal musical scale. Second, you need practice to master breathing and
blowing techniques and good finger control to produce a clean and pleasant sound from the
instrument. And third, practice and skill must be combined to turn these sounds into melodies
that sound like music as we know it today. Everybody who has had flute classes in primary
school knows that this does not come easily and that the first musical results do not sound nice.
The ability to master an instrument and to play music together with others is said to be a joyful
experience. Apart from the entertainment, it has also been shown that music may have positive
effects on health. Music can be used to reduce the amount of pain experienced, but it may also
provide a way of communication for children or elders with disabilities (Robertson, 2010).
Robertson (2010) describes a case where a blind child with autism finds a love of music that
helps him communicate. When asked what he thinks of his mother, he sits down by the piano
and plays Twinkle, Twinkle, Little Star. Since most traditional instruments require much practice
and skill, this may become too much of a challenge for some disabled children, and the feeling
of mastery and the social collaboration of music is lost. Inspired by the Rhyme project, we want
to use universal design in a way that makes co-creation of music possible without the need for
practice or specific skills.
When dedicated treatment tools based on information technology are designed and produced,
they may depend on dedicated hardware made especially for the system. These may be
controllers, sensors, or the main device itself. The use situation may place special demands on
the hardware that make it necessary to build dedicated solutions supporting those needs.
The Orfi prototypes from the Rhyme project had sensors, speakers and transmitters built into
pillows. The designers did not want to put limitations on the use of these pillows and therefore
wanted to make the hardware sturdy enough to withstand being thrown into walls or to support the
weight of adult users sitting on the pillows (private conversation, March 2011).
But dedicated hardware comes at a cost. To distribute the final product you depend
on a manufacturing line for the dedicated hardware, distributors who are willing to buy a stock of
your product, and places that sell and spread knowledge of your product to the end customer.
In recent years, with the introduction of smartphones on the market, the use of “apps” (small
applications that run on a mobile device) has exploded. One of the success factors for these
applications is the centralized marketplace for each OS, which gives a short and easy distribution
line from developers to end users. Finished products are suddenly available to all users
of that platform without any extra cost for the developer. We wanted to experiment with that
open platform to see how the hardware in today's smartphones can be utilized for this kind of
system. Even though the hardware may not be ideal in all cases, we believe the availability of
the eventual final product will outweigh the limitations.
Music applications made for smartphones usually depend on the touch screen interface
as the main source of interaction. The small screens require good motor precision to operate
and normal sight to distinguish the options and the appropriate action to take. We think this
approach does not coincide with universal design in that it excludes users with visual and/or
motor disabilities. The use of sensor input like acceleration and orientation may be a less
demanding interaction technique and, if used well, open up for more users.
Based on our review of current music applications for mobile devices, we would argue that they
seldom support co-creation. Music may be shared and possibly adapted and recreated, but
we have not yet found systems that support live co-creation of music. By playing the controlled
music out loud from each device, co-creation is often possible, but it is much more dependent
on the skill of the users, and there may be situations where it is almost impossible to keep
the loops in sync and get everybody to play in the same key. A system designed for co-creation is
needed.
Goals
The Rhyme project, which supports us, has stated two main goals on its homepage:
1. Radically improve the health and well-being of people with severe disabilities, through
a new treatment paradigm based upon collaborative, tangible, net based, “smart
objects”, with multimedia capabilities
2. Contribute to the emerging body of knowledge in interface design, specifically within
the area of networked social co-creation, based on multimodal interaction through the
Internet of Things.
(About RHYME, 2011)
These goals coincide with the underlying goals of our related project, and we hope to contribute
to some of them. Based on the universal design principles, we want our system to be usable
by the general public, but also to include users with disabilities and special limitations. To narrow
the scope we have decided to use open application frameworks for smartphones, currently the
Google Android platform. We also focus on live co-creation of music through gesture interaction,
where we use the movement of the device itself as the main form of interaction.
Users
While defining specific users is valuable for scoping what is to be designed, it can also
force unwanted limitations on a project before you know your target group well enough.
Since music is such a universal language of communication, we seek to avoid limitations,
not just to reach a broad audience, but also to make a good user experience possible for
user groups such as the elderly and the disabled. Our approach to the users will thus be
based on the principles of the discipline known as universal design. “Universal design
means simply designing all products, buildings and exterior spaces to be usable by all people to
the greatest extent possible” (Mace et al. 1991:2).
Based on this, we need to make sure our application will be usable for people with disabilities as
well as people without. The target group is basically only limited to people who can
benefit from communicating, experimenting and playing with music, and wish to do so. But when
it comes to user testing on potential users, we aim to design for the lowest common denominator
in order to follow the principles of universal design. In practice this means that testing should be
done on users with special needs, such as elderly people, kids and, if possible, people with
disabilities. Because user testing on people with disabilities raises ethical as well as practical
challenges, we will in the early stages of development target kids and older people.
Psychographics and demographics
Being so early in the process, it might be too premature to define a specific disability, age,
demography or education, so called demographics. Although “demographics aren’t the only
way you can look at your users” (Garrett 2011:44). At this stage we will rather investigate the
psychographic profiling (Garrett 2011:44) considering attitudes and perceptions amongst kids
and elderly people. By this we seek ensure if we are working in the right direction according to
our users cognitive conception of the model. This does not mean that the demographics will
be overlooked, but will be a subject of matter later in the process, after initial prototyping and
proper user research has been accomplished.
The scope of our users from a psychographic profiling view is basically users who:
● like/enjoy or have an interest in music
● are willing to experiment and perform
● are willing to perform within a group (co-creation)
● are able to hold or move the device
● are able to hear sound, or at least feel it
● have access to a mobile device with a compatible operating system
● have access to a wi-fi network
One important issue that will need to be addressed is that the co-creative aspect itself means
that our users can be expert, casual, frequent and novice at the same time (e.g. a father
using the application with an autistic child). If we are to follow basic principles of universal
design, the user testing will need additional focus on issues like this, as well as on
specific impairments in regards to requirements and usability issues.
Underlying our approach to users is a notion of equality. For us, the goal will
therefore be to design based on common trends, using common devices etc., for the sole
purpose of avoiding designing for an idea of disability, which could be a reason to reject the
design (Plos & Buisine 2006).
Personas
One approach to working around user-related limitations will be to create personas. This can
in fact be a valuable asset when designing for a user group where user testing will be
impossible or limited. “Personas are rich descriptions of typical users of the product under
development that the designers can focus on and design the product for” (Preece et al. 2007).
This opens up the possibility of using existing data from similar projects, such as Rhyme, to create
fictitious persons with a set of skills, attitudes, tasks and environments, and thus makes it
possible to build on empirical knowledge about users who are not within our reach.
Review of literature
A great deal of research has been done on music applications and how the mobile phone can
be used by amateurs to create music. In a project called ’Mobile Gesture Music Instrument’
(MoGMI), the authors explore how the accelerometer axes can be used as a new dimension of music
generation. They focus on using the phone as an easy-to-use musical instrument that amateurs can
use to make music. In the same way that the MoGMI project wants to explore an interface that can be
learned in a very short time, we also want the users in our project to be able to understand how
it works just by playing with it for a couple of minutes. In their project they test two different
applications. One lets the users control the pitch and amplitude of the instrument,
and the other lets the users play pre-recorded samples depending on how they move their phone.
We want to explore both of these aspects in our project. In their user study they found that most
users enjoyed the first application best because it gave them the most feedback and control over
what they did (Dekel et al., 2008). Another project that has explored the use of mobile devices
for music creation is MoPhO
(Mobile Phone Orchestra of CCRMA). Their research centres on the term generic design,
meaning a platform that is not designed for a specific performance but rather designs that are flexible
and variable in use: platforms that exist simultaneously with multiple and repetitive tasks and
are universal to the extent that they allow a range of artistic possibilities. One of the challenges
they mention for the generic use of mobile phones as musical instruments is flexible local and
remote networking for performances (Essl et al., 2008). In our project we want to explore the
possibility of several users connecting their mobile phones through a wireless network for
generic music synchronisation with the use of accelerometer and gyroscope sensors. We know
that this will be our biggest challenge, because of the lack of existing material to work with.
Mobile music technology has been a growing research field within the last decade, and because
of the increasing popularity of context-aware systems and social networking applications, the
use of mobile phones as a music platform is growing. Embodied music cognition is a term that
captures the idea that bodily movement is an essential part of our understanding and
experience of music. In his article on ‘Music and Movement in Mobile Music Technology’,
Alexander Refsum Jensenius raises some challenges related to music and movement in
mobile technology. He discusses the difference between mobile (large external space) and
immobile (small external space) and how this influences our understanding of action-sound
relationships. What he means by this is that we mentally simulate how a sound was produced
while perceiving it. This relationship between action and sound has emerged as a
very important research topic in HCI and in the exploration of new digital music instruments
(Jensenius, 2008). This is an important aspect of our own research project, where we have
to develop an application that takes into consideration both the physical space (mobile, large
external space) of the musical performance and the response to the action performed by the
user, and thus the sound generated by the action.
Review of existing technologies
In the past few years the number of music applications available for Android and iPhone has
increased enormously. The development of new technologies in mobile services is opening up
new possibilities for exploratory music making. Here we give a short review of some
of the most exciting music applications out there today.
Reactable (iPhone, iPad, multi-touch table)
Reactable is a new musical instrument that allows you to see the music while playing with it.
Its unique combination of sound generation, manipulation, visuals and tangible interaction
creates new expressive possibilities beyond traditional instruments for those who are working
with new technologies. Reactable is the result of many years of research within the field of
Human-Computer Interaction, and because the software and hardware components are designed
together it differs from all the other existing technologies out there. The Reactable interface is
intuitive, tangible (physical blocks are moved around the multi-touch table), visual and
collaborative. This multi-touch table allows several people to create music together using the
same interface (Reactable 2011). The way Reactable opens up possibilities for starting loops
and adding different sound-manipulating modules when collaborating makes it an interesting
source of inspiration for our own project.
Beatmaker (iPhone, iPad)
While Reactable is an interactive instrument that could be used in a live setting, Beatmaker is
more of an advanced home studio application. The application allows you to create beats and
melodies using a range of different instruments and audio effects. Beatmaker is structured
much like similar recording applications such as Logic and Cubase for stationary PCs, but is
easier to use for amateur musicians. Beatmaker allows the user to quickly record a beat using
different samples, and easily record a new beat or a synth on top of that. It differs from Reactable
because of the user's ability to edit the recordings and the way interaction is conducted
(http://www.intua.net/products/beatmaker2).
Gyrosynth (iPhone)
Gyrosynth is a music application that uses the gyroscope on the iPhone 4. The gyroscope is a
motion sensor built into the phone. It detects the rotation of the phone and opens up new
interactive possibilities both for the development of music and game applications on your
mobile device. Gyrosynth uses this sensor to allow the user to control the frequency and
modulation of the chosen synthesiser. One of the advantages of Gyrosynth is the immediate
feedback to the user, but the application also has several limitations:
● Few available synthesisers
● Doesn't support co-creation
● Abstract sounds
● No synchronisation between phones
● Doesn't give the user the feeling of mastering and creating actual music
But because the application uses the motion sensors built into the phone, it is interesting for our
project. We also want to build an application that uses the movement of the phone, but add a
new dimension by opening up for co-creation between several people.
RjDj (iPhone, iPad, Android under development)
RjDj is an innovative application that introduces the
concept of reactive music. It reacts to the sounds around
you and creates new interesting sounds based on this.
The app takes input from the microphone and analyzes
and processes these sounds through different filters. It lets
users dynamically create sounds and music from their
environment. The sounds can be described as constantly
evolving, atmospheric and rhythmic patterns.
When you use RjDj, you listen to what is called a “scene”,
which is a collection of filters, processing and even
sounds, that shapes the sound coming into the
microphone. There are several different scenes included in
the application, and more can be downloaded from the net. You can even create your own
scenes through a desktop program called “RJC1000”.
This application relies heavily on voice/sound input, so it is easy to use for everyone. Several
people can even shape the sound together, but it can be complicated for everyone to listen
at the same time, as the app relies on the use of headphones. Another limitation of RjDj is that
it is not able to generate other musical sounds than the ones you make yourself, as it has no
synthesizer built in.
Aura flux (iPhone, iPad)
Aura Flux is an ambient music creating application
that lets the user dynamically create looping sounds
by touching the screen. The user can draw lines
and figures on the screen to shape the music.
Sound is produced by different kinds of “nodes” that
can be connected together. Nodes can either
generate sound (generator node) or react to pulses
(reactor node) coming from other nodes, and the
nodes can be moved around the screen and be
interconnected in many different ways. There are a
total of 48 individual sounds to choose from, and the overall mood of a scene can be changed.
Each node can be edited, and there are lots of parameters to edit.
This application can be really inspiring to use: it is always easy to come up with something
new, and the sounds, as well as the visual feedback, are nice. However, there is no support for
co-creation through connecting several devices and using them at once.
Seline/Seline HD (iPhone, iPad)
Seline is a musical instrument application for the iPhone and the iPad. It basically lets you play
an instrument sound (like flute, cello, violin, synthesizer, etc.) via the touch interface on the
device's screen. The interface is structured into different squares representing the notes, and
the user can choose a scale to play over. In this way, the instrument is easier to play, as it
becomes impossible to play “false” or “wrong” notes.
This application is used by “The iPad Orchestra”,
which features four musicians each playing one classical instrument on their iPad running
Seline HD.
Bloom (iPhone, iPod touch)
Bloom is created in collaboration with ambient music pioneer Brian
Eno. It features a simple interface that lets users tap the screen to
create notes. The notes play back in a looping manner, showing
growing and fading dots on the screen as they play. Notes are
organised into scales, which can be selected from several
predefined ones. The pitch of the notes is laid out on the screen, going
from deep notes on one side to higher notes towards the other side.
There are a lot of ambient effects on the notes you play, making it
sound really pleasing and calm. The app also supports sensors; you
can shake the phone to erase the notes.
Conceptual Model
In order to describe what the system is going to be like for the user, we will here try to
conceptualize the design space. One approach to this is to perform an abstraction that gives a
general idea of what the users can do with our application, and which concepts are involved in
understanding how to interact with it (Preece et al. 2007).
To understand what we intend to design, one must think of each device as a node in a network.
The idea of co-creation demands at least two nodes connected at the same time.
All nodes are synced, and the system configuration does not allow a node to make output in a
different key than the rest of the nodes. Through the application's GUI a user should be able to
choose an instrument. One node in the group will play the role of a master node. The
master node will typically be the rhythm section, with options related to musical categories, e.g.
funky drum loop, jazzy beat etc. (technical issues regarding output are further discussed under
prototyping).
We’ll use the general perception of playing in a band as a metaphor to describe interactivity
within the group, since we in many ways seek to recreate the co-creativity of a band
performance. One way to convey how the conceptual model is to be understood is to picture a
scenario:
Just like in a band, a user has the option to be the rhythm section (controlling the master
node), or to control a guitar, piano or flute etc. The part you’d like to play in the band can be
chosen through the GUI.
Let’s say one user initiates a drum loop on the master node by choosing drums and then
pressing a chosen loop icon on the device, for example a funky rhythm. Then we have a
loop with a funky rhythm going. The same person then initiates a second loop, a bass ostinato
chosen from the same funk category. A second user, holding a regular node and having chosen
the guitar (e.g. with a funky sound), can then start to improvise upon this by moving the phone
much like a real guitar player would do with his strumming hand (his right hand, unless he is
left-handed), producing a funky guitar sound that, synced with the rhythm, is in perfect pitch with
the rest of the groove. Then a third person, maybe choosing a single-tone piano output, might want
to move his phone around and create improvised sound upon the groove produced by the rest
of the “band”. The piano would run up and down a pre-defined scale (set by the system), e.g.
playing the pentatonic scale upwards or downwards, depending on the movements of the
person generating the output. The pentatonic scale playing will be matched to the basic chord
progression running in the background, so no matter how you move the phone, the scale will be
playing in harmony with the rhythm section and the guitar.
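To make the scale mapping in this scenario a bit more concrete, the following is a minimal Java sketch of how a node could map a single sensor axis onto a predefined pentatonic scale. The class and method names (PentatonicMapper, toMidiNote) and the value ranges are our own illustration, not part of the current prototype.

```java
// Hypothetical sketch: map a sensor reading onto a major pentatonic
// scale so that any movement stays "in key" with the rest of the band.
public class PentatonicMapper {

    // Intervals of the major pentatonic scale, in semitones from the root.
    private static final int[] PENTATONIC = {0, 2, 4, 7, 9};

    private final int rootMidiNote; // e.g. 60 = middle C

    public PentatonicMapper(int rootMidiNote) {
        this.rootMidiNote = rootMidiNote;
    }

    /**
     * Maps a sensor value in [minValue, maxValue] to a MIDI note on the
     * pentatonic scale, spanning the given number of octaves.
     */
    public int toMidiNote(float value, float minValue, float maxValue, int octaves) {
        float clamped = Math.max(minValue, Math.min(maxValue, value));
        float normalized = (clamped - minValue) / (maxValue - minValue); // 0..1
        int steps = Math.round(normalized * (PENTATONIC.length * octaves - 1));
        int octave = steps / PENTATONIC.length;
        int degree = steps % PENTATONIC.length;
        return rootMidiNote + octave * 12 + PENTATONIC[degree];
    }
}
```

For example, `new PentatonicMapper(60).toMidiNote(accelX, -10f, 10f, 2)` would turn an accelerometer X-value into one of ten notes of a C major pentatonic scale over two octaves.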
The conceptual model as described above is just to give a general idea of the concept behind
our application, and what could be done with it by potential users. User testing involving
prototypes will be required to investigate further if such a scenario will be realistic to achieve
and what kind of possibilities and limitations we need to consider to improve our prototype.
Design process
Based on an initial vision, this project started with loose brainstorming sessions and technical
reviews of what technologies and applications existed. Several of our group members
had experience in playing music, but we had little experience in programming it. Instead of
starting to work with concepts and paper mock-ups, we felt it necessary to begin by researching
which technologies existed and how music creation could most easily be done on mobile
platforms.
Expert opinions
This first phase of the project has been mostly technically oriented. We have made technology
reviews and developed a prototype framework that can be used to test ideas on users directly.
We have talked with and gotten in contact with a lot of people in the field with expertise in music
programming and mobile development: teachers and professors, members
of the Rhyme project team, developers of relevant distributed applications, as well as fellow
master students with the expertise we needed. Without all the great feedback and help, the
project would have gotten off to a slow start. We did not meet anyone who had done music
programming on the Android platform itself, but some pointed us to technologies like Max/
MSP and Processing for programming on traditional computers. We were also recommended
to use Arduino for early-stage hardware prototyping if sensor data proved difficult
to get from the Android platform. We also got help in the form of example applications
that showed us how we could get the different technologies to talk to each other. As soon as we
had some understanding of the possibilities and limitations of the technology and a working
framework for prototyping, we could start working on concepts, scenarios and interaction
techniques and do user testing.
User Centered Design
We plan to use a user-centred perspective in the further development of our prototypes. Since
we are dealing with sophisticated interaction methods without any clear design reference, we
have found it beneficial to make high-fidelity prototypes as working Android applications instead
of starting with low-fidelity paper prototypes (Kangas and Kinnunen, 2005). By using more
realistic prototypes we hope to gain more from user testing and get a clearer discussion around
concepts and ideas, which can then be made more tangible and concrete than would otherwise
be possible.
Prototypes
In our initial meetings we talked about creating a prototype with Arduino as a proof of
concept, but when we found out how easy it is to get simple results with Android, we skipped
Arduino and went straight to Android programming. The concept we had in mind required
a phone app where the user gets the option of choosing an instrument to play. The app
then needed to register sensor data and send it over a WiFi network to a computer that
produces the actual music. We decided on using Max/MSP as the basis for the music programming.
Our first prototype was a quickly made mock-up built with Google’s App Inventor. The
application didn’t do much except retrieve data from the accelerometer sensor, the orientation
sensor and the magnetic compass. This system did not support manipulation of sound in any
way, or sending sensor data over WiFi. And since App Inventor doesn’t give you the code it
generates, we couldn’t reuse anything from this prototype in later prototypes, so this was
a dead end.
Here is a screenshot from the App Inventor interface and from the generated app.
To get started with the second prototype, we got a piece of code from Matt Benatan, the creator
of the app “Max/MSP Control”, which is publicly available (http://www.appbrain.com/app/max-mspcontrol/com.maxmspcontrol). The code we got from him showed us how to catch sensor data
and send it over the network to Max/MSP. We picked the bare essentials out of this code and
modified it to fit our needs. We ended up with a small app that catches and displays sensor
data from the accelerometer sensor, the orientation sensor and the magnetic compass. You
can also choose an instrument from a list and choose whether you want to play music by manipulating
accelerometer data or orientation data while moving the phone around. The app sends sensor data
over WiFi to a listening computer that is running a Max/MSP patch, and the chosen instrument
decides which port the sensor data will be sent on. The current Max patch doesn't do
anything besides displaying the data it receives.
Screenshot from the first iteration of the second prototype
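To illustrate the general idea of catching sensor data and forwarding it to the computer, here is a simplified Android sketch. It is not the actual code we received or use; the class name, the plain-text message format and the use of UDP datagrams are assumptions made for illustration only.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Simplified, hypothetical sketch: forward accelerometer readings as a
// plain-text UDP datagram to the computer running the Max/MSP patch.
// The chosen instrument decides which port the data is sent on.
public class SensorSender implements SensorEventListener {

    private final InetAddress maxHost;  // IP of the computer running Max/MSP
    private final int instrumentPort;   // port selected by the chosen instrument
    private final DatagramSocket socket;

    public SensorSender(String hostIp, int instrumentPort) throws IOException {
        this.maxHost = InetAddress.getByName(hostIp);
        this.instrumentPort = instrumentPort;
        this.socket = new DatagramSocket();
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        }
        // "x y z" as plain text; the Max patch parses it on arrival.
        String message = event.values[0] + " " + event.values[1] + " " + event.values[2];
        byte[] data = message.getBytes();
        try {
            // In a real app the send should happen on a background thread.
            socket.send(new DatagramPacket(data, data.length, maxHost, instrumentPort));
        } catch (IOException e) {
            // Dropped packets are acceptable; new readings arrive continuously.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not used in this sketch.
    }
}
```

An instance of such a class would be registered as a listener with the Android SensorManager; which port corresponds to which instrument would be decided by the selection made in the GUI.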
In the next iteration of the application we made some changes under the hood, in the GUI and
in the settings part. We added intents in the application so we could make it more user
friendly and keep it from eating up too much of the phone's memory. We also implemented
a little algorithm that stops the phone from updating the sensor data unless it changes by a
given threshold; we did this to keep it from sending too much data over the network, so the
phone's CPU won't need to work as hard. In the GUI we made more distinct element groups,
so it is easier to see which elements belong together. The IP address, which could
be a little difficult to enter with the four sliders, can now be set automatically, but
this requires Internet access. The app queries a server for the last registered IP address and
uses this one to broadcast sensor data. To distribute those IP addresses we wrote a small Java
application that can be run from the computer running Max/MSP; it sends the IP address to a
server where it is stored and ready to be retrieved by the phones.
Screenshots from the second iteration of the second prototype
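The threshold idea mentioned above can be summarised in a few lines of Java: a new sensor reading is only forwarded when it differs sufficiently from the last value that was sent. The class name and the example threshold below are illustrative, not the exact code or values used in the prototype.

```java
// Illustrative sketch of the threshold filter: only forward a reading
// when it differs enough from the last value that was actually sent.
public class ThresholdFilter {

    private final float threshold;      // minimum change worth sending
    private float lastSent = Float.NaN; // NaN means nothing sent yet

    public ThresholdFilter(float threshold) {
        this.threshold = threshold;
    }

    /** Returns true if this value should be sent over the network. */
    public boolean shouldSend(float value) {
        if (Float.isNaN(lastSent) || Math.abs(value - lastSent) >= threshold) {
            lastSent = value;
            return true;
        }
        return false; // change too small: save network traffic and CPU
    }
}
```

With, say, a threshold of 0.5 per accelerometer axis, small jitter while the phone lies still would generate no network traffic at all.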
The Max/MSP patch that we mentioned earlier is still in an early alpha phase and doesn't have
much functionality. It can receive data on ten different ports, each one having the role of one
instrument. This allows up to ten people to play together, though we have only tested with two
players at this time. So far only the piano can be played, using the accelerometer X-axis value,
and it doesn't sound that great yet.
In the next iteration of the app we implemented a lot more internal magic and better looks. We
implemented several running threads in the application, each handling a small area of
responsibility. For example, we now have one thread handling phone synchronization events, which
allows us to make the phones play in sync. And there are dedicated threads for sending and fetching
IP addresses and for network communication. To detect shake events we implemented an
algorithm which checks for shaking of the phone every 100 milliseconds, and this enables us to
give the users more ways to play music with our app.
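As a rough illustration of how such shake detection can work, the sketch below samples the accelerometer about every 100 milliseconds and reports a shake when the magnitude of the acceleration changes by more than a threshold between two samples. The constants and names are assumptions for illustration; the algorithm in our prototype is not necessarily identical.

```java
// Rough sketch of shake detection: compare accelerometer magnitude
// roughly every 100 ms and flag a shake on a sudden large change.
public class ShakeDetector {

    private static final long SAMPLE_INTERVAL_MS = 100;
    private static final float SHAKE_THRESHOLD = 8.0f; // change in m/s^2

    private long lastSampleTime = 0;
    private float lastMagnitude = 0;

    /** Feed every accelerometer reading; returns true when a shake is detected. */
    public boolean onReading(long timestampMs, float x, float y, float z) {
        if (timestampMs - lastSampleTime < SAMPLE_INTERVAL_MS) {
            return false; // wait until the next 100 ms window
        }
        float magnitude = (float) Math.sqrt(x * x + y * y + z * z);
        boolean shake = lastSampleTime != 0
                && Math.abs(magnitude - lastMagnitude) > SHAKE_THRESHOLD;
        lastSampleTime = timestampMs;
        lastMagnitude = magnitude;
        return shake;
    }
}
```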
The user interface also got a completely new look and feel. We added six icons to represent
the instruments. When an instrument is chosen, the background colour changes accordingly,
and the chosen icon wiggles on the screen to give running feedback on the chosen option and
indicate what the user should do with the phone.
Screenshots from the third iteration of the prototype, with the master node prototype to the right
In addition to this application we developed an application that can be used on bigger terminals
such as the Galaxy Tab. This application can send synchronisation signals to all the phones using
the other app and make them play synchronized. It can also be used to select which loops are
playing, to start and stop the loops, and to adjust the tempo of the loops and the send rate of the
phones. This application also has a lot of colours to stimulate the user.
This application has a lot of potential; it could be used to control much more, for example
effects on the loops or on the players' instruments. It might control the volume of the different
instruments and loops, and other things that we haven't thought of yet.
Max/MSP
At this point the Max/MSP patch has also become quite complex. It can now play four different
loops and six different instruments. The way the different instruments are played is quite simple
at the moment: they only play notes on a scale, but we intend to make it able to play chords
and loops, and perhaps more sophisticated variations over the notes in each controlled chord
to make it feel more like improvisation. The patch also synchronizes the input signals from the
phones with the playing loop, so it basically eliminates the need for a synchronization function on
the phones.
Screenshots from an early Max/MSP prototype
The phones output sensor data at a given rate, often quite fast, and this rate is not
influenced by how the user moves the phone when playing an instrument. To make the triggering
of notes in Max/MSP more musical and not too chaotic, we made Max/MSP trigger the notes at
quarter-note intervals of the chosen beat. This was just a way of simplifying the synchronization
of notes to the beat; there are possibly more interesting ways of doing this.
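Although this quantization lives inside the Max/MSP patch, the underlying idea can be expressed as a short Java sketch for readability: a note triggered between beats is held and only sounds at the next quarter-note boundary of the running loop. The class is purely illustrative.

```java
// Sketch of the quantization idea (expressed in Java for readability;
// the actual implementation is a Max/MSP patch): a note triggered
// between beats only sounds on the next quarter-note boundary.
public class QuarterNoteQuantizer {

    private final long quarterNoteMs;

    public QuarterNoteQuantizer(double bpm) {
        this.quarterNoteMs = Math.round(60000.0 / bpm); // 500 ms at 120 bpm
    }

    /**
     * Given the time a note was triggered and the time the loop started,
     * returns the time at which the note should actually be played.
     */
    public long nextQuarterNote(long triggerTimeMs, long loopStartMs) {
        long elapsed = triggerTimeMs - loopStartMs;
        long beatsPassed = elapsed / quarterNoteMs;
        return loopStartMs + (beatsPassed + 1) * quarterNoteMs;
    }
}
```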
For sound generation, Max/MSP currently just uses the computer's built-in MIDI device, which contains
a basic collection of musical instrument sounds. The quality of these sounds is unfortunately
not very good, but as long as we are at the prototyping stage it will suffice. At a later stage,
improving these sounds is just a matter of connecting a better sound module or MIDI synthesizer to
a MIDI port on the computer.
Technical Future Work
There are many ways we can improve the sound generation and the way of controlling the
instruments. We will hopefully gain some new perspectives on this after user testing, but for the
time being we already have some ideas. It may be nice to be able to control not only the pitch
of the notes played, but also the rate of notes played, the volume of the instrument, and
possibly effects, like a filter on the drum loop. On the control app we will mainly explore
what kinds of effects we can control, and add support for these. The possibility of triggering
drum fills/breaks from the pad application may make the underlying beat more dynamic and
interesting. We are also considering adding volume adjustment capabilities and a status view
showing how many players there are and maybe what instruments they are playing.
We also hope to add the ability to have the sound of a particular instrument come out of the
speaker of the phone controlling it. For example, a person playing the guitar from his phone
would hear the guitar sound coming directly from his phone's speaker. In this way, we imagine
it will be more intuitive to map your own actions to the sound generated.
On the phone app there is a lot of optimization to do. About one quarter of the code
is scheduled to be removed because of capabilities in Max/MSP that make it
unnecessary. Removing this code will hopefully make the app less power hungry. At the
moment it loads the phone's processor at about 75 percent even while the app is not in focus. This
should be close to 0 percent; the load may be due to the heavy use of threads.
We will also explore how to analyze the data so we can capture gesture movements and use
gestures in the interaction. Today we only use the output from one axis to control each
instrument, but a combination of different sensors and axes is needed to identify more
sophisticated gestures. A gesture like moving the phone around in circles will, for instance,
produce output resembling sine curves in two or more axes that are out of phase with each other.
Different gestures will produce different fingerprints like this in the sensor output, which we need to
find a way to recognise in code and act upon.
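As one possible first step towards such recognition (an assumption on our part, not something we have implemented), the sketch below checks whether two axes carry comparable energy while being roughly 90 degrees out of phase, which is the fingerprint a circular movement would leave in mean-centred accelerometer data.

```java
// Hypothetical sketch: a circular movement produces roughly sine/cosine
// shaped curves on two axes, so they have similar energy but near-zero
// correlation. Assumes the samples are already mean-centred (gravity
// and drift removed).
public class CircleGestureDetector {

    /** Returns true if the buffered x/y samples look like a circular motion. */
    public boolean looksLikeCircle(float[] x, float[] y) {
        int n = Math.min(x.length, y.length);
        double energyX = 0, energyY = 0, correlation = 0;
        for (int i = 0; i < n; i++) {
            energyX += x[i] * x[i];
            energyY += y[i] * y[i];
            correlation += x[i] * y[i];
        }
        if (energyX < 1e-3 || energyY < 1e-3) {
            return false; // hardly any movement on one of the axes
        }
        // A normalised correlation close to 0 means the axes are about 90
        // degrees out of phase, as in a circular movement; a value near 1
        // would instead indicate movement along a straight diagonal.
        double normalized = correlation / Math.sqrt(energyX * energyY);
        return Math.abs(normalized) < 0.3;
    }
}
```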
Conceptual Future Work
Apart from the technical challenges, there are several issues that need to be studied and tested
further in this project. Through user testing we hope to reveal whether our system actually supports
co-creation. Will it be more natural for users to play with this application together with others
instead of alone? Will they feel that the joint collaboration in music making improves the
final result, or just makes it more chaotic and unpleasant? Will the currently stationary base of the
ensemble, the computer that runs Max/MSP, limit the mobility of the other devices in a way
that hinders co-creation? Is the music produced good enough to motivate repeated use of the
system?
We hope the development and use of personas will help us identify different user groups' needs
and motivations, and how our music impro application may support these.
Since our application should be intuitive to use without requiring manuals or training, we need
to research which gestures and movements we should anticipate from the users, and how these
forms of interaction should best map onto the sound produced. If the user
is given the possibility to play an instrument we brand as a guitar, should we anticipate that
the user will play it by swinging the phone up and down as if strumming the strings
of a real guitar, or should we anticipate something else? We hope to understand more of the
interaction part as we study how different people interact with the application to create
music.
And how should the choices the user is given be presented in the graphical user interface?
Should we use traditional instruments as guiding metaphors, where pushing a button
shaped like a guitar produces music that sounds like a real guitar? For us as musicians
this feels natural, and it may be a good educational concept for some kids, but it can be argued
that this violates some universal design principles in that it may exclude kids who do
not know how a guitar is supposed to be played, by introducing an unnecessary distinction between
right and wrong ways of playing based on cultural knowledge. Since the sound is electronically
synthesized anyway, we should consider the possibility of using more abstract forms of
instruments with no right and wrong way of playing.
All in all we have come a long way technologically, but we still have lots of work to do on the
musical as well as the conceptual level.
References
About RHYME (2011), http://rhyme.no/?page_id=10. Accessed online 17 March 2011.
Dekel, Amnon and Dekel, Gilly (2008) MoGMI: Mobile Gesture Music Instrument. Mobile Music
Workshop '08.
Essl, Georg, Rohs, Michael and Wang, Ge (2008) Developments and Challenges turning Mobile
Phones into Generic Music Performance Platforms. Mobile Music Workshop '08.
Garrett, Jesse James (2011) The Elements of User Experience: User-Centered Design for the
Web and Beyond. New Riders: Berkeley.
Jensenius, Alexander Refsum (2008) Some Challenges Related to Music and Movement in
Mobile Music Technology. Mobile Music Workshop '08.
Kangas, E. and Kinnunen, T. (2005) Applying user-centered design to mobile application
development. Communications of the ACM, Volume 48, Issue 7, July 2005 (http://
portal.acm.org/citation.cfm?id=1070866).
Mace, Ronald L., Hardy, Graeme J. and Place, Janie P. (1991) Accessible Environments: Toward
Universal Design. The Center for Universal Design.
Plos, A. and Buisine, S. (2006) Universal design for mobile phones: a case study. ACM.
Reactable (2011), www.reactable.com. Accessed online 20 March 2011.
Robertson, P. (2001) Music and health. In: A. Dilani (Ed.), Design and Health: The
Therapeutic Benefits of Design. Elanders Svenskt Tryck AB, Stockholm.
Sharp, Helen, Preece, Jenny and Rogers, Yvonne (2007) Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons.