Student Report - Mixed Reality Lab

Student Report
Conference: ACE 2009 (Athens, Greece) (http://www.ace2009.org/)
Name: R. A. Nimesha Ranasinghe
Student No: HT080435L
Supervisor: Professor Adrian David Cheok
(KEIO-NUS CUTE Center and Mixed Reality Lab, National University of Singapore)
Day 1
After registration and the opening, we headed to the first keynote talk, given by the managing director of
Hellenic Cosmos, the cultural center in Athens, Greece, where the ACE conference was held. He presented an
introduction to Athens, the birthplace of democracy, and to Greek culture and history, followed by some of
the work done by the cultural center, Hellenic Cosmos. Given below are some of the links he provided for
more information:
http://ehw.gr
http://egeonet.gr/index_en.html
http://e-history.gr/en/index.html
http://www.fhw.gr/fhw/
NOTE: This year the ACE conference had 11 full-paper sessions and 8 short-paper sessions.
According to the schedule, some of the sessions unfortunately overlapped, so I was not able
to attend all of them. Furthermore, I found there were no workshops at the conference.
Program: http://www.ace2009.org/index.php/program
Some of the good papers from the sessions I attended are cited below, and the full conference proceedings
can be found at the following link:
\\Lacie-w2to9hra6\newShare\ACE2009\ACE_Proceedings_09
After the first keynote session, I proceeded to the first full-paper session, titled Tools for
Communication. The session chair was Professor Henry B.L. Duh from the National University of Singapore.
(All the interesting papers and demos are listed in the Appendix.)
Day 2
The second day mainly consisted of the Poster and Creative Showcase sessions along with several paper sessions.
Ken and I presented Petimo (Poster and Creative Showcase) and Poetry Mix-up (Creative
Showcase) in the Fast Forward session (one minute per project). It was a good practice session for my
15-minute presentation on day 3. After lunch came the core time for the Poster and Creative Showcase.
We had set up and checked everything for our two demos on the evening of day 1 (and tested the systems
again on the night of day 1 and the morning of day 2) so that the demos would run smoothly during the core
time.
I did the demo of the Poetry Mix-up system. The demo went really well; many people enjoyed the system and
gave their comments. Some of them talked about incorporating this kind of work into their own
research as well.
Professor Junehwa Song from Korea (KAIST), A/Professor Lindsay Grace from the USA (Miami University),
and Professor Hideaki Touyama from Japan (Toyama Prefectural University) were some of the people who
really enjoyed and commented on the system. Many of them commented on the usage and
requested that the generated poem be sent back to their mobile phones. One student from the University of
Bergen, Norway, really enjoyed the Twitter integration of the system and tried it several times with
several usernames. He commented that with this system he can now express his ideas in a poetic
manner to his Twitter friends. He further suggested developing a Twitter application rather than
uploading every time. Another professor talked about a Greek installation of this system in
Athens itself. Almost all of them took photos of their creations and really enjoyed the demo.
After the demos we rushed to the day 2 keynote presentation and missed the first part of the talk. It
was a great and inspirational talk by Professor Norbert A. Streitz from Cologne, Germany. He holds two PhDs
(one in physics and one in psychology) and is a Senior Scientist and Strategic Advisor with more than 25
years of experience in information and communication technology. He is the founder of the "Smart
Future Initiative", which was launched in January 2009. He discussed people-oriented, empowering
smartness and future research trends, as shown in the accompanying images.
Day 3
Day 3 started with the keynote talk by Régine Debatty. She writes about the intersection of art,
design, and technology on her blog http://we-make-money-not-art.com/ as well as in several European
design and art magazines. She presented many examples of good and attractive art pieces and designs.
She also spoke about her great interest in biotechnology and art (BioArt). At the end, the audience
raised questions on art and technology to draw out her viewpoint. Given below are some of the
examples she mentioned during the talk:
SlugBot: Enemy of Slugs - http://www.wired.com/gadgets/miscellaneous/news/2001/10/47156
A walking city - http://www.archigram.net/projects_pages/walking_city.html
Art + Com: Terravision http://www.artcom.de/index.php?lang=en&option=com_acprojects&id=5&Itemid=144&page=6
Sledgehammer keyboard - http://www.boingboing.net/2005/09/29/sledgehammerkeyboar.html
Hello World – Yunchul Kim - http://www.interactivearchitecture.org/hello-world-yunchulkim.html
Random assistant - http://www.joshuadavis.com/portfolio/random-assistant-lisbon/
LED eyelash - http://www.geeky-gadgets.com/led-eyelash-26-10-2009/
Gordon Pask (very early interactive artist) - http://en.wikipedia.org/wiki/Gordon_Pask
Wafaa Bilal - http://en.wikipedia.org/wiki/Wafaa_Bilal
After the keynote talk we had another Creative Showcase session, and then in the evening I gave my
presentation on the Poetry Mix-up short paper. It was my first international presentation, and it seemed
to be well received. The audience mainly asked about the user study and the impact of this work on the
public, questioning whether it really contributes to the poetry world. I answered that we are not directly
competing with the poetry world; rather, we are trying to popularize the old culture of communication through poetry.
For all the images please check: \\Lacie-w2to9hra6\newShare\ACE2009\Nimesha
Appendix
Interesting demonstrations from the Creative Showcase:
1. Multiplayer Pervasive Games and Networked Interactive Installations using Ad hoc Mobile Sensor
Networks
Orestis Akrivopoulos, Marios Logaras, Nikos Vasilakis, Panagiotis Kokkinos, Georgios Mylonas, Ioannis
Chatzigiannakis and Paul Spirakis
This work is based on the Fun in Numbers (FinN) platform (http://finn.cti.gr) and includes a set of implemented
multiplayer games and interactive installations. FinN allows quick prototyping of applications that
use input from multiple physical sources (sensors and other means of interfacing) by offering a set of
programming templates and services, such as proximity, localization, and synchronization, that hide the
underlying complexity.
2. Headbang Hero
Ricardo Nascimento, Tiago Martins, Andreas Zingerle, Christa Sommerer, Laurent Mignonneau and Nuno
Correia
A very nice demo for rock-band lovers: Headbang Hero is a music/dance video game for testing and
improving your prowess at "headbanging". As you can see in the image, the player wears a wireless
motion-sensing wig and is awarded points for their personal choreography as they shake their head to the
sound of a heavy-metal song.
3. Yaminabe YAMMY: An interactive cooking pot that uses feeling as spices (check for Kitchen Project)
Izumi Yagi, Yu Ebihara, Tamaki Inada, Yoshiki Tanaka, Maki Sugimoto, Masahiko Inami, Adrian David
Cheok, Naohito Okude and Masahiko Inakage
Another nice demo, from Keio University, Japan, is this interactive cooking idea. "Yaminabe YAMMY" is an
interactive cooking pot which provides a new way of eating and of sharing our memories and feelings.
Feelings extracted from the contents of an email, associated with a photo, are interpreted as
different "spices", which are then sprinkled into the pot to alter the food's flavor. They also have
an iPhone application for extracting the feelings.
4. Story Tube “sto-tu”
Hiroko Uchiyama, Akiko Sato, Mai Takai, Mina Shibasaki, Masahiro Ookura, Yuki Takeda, Takenori Hara,
Mina Tanaka and Shigeru Komatsubara
As can be seen in the image, this is a new approach to storytelling. It combines a "trompe-l'oeil"
image with AR technology, allowing a user to experience and enjoy multimedia content in an
unprecedented manner. The system combines computer graphics with video taken from a camera
inside a tube. The user can enjoy the story by moving the camera back and forth inside the tube. In this
demo they used a Japanese fantasy tale as the story inside the tube.
Listed below are some of the papers I found interesting:
1. RoCoS: Room-based Communication System and Its Aspect as Development Tool for 3D
Entertainment Applications
David Wilfinger, Martin Murer, Michael Lankes, Manfred Tscheligi
The system allows multiple users to communicate with each other through virtual 3D spaces, called
rooms, hosted on the Internet. The system has three main subsystems: communication, avatar
editing, and room editing. I noted that the room-editing and avatar-editing subsystems have great potential;
in particular, the room-editing tool offers new interactions such as playing a video on a television (as they
demonstrated during the presentation).
2. An Interactive Support Tool to Convey the Intended Message in Asynchronous Presentations
Andrés Lucero, Dzmitry Aliakseyeu, Kees Overbeeke, Jean-Bernard Martens
This system is an interactive wall-mounted display which can be used as a presentation tool. As the
presenter described, they used a user-centered approach to create a presentation tool that breaks
the traditional linearity of slide presentations. The tool organizes three information layers: speech,
gestures, and visuals. Hands are tracked by an ultrasonic tracking system. Finally, people can comment on
or replay the presentation using the system.
3. RoboTable: A Tabletop Framework for Tangible Interaction with Robots in a Mixed Reality
Aleksander Krzywinski, Haipeng Mi, Weiqin Chen, Masanori Sugimoto
RoboTable allows users to naturally and intuitively manipulate robots on an interactive tabletop system.
The goal of this research is to develop a software framework for human-robot interaction that
combines the tabletop, tangible objects, artificial intelligence, and physics simulation, and to demonstrate the
framework with game applications. Their main argument is that enhancing tabletop interactions with
these robots can help facilitate and teach different disciplines, such as mathematics and physics, at
different levels of education, while also helping learners to think creatively, reason systematically, and work
collaboratively. For physics simulation they used Box2D (http://www.box2d.org/), and for marker
tracking they used the reacTIVision library (http://reactivision.sourceforge.net/). A small sketch of how
such a physics world might be set up is given below.
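To make the physics part concrete, here is a minimal, hypothetical sketch of mirroring one tangible tabletop object as a Box2D rigid body and stepping the simulation; it only illustrates the Box2D library the authors mention, not their actual framework, and the object, dimensions, and damping values are my own assumptions.

```cpp
// Minimal Box2D sketch (2.2-style API): one tabletop object as a dynamic body.
// Illustrative assumption only; this is not the RoboTable framework itself.
#include <Box2D/Box2D.h>
#include <cstdio>

int main() {
    // The table surface is horizontal, so no gravity acts in the 2D plane.
    b2World world(b2Vec2(0.0f, 0.0f));

    // One dynamic body standing in for a tracked tangible object (e.g. a puck).
    b2BodyDef def;
    def.type = b2_dynamicBody;
    def.position.Set(0.5f, 0.5f);          // metres, in table coordinates
    def.linearDamping = 0.8f;              // friction-like slowdown on the surface
    b2Body* puck = world.CreateBody(&def);

    b2CircleShape shape;
    shape.m_radius = 0.05f;                // a 5 cm object
    puck->CreateFixture(&shape, 1.0f);     // density 1.0

    // In a real system the velocity would come from tracked fiducial positions
    // (e.g. reacTIVision). Here we just push the object and step at 60 Hz.
    puck->SetLinearVelocity(b2Vec2(0.1f, 0.0f));
    for (int i = 0; i < 120; ++i) {
        world.Step(1.0f / 60.0f, 8, 3);    // time step, velocity/position iterations
    }
    b2Vec2 p = puck->GetPosition();
    std::printf("object after 2 s: (%.3f, %.3f)\n", p.x, p.y);
    return 0;
}
```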
4. Wearable Haptic Device to Present Contact Sensation Based on Cutaneous Sensation Using Thin
Wires (good for James)
Takafumi Aoki, Hironori Mitake, Keoki Danial, Shoichi Hasegawa, Makoto Sato
This was selected as the best paper of the conference; the work was done at the Tokyo Institute of Technology,
Japan. It is a fingertip-mounted haptic device that presents haptic feedback for mixed reality
environments with mobile devices. The device presents contact sensation to the cutaneous sense using
thin wires, fulfilling three required technical specifications: it must be lightweight
(1.4 g), it has to have a fast response, and it should place few obstacles on the fingertip pad. Their
ultimate goal is to enable new forms of entertainment, such as letting users touch CG
characters directly with their fingers.
I found some good related works in their paper, as listed below:
Virtual Brownies: http://rogiken.org/vr/english.html
Gravity Grabber: http://tachilab.org/modules/projects/gravitygrabber.html
5. Novel Tactile Display for Emotional Tactile Experience
Yuki Hashimoto, Satsuki Nakata, Hiroyuki Kajimoto
This system uses one or two speakers to achieve a novel tactile display that presents high-fidelity
tactile information over a very wide frequency bandwidth. Users hold the speakers between their
hands while the speakers vibrate the air between the speakers and their palms. The user feels suction or
pushing pressure on their palms from the air. Due to the very wide frequency range (from 1 Hz and below to 1
kHz and above), users can feel a variety of sensations. After the presentation I tried the system; it is
very nice work and a very nice idea. A small sketch of what such a wide-band drive signal might look like is given below.
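As a purely illustrative sketch of the wide-bandwidth idea, the snippet below synthesizes a drive signal that mixes a very slow "pressure" component with a faster "vibration" component; the frequencies, amplitudes, and sample rate are assumptions chosen for illustration, not the authors' actual signals.

```cpp
// Hypothetical drive-signal sketch for a speaker-based tactile display:
// a slow "pressure" component plus a fast "vibration" component.
// All frequencies and amplitudes are illustrative assumptions only.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double kPi = 3.14159265358979323846;
    const double sampleRate = 44100.0;     // standard audio output rate
    const double duration = 2.0;           // seconds
    const double pressureHz = 2.0;         // near-DC: felt as push/pull on the palm
    const double vibrationHz = 250.0;      // felt as buzzing vibration
    const size_t total = static_cast<size_t>(sampleRate * duration);

    std::vector<double> samples;
    samples.reserve(total);
    for (size_t n = 0; n < total; ++n) {
        double t = n / sampleRate;
        double pressure  = 0.7 * std::sin(2.0 * kPi * pressureHz  * t);
        double vibration = 0.3 * std::sin(2.0 * kPi * vibrationHz * t);
        samples.push_back(pressure + vibration);   // combined signal in [-1, 1]
    }
    std::printf("generated %zu samples\n", samples.size());
    // In a real setup these samples would be written to the speakers' audio output.
    return 0;
}
```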
6. Wearable DJ System: a New Motion-Controlled DJ System
Aleksander Krzywinski, Haipeng Mi, Weiqin Chen, Masanori Sugimoto
The system uses wearable computing and gesture-recognition technologies to perform as a DJ. DJ
techniques are executed through intuitive gestures captured by wearable acceleration
sensors. They gave a demo during the presentation, and it seems like nice work as well. Furthermore, the
accuracy of the system was evaluated, confirming its effectiveness. A small sketch of how a gesture
might be detected from accelerometer samples is shown below.
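As an illustration of the kind of processing such wearable systems (and the accelerometer wig in Headbang Hero above) rely on, the following is a minimal, hypothetical sketch that counts energetic gestures by thresholding the magnitude of 3-axis accelerometer samples; the threshold, debounce window, and fake sample stream are my own assumptions, not the authors' algorithm.

```cpp
// Hypothetical shake/gesture counter from 3-axis accelerometer samples.
// Threshold and debounce values are illustrative assumptions, not the paper's method.
#include <cmath>
#include <cstdio>
#include <vector>

struct AccelSample { double x, y, z; };    // acceleration in g, one reading per tick

int countShakes(const std::vector<AccelSample>& samples,
                double threshold = 1.8,    // magnitude (in g) that counts as a shake
                int debounceTicks = 10) {  // minimum gap between two counted shakes
    int shakes = 0;
    int ticksSinceLast = debounceTicks;
    for (const AccelSample& s : samples) {
        double magnitude = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
        if (magnitude > threshold && ticksSinceLast >= debounceTicks) {
            ++shakes;                      // one energetic movement detected
            ticksSinceLast = 0;
        } else {
            ++ticksSinceLast;
        }
    }
    return shakes;
}

int main() {
    // Fake stream: mostly resting (~1 g) with two strong peaks.
    std::vector<AccelSample> stream(100, {0.0, 0.0, 1.0});
    stream[20] = {1.5, 0.5, 1.5};
    stream[60] = {0.0, 2.0, 1.0};
    std::printf("detected %d shakes\n", countShakes(stream));
    return 0;
}
```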