
University of Limerick
Contemporary Art in the Public Realm
Outdoor Installation
Group Members
Yang Jieling, Geoff Carmody, Fang Lu, Michael Bourke, David Paul & Lukasz Kotowski
Brainstorming
We began the initial idea creation with a brainstorming technique: the whole team got together and put forward every wild and crazy idea that came to mind. Over time these ideas were whittled down to ones that were realistic, interesting and achievable.
Previous ideas we had:
At the beginning of this project we had an array of ideas for the exterior installation. One was to lay a huge piece of paper across part of a road, covering items that release chalk or ink. As people crossed the paper they would step onto it unaware that they were creating art underneath. After a while we would turn the paper over, revealing the other element of the installation: the marks and footprints left behind by the users. Other ideas ranged from wind-powered charging stations for phones and rain-collecting musical art sculptures to creating an “art day” in which we show the people of UL that anybody is capable of creating art.
Ultimately, we decided to create a play on the existing self-portrait gallery housed inside the main building in UL, spawning from the idea:
“Why should a person be able to hang their face in a gallery just because they may
have drawn it?”
We set out envisioning ways we could use this gallery to our advantage while also getting the students, staff and visitors of UL interested in the piece. After many discussions, some hair-pulling and hard work, we finally came up with the concept that would come to be known as “PORTRAYED”.
Development of our installation
Wednesday 02 / 04 / 2014
This was the first meeting, held in the café downstairs. We divided the work: Geoff would handle the graphic design for the piece, Yang was to borrow equipment from Colm, and Lukasz and David were to create the music technology side of the installation. Dr. Fernstrom helped us contact Mary to gain permission to place our installation in her café by the library in UL. We discussed how to use the camera when interacting with people; the methods discussed were to deform people’s faces or to use sound to generate notes as people crossed the camera’s view.
Wednesday 09 / 04 / 2014
We made an initial plan for our installation which used the pixels of an image to trigger sound as people show their faces in front of a camera. The Max patch would translate the image’s pixels into notes in order to generate music. We encountered some problems: the thousands of pixels in each image create too much data, it was unclear how those countless pixels could be translated into musical notes, and we needed a way to attract passers-by as they appear on camera. We then discussed using the camera to take a snapshot of the user as they get close, and using sound to attract people as they cross the installation’s threshold.
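A rough sketch of the kind of pixel-to-note mapping we were discussing is below. This is only an illustration of the idea, not the actual Max patch: the frame size, the number of regions and the note range are assumptions made for the example.

// Illustration only: collapse a greyscale frame into a few region averages
// and map each average onto a MIDI-style note number, so thousands of
// pixels become just a handful of values.
#include <cstdint>
#include <vector>

const int WIDTH = 320, HEIGHT = 240;   // assumed webcam resolution
const int REGIONS = 8;                 // vertical strips of the frame to average

// Map a brightness value (0-255) onto an assumed note range (C3..C6).
int brightnessToNote(int brightness) {
    const int lowNote = 48, highNote = 84;
    return lowNote + (brightness * (highNote - lowNote)) / 255;
}

// Average each vertical strip of the frame; this tackles the "thousands of
// pixels, too much data" problem by reducing the frame to REGIONS numbers.
std::vector<int> frameToNotes(const std::vector<uint8_t>& grey) {
    std::vector<int> notes;
    const int stripWidth = WIDTH / REGIONS;
    for (int r = 0; r < REGIONS; ++r) {
        long sum = 0;
        for (int y = 0; y < HEIGHT; ++y)
            for (int x = r * stripWidth; x < (r + 1) * stripWidth; ++x)
                sum += grey[y * WIDTH + x];
        notes.push_back(brightnessToNote(static_cast<int>(sum / (stripWidth * HEIGHT))));
    }
    return notes;
}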
Geoff and Yang searched online for Max/MSP help relevant to our installation. As they tried to understand how people interact with the camera via Max, they encountered a problem: the camera we installed didn’t show anything in the Max panel. Lukasz helped by choosing a different option in Max’s video dialogue, which solved the issue.
David and Lukasz worked on building a Max patch for the sound section of the piece, building different models of musical instruments and sounds for the installation.
We then figured out how to install and place all the items needed in the café, including where to put the camera and mirror and the optimum distance between the user and the camera.
Tuesday 22 / 04 / 2014
Geoff and Yang tried to figure out how to capture people’s faces using Max patches. During this work they encountered problems: the numbers in the patch they built were unstable, which made it difficult to take photos. It was concluded that an Arduino would have to be introduced to the project in order to enable the photo-capturing procedure.
They analysed all the objects in the Max patch that deal with the changing numbers. They assumed the numbers were detecting colour, so they used different coloured items to test how the camera responded. The idea that the camera detects colour turned out to be wrong; it was instead detecting light. We used a mobile phone’s light to illuminate the camera and confirmed that the numbers changed.
After figuring out how the camera linked to the Max patch, they found it hard to determine when the computer should take a photo. The Max patch could only take a photo if the user was extremely close to the camera, potentially obscuring any image that could be captured. For this problem it was suggested that taking photos manually might be a better option: when people step in front of the camera, we press a key on the computer to take the photo.
Following this advice, we searched for the keywords “Capture webcam still image max msp”. It quickly became clear that if we wanted to capture an image we needed to convert the video into QuickTime format. The “jit.grab” object in Max was used to capture images: before capturing a photo, we need to click the “open” button in Max and select “export image”. But a problem remains: how can we save those images automatically, and do we need to name each image after each capture?
Thursday 24 / 04 / 2014
We still wanted Max to take photos through the camera automatically, and to name the pictures by itself as well. We thought we should use sensors to control Max, which means using the Arduino kit to connect Max, the webcam and the Arduino together.
Dr. Fernstrom suggested we search for the Arduino homepage on Google. We clicked the “learning” option, chose “play guide” and then selected the “Arduino to Max” option. The website describes several methods for connecting an Arduino with Max, and we found some examples there to study. Dr. Fernstrom also suggested making the installation more interesting by using hand gestures to take photos.
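The Arduino-to-Max examples we studied come down to the board reading its sensor and streaming the values over USB serial, which a Max patch can then pick up with its serial object. A minimal sketch of the Arduino side is shown below; the pin and baud rate are assumptions for the example, not values taken from the website.

// Arduino side of an Arduino-to-Max connection: read the light sensor and
// stream its value over USB serial for a Max patch to read.
const int lightSensorPin = A0;   // assumed: photoresistor in a voltage divider on A0

void setup() {
  Serial.begin(9600);            // the Max patch opens the same port at 9600 baud
}

void loop() {
  int lightLevel = analogRead(lightSensorPin);   // 0-1023
  Serial.println(lightLevel);                    // one value per line for easy parsing
  delay(50);                                     // roughly 20 readings per second
}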
For the sound section, David and Lukasz were not sure whether we should let the music play continuously or only while people are close enough to the camera. Another problem they encountered was that the camera lens was not sensitive enough: people had to step very close to the camera before the monitor would display their face clearly and precisely. Dr. Fernstrom lent us a high-quality webcam, but when it was tested in class the problem still arose; people still needed to be close to the camera to take a photo. It was thought that a sensor, with some code to make the lens zoom in and out automatically based on the detected light, might be needed.
From Saturday to Wednesday: 05 / 05 / 2014 to 07 / 05 / 2014
Prototype, Testing and Enhancement
We tested the installation in our studio. It worked well, but we didn’t enjoy the sound generated by the Max patch: it was quite noisy, playing without any clear tones or pitches. David and Lukasz enhanced the sound section by adjusting the numbers and values in the Max patch, while Yang and the other members continued testing the camera and discussing which sound was better. Finally we decided to use sounds that mimic a concert. We then brought our computer to the restaurant downstairs and set up our equipment inside. We made faces in front of the camera from outside (there is a glass wall partition between us) while the other members adjusted the sound volume and patch values to find the optimum presentation settings. During our testing we also took a video and photos in case our Max patch ran improperly later on.
Technologies used and why
Max:
Max was used to capture images coming from the attached webcam. It does this by feeding the camera feed into a jit.window that can then be shown to the end user in presentation mode. Using Max for this allowed us to determine when someone had stepped in front of the camera: when it detects movement, it outputs values based on movement and distance, which then feed into the sound generator.
Generating sound through Max allows the project to come alive as the user plays with the sounds
created. They themselves have a form of random control over how the system reacts to them as
they enter the frame. Each person will create different sounds when they stand in front of the
webcam.
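As a rough illustration of that mapping (the real version lives inside the Max patch), the movement and distance values can be scaled into a volume and a note before they reach the sound generator. The ranges and scaling factors below are assumptions for the example only.

#include <algorithm>

// Illustration of the movement-to-sound idea: more of the frame changing
// means more volume, and a closer visitor means a higher note.
float movementToVolume(int changedPixels, int totalPixels) {
    float ratio = static_cast<float>(changedPixels) / totalPixels;
    return std::min(1.0f, ratio * 4.0f);           // full volume once ~25% of the frame moves
}

int distanceToNote(float distanceMetres) {
    float clamped = std::clamp(distanceMetres, 0.5f, 3.0f);   // assumed working range
    float closeness = 1.0f - (clamped - 0.5f) / 2.5f;
    return 36 + static_cast<int>(closeness * 48);  // assumed note range C2..C6
}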
Arduino kits:
The Arduino kits were used to allow people to take a snapshot of their face and add it to our new and improved electronic self-portrait gallery. The items used were a resistor, a light sensor, wires, USB cables and a power source. The Arduino board itself is housed in a 3D-printed box to keep the setup compact and tidy.
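A minimal sketch of how the light-sensor trigger can work with the hand-gesture idea is below: covering the photoresistor drops the reading below a threshold, and the board sends a single trigger byte that the Max patch can treat as its “take photo” signal. The pin, the threshold and the trigger character are all assumptions for the example.

// Snapshot trigger sketch: when a hand covers the photoresistor the reading
// drops below the threshold and one trigger byte is sent to the Max patch.
const int lightSensorPin = A0;
const int coveredThreshold = 200;   // assumed: readings fall below this when covered
bool wasCovered = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool covered = analogRead(lightSensorPin) < coveredThreshold;
  if (covered && !wasCovered) {
    Serial.write('T');              // send the trigger once per gesture, not once per loop
  }
  wasCovered = covered;
  delay(20);
}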
How it went down with the people
Most of the participants were wary about taking a photo. They seemed bemused by
the sound as they moved towards the screen. The participants started to take more
photos when the group used the monitor under the camera to show a slide show of
photos taken. We never anticipated that people would be so shy. Maybe they were
paranoid that their self-portraits would end up being digitally manipulated. Unlike
taking a selfie, this is an image that they do not have control over, hence the
reticence. It could also be disseminated throughout the internet. There might have been more enthusiasm, perhaps, if we had called the project “Selfie”. Some participants were also intimidated by the strange sound from the speakers. Of course, there were plenty of people who participated, took photos and seemed to enjoy the project.
There were certain problems in the implementation that we had to tweak. The
numbers on the patch were increased and decreased to manipulate camera
sensitivity. We also struggled to make the sound more musical. On the day before we first displayed the project, the sound was more musical, like a horn blowing. Yet on the day we ran the project it was not quite right, though we managed to make it sound grungier and more abrasive as the day went on.