ACM CHI 2016 - Cornell Computer Science

RAMPARTS: Supporting Sensemaking with Spatially-Aware
Mobile Interactions
Paweł Woźniak1∗, Nitesh Goyal2∗
Przemysław Kucharski3, Lars Lischke4, Sven Mayer4, Morten Fjeld1
1 t2i, Chalmers University of Technology, Gothenburg, Sweden, pawelw,[email protected]
2 Cornell University, Ithaca, United States, [email protected]
3 Lodz University of Technology, Łódź, Poland, [email protected]
4 VIS, University of Stuttgart, Stuttgart, Germany, {firstname.lastname}@vis.uni-stuttgart.de
ABSTRACT
Synchronous colocated collaborative sensemaking requires
that analysts share their information and insights with each
other. The challenge is knowing the right time to share which
information without disrupting the present state of
analysis. This is crucial in ad-hoc sensemaking sessions with
mobile devices because small screen space limits information
display. To address these tensions, we propose and evaluate
RAMPARTS—a spatially aware sensemaking system for collaborative crime analysis that aims to support faster information sharing, clue-finding, and analysis. We compare RAMPARTS to an interactive tabletop and a paper-based method
in a controlled laboratory study. We found that RAMPARTS
significantly decreased task completion time compared to paper, without affecting cognitive load or task completion time
adversely compared to an interactive tabletop. We conclude
that designing for ad-hoc colocated sensemaking on mobile
devices could benefit from spatial awareness. In particular,
spatial awareness could be used to identify relevant information, support diverse alignment styles for visual comparison,
and enable alternative rhythms of sensemaking.
Author Keywords
spatial awareness; multi-device system; sensemaking
ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI):
Miscellaneous
INTRODUCTION
Post-9/11, the US Department of Justice released the National
Criminal Intelligence Plan in October 2003 [21], suggesting
better information sharing through colocated interaction between analysts as a measure to promote equal data access and
support sensemaking. The plan outlined these measures because sharing privately held data between analysts for crime-solving has so far been a challenge [18, 17, 16]. The plan also
∗ P. Woźniak and N. Goyal are joint first authors and contributed equally to this work.
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than
ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from [email protected].
CHI’16, May 07–12, 2016, San Jose, CA, USA
©2016 ACM. ISBN 978-1-4503-3362-7/16/05...$15.00
DOI: http://dx.doi.org/10.1145/2858036.2858491
Figure 1. An overview of the RAMPARTS system design, which enables
multiple users to collaborate on a sensemaking task using multiple mobile devices that use spatial awareness to seamlessly share and identify
relevant information.
called for development of technology that could mediate and
support sensemaking for collaborators meeting ad-hoc with
newer technologies like mobile devices. Since then, multiple
design solutions like interactive tabletops [61], implicit sharing between collaborators [17, 19], visualizations to reduce
biases [16], and ad-hoc connected spatially aware mobile devices for information sharing [52, 53, 64] have been pursued.
We believe that providing spatial awareness for mobile devices can enhance capabilities for data exploration of private
data shared during sensemaking tasks, which has been rarely
explored.
Unlike mobile devices, expensive high-end solutions like
tabletops are not always available and are even less preferred for collaborative sensemaking [5]. As users increasingly carry multiple mobile devices like tablets or smartphones [67], ad-hoc meetings with collaborators can now
benefit from these multiple avenues to forage for and share
information in real time. Device ethnography studies [34,
54] have shown that users are indeed seeking new ways to
forage, share and perform collaborative sensemaking using
mobile devices. While cross-device interaction across multiple devices has been pursued [4, 23, 38, 39, 41, 52], limited
progress has been made towards designing for spatial awareness between such devices [52, 53]. Colocated mobile devices have primarily been viewed as independent entities that
are agnostic to the spatial location of other devices, rendering
such collaborations tedious and non-optimal [20, 47]. Further, past work on sensemaking over multiple mobile devices
has offered only limited evaluation, restricted to single-user scenarios [23].
In contrast, this work pursues designing for sensemaking in ad-hoc collaborations where multiple spatially aware
mobile devices can now share private information easily.
While recent work in spatially aware solutions for colocated
mobile devices like HuddleLamp [52] have been pursued,
further research is needed to understand when spatially aware
interactions are beneficial over alternatives [53]. To
the best of our knowledge, RAMPARTS is the first spatially
aware mobile device based system that has been evaluated
to test its effectiveness in a sensemaking task. RAMPARTS
uses commonplace, consumer hardware like multiple tablets
to build an ad-hoc environment for colocated collaboration
by supporting spatial information arrangement to forage for
and connect data, important for sensemaking tasks like crime-solving. We evaluated RAMPARTS against two popular alternatives, an interactive tabletop and a paper-based method, in a study where 27 pairs solved a crime mystery. Our
study showed that RAMPARTS significantly reduced task completion time
(TCT) compared to the paper-based method, and did not significantly affect TCT compared to the tabletop, yet offered the advantages of an
inexpensive multi-device solution. We present design insights
for ad-hoc multidevice systems, spatially aware interactions,
and mobile collaborative sensemaking.
The paper makes the following contributions:
(1) Design and implementation of RAMPARTS, a research
prototype which simulates an environment for spatially aware
colocated collaborative sensemaking by focusing on interaction mechanisms for information sharing and analysis.
(2) User evaluation of RAMPARTS involving 27 pairs and
comparison with two alternatives (tabletop & paper-based
method) in a sensemaking task. Based on the evaluation, we
suggest implications for the future design of spatially aware
mobile multidevice systems for collaborative sensemaking.
Existing tools are impractical and/or do not provide spatial
awareness. With inexpensive, spatially aware RAMPARTS,
students could collaborate to solve assignments. In crime-solving, investigators collect evidence in the field and hold
that data privately on their mobile devices. RAMPARTS
could thus enable them to share and analyze data in the field by providing an easy, accessible way to do sensemaking using technology they always have at hand anyway.
RELATED WORK
RAMPARTS is a step towards building a new wave of tools
for enhanced sensemaking in applications like data governance [63], citizen participation [11], and crime solving [18]
where multiple users could bring their own devices [2] and interact
with the information. We begin by unpacking previous work
in sensemaking and situating RAMPARTS within it.
Sensemaking in Collaborative Analysis
As described by Pirolli and Card [51], sensemaking is an iterative process of foraging for information. Analysts iteratively forage for clues and generate mental models that explain
how these clues connect together coherently until they have
found the solution [31]. Similarly, crime investigators are required to parse data, separate relevant clues from non-relevant
ones, and identify the motive, location, and time of a crime that
would incriminate the correct person [13].
Collaboration with other analysts who have access to the
rest of the clues can be advantageous because sharing solves
otherwise unsolvable crimes [17, 27, 66]. It has been found
that sharing information and building upon shared information requires technology that makes sharing and
analysis easy for teams [31]. On the one hand, successful
collaborative sensemaking systems have to enable multiple
collaborators to perform joint analysis, but on the other hand
collaborators should be able to introduce pertinent privately
held information, without disrupting ongoing analysis. We
next discuss the tools for collaborative sensemaking and how
they tackle these challenges.
Tools that support collaborative sensemaking
Shared workspaces have been shown to improve shared understanding and awareness [12] by improving information sharing [27], improving common ground [66], and increasing
partner awareness [7, 51]. Shared workspaces have also included creating collaborative visualizations [1, 6, 28, 32, 33,
59, 62], reminding analysts to view their partners' analysis, as
in AnalyticStream [49], and recommending relevant pieces of
information from their partner [3] to increase awareness and
reach common ground.
While both explicit sharing as suggested in these tools, and
more recently, implicit sharing [17, 16] have been pursued,
sensemaking has been pursued primarily using spatially agnostic devices [23]. Most of these tools do not enable ad-hoc
introduction of spatially aware personal mobile devices for
joint sensemaking sessions. Next, we discuss why spatially
aware interactions are important for sensemaking and review the current
state of the art.
Multi-device environments
Our work is also inspired by past research in how interaction
mechanisms work between multi-display environments, particularly between larger stationary displays and mobile devices. The LunchTable [44] used multiple semi-public displays to create a casual discussion space in a lunch room.
Wallace et al. [60] and the authors of [29] built systems to support sensemaking on a shared public interactive tabletop, additionally
augmented by multiple user’s private mobile devices. Lucero
et al. [39] proposed a system for information sharing based on
proximity through gestures. Further, EasyGroups [40] suggested other ways to couple smartphones to facilitate collaboration using a docking metaphor. Alternatively, Haber et
al. [22] showed multiple tablets for reading and annotating
text by splitting text across multiple displays. Similarly, Conductor [23] has been shown to facilitate sensemaking using
highlighting to enable easy transfer of content between multiple devices. Past work shows that multiple mobile devices are
well suited for a number of tasks in collaborative settings like
enabling sharing through proximity, splitting content across
multiple devices for easier reading, and highlighting important content to be transferred across devices.
Spatially-aware interactions
Using the space between and around multiple devices as an
extended interaction space, as well as using the relative distance between devices, has been receiving increasing attention in Human-Computer Interaction (HCI) literature. Several
systems have been developed as non-mobile specialised environments for data analysis. For example, Spindler et al. [56]
built a system where multiple spatially aware tablets could enable data exploration on an interactive tabletop using a virtual
aquarium metaphor that can be explored using a magic lens.
Another system by Spindler et al. [57] explored how depth
sensing and top-down projection can augment environments
to provide richer interaction with data using a Kinect sensor
to detect hand gestures. More recently, Rädle et al. [53] have
demonstrated that users do indeed show a preference for spatially aware interactions when performing gestures.
Spatial-awareness itself has also been found to improve interaction between mobile devices. HuddleLamp [52] showed
that an inexpensive above-table sensor can be used to enable
spatially aware navigation. Further, Thaddeus [64] has illustrated how interacting with information visualisation can be
enhanced through spatial awareness using pointing with mobile devices. AdBinning [26] has also demonstrated that a
mobile device can facilitate storing and managing map information when interacting with space around a device that acts
like a bin. Alternatively, Piazza et al. [50] used a Microsoft
PixelSense to simulate an environment where spatially aware
tablets and smartphones together create a tool for drawing,
reading, and gaming. As is evident, peripheral space has been
researched for a variety of tasks.
Our design and study are based on findings from HuddleLamp [52], which
showed that spatial awareness is technically possible with
low-cost sensing; Rädle et al. [53], in contrast, investigated short
single-user tasks and focused on gesture design, while
Hamilton et al. [23] focused only on the implementation of
cross-device interaction in a limited small-scale study. While
prior work shows that spatial awareness is beneficial to interaction with data and helps build effective information patterns, it remains unexplored how such systems might
support a sensemaking task itself, and how their
use for sensemaking should be evaluated. In contrast, our work compares the impact of spatial awareness with a tabletop and paper on a collaborative sensemaking task. To the best of our knowledge, we
are the first to compare low-cost spatially aware mobile devices
to an interactive tabletop and a paper-based solution in
a specific design context.
Research in interactive tabletops has also shown that stationary horizontal interactive surfaces provide good opportunities for collaboration and enable multiple colocated users to
work efficiently [30, 55, 31]. Given the increasing quantity
of data being generated every day and the increasing need for
data governance [63], tabletops have been suggested to provide a viable solution for collaborative sensemaking tasks like
civic engagement requiring information gathering and parsing [11]. However, spatially aware devices placed together
on a horizontal surface could potentially offer a better alternative for such collaborations and sensemaking [40] because
users have reported spatially aware devices [53] to be more
beneficial than spatially agnostic devices. There are four advantages to this strategy over an interactive tabletop: First, expensive and bulky interactive tabletops might not always be
available or preferred [5]. Second, one could use both the
physical artefacts lying on a table and the digital artefacts on
mobile devices without occluding data. Third, users can conveniently share private information on their mobile devices as
opposed to a single interactive display/surface. Finally, multidevice environments are likely to be much cheaper and more
ubiquitous than tabletop computers for the foreseeable future.
DESIGN
RAMPARTS was designed taking into account the lessons
learned from the research discussed above on collaborative sensemaking and spatially aware mobile devices. RAMPARTS focuses on providing functionalities suited for a well-defined
task—a crime-solving exercise that requires information
sharing across multiple devices in an ad-hoc meeting to identify a hidden profile [58]. Users may choose to spatially arrange data on shared surfaces in multiple ways, ranging from
sharing entire datasets to sharing their insights alone. As suggested by Keel [36], arranging information in space supports
cognition and additional information can be inferred from the
way users arrange data on surfaces. One popular way of arranging data on surfaces in sensemaking tasks has been to
write on sticky notes.
RAMPARTS was developed over multiple iterations. Below, we describe results from a preliminary study and details of the subsequent
fully functional prototype. In the study, we observed
the process and outcome of sensemaking as users tried to
parse clues, identify a hidden profile, and create a story
that supported their finding. We used low-fidelity paper sticky
notes to understand their use of space, and analysed patterns
as users collaborated with each other to move the relevant
notes together into clusters for parsing the information.
Study task
For the study, we ran an exercise with multiple collaborators
in a complex crime-solving task, the murder mystery suggested by Stanford et al. [58]. Collaborators are given 31
clues, each on a sticky note, and have to identify the name of
the criminal, and the location, time, and motive of the crime.
Similar crime-solving tasks have been pursued to better understand how to design for collaborative sensemaking [35, 18,
17, 16]. In the task, 5 people were involved in a murder mystery that took place between 11:30 pm and 1:30 am, and the
criminal had to be identified within 25 minutes.
This task was made more complex by adding non-relevant information aimed to confuse the participants. The main challenge in the task was the fact that the victim (Mr. Kelley)
was both shot and stabbed on the night of the crime. Participants had to understand the content in depth to separate the
side plot from the main story. Separating the multiple plots
enabled them to determine who wounded whom and which
of the events was relevant. The solution, as given by Stanford
et al. [58], was: “After receiving a superficial gunshot wound
from Mr. Jones, Mr. Kelley went to Mr. Scott's apartment
where Mr. Scott killed him with a knife at 12:30 AM because
Mr. Scott was in love with Mr. Kelley's wife.”
Figure 2. A participant in the preliminary paper-based study solving a
crime mystery. The user identifies relevant clues and aligns them vertically as a support strategy.
Using a crime-solving task, in a preliminary study similar to
Fisher et al. [10], we observed users employing strategies to
spatially organise information, as described below.
Preliminary study
We conducted a preliminary study with five participants (3
male, 35–52 years old), each taking part in an approximately
half-hour session. These participants were recruited through
snowball sampling and were peer researchers and students
who performed the task voluntarily. After the study, participants were thanked and compensated with lunch. During the
study, they were asked to solve a crime mystery using clues
printed on strips of paper (shown in Figure 2). We asked the
participants to think aloud (which has been shown to generate rich data without affecting cognitive processes [8,
46]) and describe their approach to the solution. All participants used some spatial strategies to organise the information.
These could be classified primarily into the following themes.
Relevance: Firstly, grouping information, i.e. creating spatially separated clusters of paper strips, was the most often
applied solution. Users would sort the notes several times
based on different criteria, e.g. selecting notes containing a
particular name or place. One participant tore the answer
sheet to create markers showing the centres of the different
groups. Users also expressed their wish to mark the identified
relationships between the clues (e.g. two clues containing the
same name) in some way:
I would like to be able to mark that Mr. Jones is on all
of these strips. There's quite a lot of these clues.
Alignment: Secondly, four participants would align the clues
in a grid or along an axis to get an overview of one of the
dimensions of the story. One participant constructed a two-dimensional grid where events were sorted chronologically
on the horizontal axis and sorted by suspect names on the
vertical axis.
Overall, results of our informal preliminary study were
promising, suggesting that spatial arrangement of information contributed by multiple collaborators, referred to as
alignment, could be useful for identifying relationships, represented by clusters, and relevance, represented by highlighting. These results extend previous work by Fisher et al. [10],
who found that despite the distributed nature of sensemaking, collaborators found organizational structure generated by
previous collaborators to be highly useful for sensemaking.
Further, we also found that simple paper sticky notes could
support spatial layouts and promote the development of rich
mental schemas. We next examine these affordances by designing and evaluating RAMPARTS, using digital sticky notes
that afford similar interaction mechanisms.
The RAMPARTS system
Through our preliminary study and using previous similar
work [10], we noted that spatial awareness enriches the interaction when clues are distributed across collaborators. Users
used the spatial relationships between the textual clues to support their sensemaking process. Consequently, RAMPARTS
is built to support two design principles, Relevance and Alignment. We augmented RAMPARTS with features specific to
the crime-solving hidden profile task. Here, we present the
final design of the prototype. The system enables putting
any number of mobile devices owned by collaborators on a
horizontal surface and using them together to perform joint
analysis. Multiple users share a common interaction space
and collaborate by manipulating the clues, displayed as digital sticky notes, on and between these mobile devices. Such
digital sticky notes have been shown to support externalizing
and spatially organizing insights during collaborative sensemaking tasks previously [18, 17, 16].
RAMPARTS displays the clues available to the user in the
form of “digital sticky notes” and enables a user to manipulate
them freely. Sticky notes are fixed to tablet surfaces. This
way, users can arrange pieces of information as they choose,
both by using touch to move them on the device screens and
using gestures to move them between devices (as suggested
by Rädle et al. [53]). Users could also physically move the devices to move multiple sticky notes simultaneously. A swipe
(flick) gesture is used to transfer a sticky note to a neighbouring device. The direction of the swipe determines to which
device the note is transferred. In order to simplify the interface, a note is always moved to the nearest device in the swipe
direction. This mechanism is illustrated in Figure 3a. Swiping worked in six degrees of freedom (6DoF): e.g., hovering a
device above another device allowed users to “drop” a sticky
note onto the device below or “throw” a note up, just as
one would expect a physical sticky note to behave.
(a) Swiping to transfer a sticky note to another
tablet. RAMPARTS employs spatially-aware
gestures to move notes between devices.
(b) Highlighting colours notes on all devices
containing the same information as chosen by
the user with a long tap.
(c) The timeline display: with a long tap on a
date and time, clues containing time information are highlighted and connected chronologically with lines across multiple devices.
Figure 3. Three interaction patterns in RAMPARTS. (a) shows moving sticky notes between tablets. The two sensemaking support features in RAMPARTS for solving the logical puzzle are shown in (b) and (c). The dotted lines are not displayed and the solid lines are visible in timeline mode.
As a response to the strategies participants employed in the
paper-based preliminary study, we developed two interaction
patterns that provided additional support for sensemaking.
Relevance: First, multicolour highlighting based on content
helped users identify the distribution of people, objects, and
locations among the clues. Our users were able to select a
given piece of information (e.g. a name) to highlight all other
sticky notes containing that information. The “origin” sticky
note was then highlighted in a different colour. The feature
was activated with a long tap on the information piece, in
order to differentiate between this action and repositioning a
sticky note, see Figure 3b.
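The content matching behind this feature can be sketched as follows; the note structure, entity strings, and function name are our own illustration, not the RAMPARTS implementation:

```python
# Sketch of content-based highlighting: given the information piece the
# user long-tapped (e.g. a name), find every sticky note on any device
# that contains the same string. The dict-based note model is hypothetical.

def highlight(notes, origin_id, selected_text):
    """Return a colour assignment for the origin note and all matches."""
    colours = {}
    for note in notes:
        if note["id"] == origin_id:
            colours[note["id"]] = "origin"      # tapped note, distinct colour
        elif selected_text.lower() in note["text"].lower():
            colours[note["id"]] = "match"       # same information elsewhere
    return colours

notes = [
    {"id": 1, "text": "Mr. Jones argued with Mr. Kelley at 11:30 pm"},
    {"id": 2, "text": "Mr. Scott owned a knife"},
    {"id": 3, "text": "Mr. Jones left the bar before midnight"},
]
print(highlight(notes, 1, "Mr. Jones"))
# note 1 is the origin; note 3 also mentions Mr. Jones
```

In the real system the server would broadcast such a colour assignment to every device, so matching notes light up across the whole shared surface.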
Alignment: We created the timeline display inspired by how
users arranged content along a chronological axis. We observed that the lack of additional visual aids made users employ complex grids. Consequently, we designed a help tool
for aligning clues chronologically. Activated by a long tap
on a time expression (e.g. a date and/or time), RAMPARTS
highlighted all the other notes containing time expressions
and connected them with lines in a chronological order as
shown in Figure 3c.
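The chronological connection step can be sketched as follows; the tuple representation and the assumption that time expressions are already parsed into minutes are our own simplification, not the RAMPARTS data model:

```python
# Sketch of the timeline feature: notes carrying a time expression are
# sorted chronologically, and each consecutive pair is connected with a
# line. Time parsing is omitted; times are pre-extracted minute offsets.

def timeline_segments(timed_notes):
    """timed_notes: list of (note_id, minutes). Returns the consecutive
    note-id pairs that should be joined by timeline lines."""
    ordered = sorted(timed_notes, key=lambda n: n[1])
    return [(a[0], b[0]) for a, b in zip(ordered, ordered[1:])]

# e.g. crime window from 11:30 pm (0 min) to 1:30 am (120 min)
print(timeline_segments([(5, 60), (2, 0), (9, 90)]))  # → [(2, 5), (5, 9)]
```

Each returned pair would then be rendered as a line in global coordinates and clipped to the screens it crosses, as described in the Implementation section.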
Both the highlighting and the timeline spanned multiple devices, thus showing the continuity of a single multi-display interaction space. The display was updated when
the devices were moved or sticky notes repositioned so that
the users could benefit from the chronological ordering and
clustering simultaneously. Having identified Relevance and
Alignment as RAMPARTS’s two interaction mechanisms, we
now discuss the implementation details for RAMPARTS.
IMPLEMENTATION
RAMPARTS consists of three main components: the motion
tracking system, the RAMPARTS mobile application, and the
coordination server.
As our work focuses on designing and understanding interaction for spatially aware environments, we chose the
most accurate method of acquiring positional information
available—infrared marker-based 6DoF motion tracking. Past
work [52] and current commercial developments have shown
that embedded accurate mutual positional sensing will soon
be available for commercial smartphone models. Consequently, we assume that a marker-based solution accurately
approximates the predictable future technical landscape. This
assumption is also a foundation of past work e.g. [26].
We used Qualisys Oqus technology to acquire positional information. The devices are identified in the system and
tracked as rigid bodies in 6DoF. The Qualisys Track Manager
(QTM) Real Time Server provides tracking information (i.e.
the position and orientation of all the devices) and streams it
over a network protocol. The positional information is transmitted over a wireless network to the RAMPARTS server. In
order to provide accurate tracking even when occlusions occur, we use a ceiling-mounted system with 8 cameras.
RAMPARTS is implemented as an application for the Android mobile operating system. The application needs to be
installed on each of the devices in order for them to be part
of the system. The RAMPARTS interface is adaptable to
screen size. Functionality of the application includes displaying sticky notes, moving them within a tablet, throwing them
to a different tablet, activating the highlight and activating the
timeline. All the spatially-aware interactions on the devices
are prompted by the server. The application can receive signals
to display a new sticky note, to highlight sticky notes, or to display the timeline. In the case of the timeline, the coordinates for
drawing the chronological lines are provided by the server.
The RAMPARTS server processes the positional information received from the QTM server to determine the desired
behaviour of the mobile devices based on their spatial arrangement. It also acts as an intermediary for communicating user interactions between devices. In order to complete
these tasks, the server stores and updates the positions of all
the devices and sticky notes in the global coordinate system
(i.e. the coordinate system of the motion tracking data). All
devices receive commands from the server. The server notifies devices about interactions with other devices, the need
to display new content, and the activation of the sensemaking support features. The devices also report any user input to the
server, and the server replies with the appropriate response. It
is capable of simultaneously responding to events occurring
on all the devices. When a user performs the swipe action
to throw a sticky note to another device, the original device
provides the coordinates of the sticky note and the swiping
vector in the device’s local coordinate system. The vector is
then transformed to the global coordinate system using the
device size and the Euler angles provided by motion tracking. Virtual lines connecting the original device with all
other devices are then created. The line closest to the vector
is identified to choose the target device. Boundary conditions
prevent swiping into spaces where no device is present. Similarly, to display the timeline, each consecutive pair of chronologically ordered sticky notes is connected with a line in the
global coordinate system. Next, the visible parts of the lines
are recalculated to the local coordinate systems of the devices
and a line drawing command is sent.
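The target-selection step can be sketched in a simplified 2D form; reducing the 6DoF pose to a single yaw angle, and all function and variable names, are our own illustration rather than the RAMPARTS source:

```python
import math

# Sketch of choosing the target device for a swipe. The swipe vector,
# reported in the source device's local frame, is rotated into the global
# (tracking) frame using the device's yaw; the target is the device whose
# bearing is angularly closest to the swipe direction.

def to_global(vec, yaw):
    """Rotate a local 2D vector by the device's yaw angle (radians)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * vec[0] - s * vec[1], s * vec[0] + c * vec[1])

def pick_target(source_pos, source_yaw, swipe_vec, others):
    """others: {device_id: (x, y)} in global coordinates.
    Returns the id of the device whose bearing best matches the swipe."""
    gx, gy = to_global(swipe_vec, source_yaw)
    swipe_angle = math.atan2(gy, gx)
    best, best_diff = None, math.inf
    for dev, (x, y) in others.items():
        bearing = math.atan2(y - source_pos[1], x - source_pos[0])
        # angular difference, wrapped into [0, pi]
        diff = abs(math.atan2(math.sin(bearing - swipe_angle),
                              math.cos(bearing - swipe_angle)))
        if diff < best_diff:
            best, best_diff = dev, diff
    return best

# A device rotated 90° counter-clockwise: a local "rightward" swipe points
# towards global +y, so the device to the north is chosen.
others = {"north": (0.0, 1.0), "east": (1.0, 0.0)}
print(pick_target((0.0, 0.0), math.pi / 2, (1.0, 0.0), others))  # → north
```

A boundary condition, as in the paper, could reject swipes whose best angular difference exceeds a threshold, so notes are never thrown into empty space.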
HYPOTHESIS AND RESEARCH QUESTION
We adopted the following hypothesis for the study:
H: Due to sensemaking features, RAMPARTS (R) will
shorten task completion time in comparison to paper (P).
As prior work has shown that digital tools can support sensemaking effectively, we expected that the enhanced sensemaking support features of R would allow for solving the crime
mystery faster than when using analogue tools. Additionally,
we investigated one research question:
RQ: How well does low-cost R perform compared to a highend tabletop (T)?
As no past work has explored spatial awareness for sensemaking, although several sources confirm its effectiveness, we endeavoured to explore how the low-cost ad-hoc R would compare
to an expensive tabletop system.
EVALUATION
This section describes the study we performed to evaluate
RAMPARTS to better understand the cost-benefit trade-off
of enabling such a system over traditional interactive tabletops—as well as paper—in terms of accuracy, efficiency, and team experience. In a between-subjects study
we compared RAMPARTS (condition R) to two systems, a
tabletop system (condition T) and a paper-based solution with
sticky notes (condition P). Participants were asked to solve
the crime solving task we used in the preliminary study.
Participants
A total of 54 participants (38 male, 16 female, aged 18–61,
M = 28.81, SD = 10.16) completed the study. The participants were recruited by word of mouth on the university campus. Remuneration was provided in the form of a small gift of
choice selected from a gift box (maximum value 20 USD). 60
users were recruited individually and 30 pairs were matched
through matching schedules and preferred study times. Six
users quit the study before completing it or did not provide full answers to the measures. Consequently, we excluded
these participants from the analysis. This resulted in 18 participants in the R condition, 20 participants in the T condition
and 16 participants in the P condition. Users self-reported as
strangers. We anticipate that some of them might have been familiar
with interacting with a tablet. None of them were familiar
with interactive tabletops. None of them were familiar with
digital implementations of sticky notes.
Apparatus
In the R condition, the RAMPARTS system was deployed
on three HTC Nexus 9 tablets (8.9 in screen diagonal) as
shown in Figure 4a. Tablets were placed on a table measuring 100 cm × 130 cm. A Qualisys Oqus with 8 cameras
hanging from the ceiling of the room was used for motion
tracking with markers (0.5 cm in diameter) attached directly
to the tablets. A high table was used and two chairs were provided so that the participants could freely choose to perform
the task sitting or standing. The chairs were placed slightly
off the table to suggest that the participants were free to arrange the furniture as they wished. The clues were equally
distributed in stacks on the three tablets. The server software
logged features used throughout the experiment.
In the T condition, a tabletop application was developed for
the user study (see Figure 4b). The system provided the
same features as RAMPARTS, but ran on the Samsung SUR40 interactive table, which provides a working area of 88.5 cm × 49.8 cm. The table used the Microsoft PixelSense framework and represents a common format for an interactive table, with a history of successful deployments (e.g. [42, 43]).
The application presents clues in three stacks of sticky notes
that use the same font and colour as RAMPARTS. Due to
the dimensions of the table, participants were initially seated
in this condition, but were reminded they could move freely
around the room.
In the P condition, participants were presented with three randomised stacks of pre-printed sticky notes containing clues
placed on a table measuring 120 cm × 120 cm, as shown in
Figure 4c. The colour of the notes and the font matched those
of the two other conditions. The chair-and-table arrangement
replicated the one in the R condition and participants were
free to choose their stance. Participants were free to modify the pre-printed sticky notes in the P condition, as they could in the other two conditions. All sessions were recorded using
a video camera and a conference microphone. Participants
expressed their consent to the recording prior to the study.
In all three conditions, a pre-printed answer sheet, which participants were asked to fill out at the end, was provided along with a single pen. In all three conditions, the initial arrangement of notes was similar: three piles were placed at the same spots and at the same relative distances to each other. We controlled for the different table sizes by ensuring that, when spread out, the sticky notes would require proportionately more space than the available surface in all three conditions. Consequently, in all experimental conditions the sticky notes were equally hard to spread across the surface, forcing the users to find strategies to effectively manage space if they wanted to have an overview of all the data.
Procedure
Each participant was individually greeted by the experimenter
and then introduced to their partner for the study. A short
demographics questionnaire was then administered. Next, the
(a) Condition R — RAMPARTS
(b) Condition T — tabletop system
(c) Condition P — printed sticky notes
Figure 4. The three experimental conditions used to evaluate RAMPARTS.
participants were introduced to the task and to the functionalities of the system in the condition they were assigned to, and were then asked to play the role of analysts. A sample task was prepared with the same number of clues as the crime-solving task, but with far less difficulty. Participants were
given as much time as they required to familiarise themselves
with the functionality of the system and the experimenter was
available to answer any questions.
Having ensured that the participants were comfortable using the system, the experimenter introduced them to the task using the task source book [58]. Users had to juxtapose, combine, and eliminate clues to determine the murderer, the time of the murder,
the location, and the weapon. The number of clues provided
was intended to be large enough to warrant a discussion and to require more than one person to solve effectively. There were
redundant clues in the set and participants were supposed to
agree to eliminate them. The experimenter then handed the
participants the answer sheet and a pen. Participants were informed that the estimated time for completing the task was
25 minutes. However, they were welcome to take as much
time as they required. We instructed participants to prioritise
accuracy over speed and be as certain as possible about the
answer before they decided they were finished.
We measured how users manipulated the sticky notes over
the duration of the sensemaking task and, in particular, how they moved them around to create new solutions. One of the experimenters analysed the session video recordings to count how often the sticky notes were rearranged in the first half of the task, in the second half, and in total, to understand the rhythm of sensemaking. A move was counted as the movement of one sticky note or a group of sticky notes (with a swipe or drag) or the movement of a paper note. Moving multiple notes with a single move was
only possible in the P condition (when they were glued together) and this was also counted as one move. The counting
began after the first 5 minutes of the task, to avoid differences
due to modalities and initial setup.
The solution to the puzzle was then revealed. Next, each participant individually completed a questionnaire consisting of
the “raw” NASA Task Load Index (NASA-TLX) [24] to measure the perceived workload. The study ended with a debriefing and a semi-structured interview where participants were
asked to comment on their experience of the system, the perceived difficulty of the system, their strategy of solving the
problem, and how the system supported the chosen strategy.
Measures
We used three data sources to evaluate RAMPARTS: video and audio recordings, answer sheets with the name of the criminal and associated details, and post-task survey responses about workload and team experience.
Task Performance: Accuracy and Task Completion Time
As two indicators of task performance we use task completion time (TCT) and the quality of the answers. TCT is the time between when the
question sheet was given to the participants and when it was
handed back to the experimenter. The time was measured by
the experimenter. The accuracy of the answers on the question sheet was measured by the number of correct answers. Each of the four questions (name of the criminal, location, time, and motive) was rated in a binary fashion as right or wrong.
Perceived Workload
Each participant rated the perceived workload of the study task with the six NASA-TLX questions, covering mental demand, physical demand, temporal demand, performance, effort, and frustration [25]. Departing from the original NASA-TLX, we omitted the weighting of individual questions. This modification is often applied and is well known as the raw NASA-TLX [24].
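The raw TLX score is simply the unweighted mean of the six subscale ratings. As a minimal sketch, assuming hypothetical ratings on an illustrative 0–20 scale (the names and values below are our own, not study data):

```python
# Hypothetical ratings for one participant on the six NASA-TLX
# subscales (illustrative 0-20 scale; not data from the study).
ratings = {
    "mental demand": 11,
    "physical demand": 4,
    "temporal demand": 9,
    "performance": 7,
    "effort": 12,
    "frustration": 6,
}

# The raw NASA-TLX omits the pairwise weighting step of the
# original procedure and simply averages the six subscale ratings.
raw_tlx = sum(ratings.values()) / len(ratings)
print(round(raw_tlx, 2))  # 8.17
```

Averaging, rather than weighting, keeps the questionnaire short while preserving an interpretable aggregate score.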
Sticky Moves
As described in the procedure, we counted how often participants rearranged the sticky notes in the first half of the task, in the second half, and in total.
Results
Task Performance: Accuracy
Most participants delivered correct answers (4 out of 4 murder circumstances reported correctly). One participant pair in the T condition produced an entirely wrong answer (0 out of 4 correct answers). Another pair provided a partial answer (3 out of 4 answers, condition T). Overall, we saw no significant difference in accuracy.
Task Performance: Task Completion Time
TCTs were extracted from the video recordings and the results are shown in Figure 5a. The lower the TCT, the better the task performance. Both R (MR = 1320 s, SDR = 253 s) and T (MT = 1578 s, SDT = 275 s) performed better than P (MP = 1941 s, SDP = 499 s), owing to the sensemaking features. A one-way ANOVA was performed to determine the significance of the mean differences between the three conditions. The main effect of experimental condition on TCT was statistically significant (F2,22 = 6.97, p = 0.005). Despite the unequal sample sizes, Levene's test of equality of variances was not significant (p = 0.128). A Gabriel post hoc test, which corrects for small variations in sample size, revealed that R performed significantly better than P (p = 0.01 after Bonferroni correction). The results support our hypothesis H that the sensemaking features in R shorten TCT compared to P. With regard to our RQ, the low-cost R performed equally well compared to the high-end T.
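The analysis pipeline above can be sketched with SciPy. The samples below are synthetic draws matching the reported group means and standard deviations (with 9, 10, and 8 pairs per condition); SciPy does not provide the Gabriel test, so Bonferroni-corrected pairwise t-tests stand in for the post hoc step:

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Synthetic per-pair TCT samples (seconds) drawn from the reported
# means/SDs; group sizes follow the pair counts in each condition.
rng = np.random.default_rng(42)
groups = {
    "R": rng.normal(1320, 253, 9),
    "T": rng.normal(1578, 275, 10),
    "P": rng.normal(1941, 499, 8),
}

# Levene's test checks homogeneity of variances despite the
# unequal sample sizes.
_, lev_p = stats.levene(*groups.values())

# One-way ANOVA for the main effect of condition on TCT.
f_stat, p_val = stats.f_oneway(*groups.values())

# Pairwise post hoc comparisons with Bonferroni correction over
# the three comparisons (a stand-in for the Gabriel test).
for a, b in combinations(groups, 2):
    _, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(p * 3, 1.0):.3f}")
```

Running Levene's test before the ANOVA mirrors the paper's check that the variance assumption holds despite the uneven group sizes.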
Perceived Workload
NASA-TLX questionnaire results are presented in Figure 5b.
The three conditions reported similar aggregate scores:
MR = 9.40, SDR = 3.24; MT = 8.61, SDT = 2.63; and MP = 8.16, SDP = 2.08. A mixed effects model with pair as
the random effect and condition as the independent variable
revealed no significant difference in the reported NASA-TLX
scores (F2,24 = 0.543, p = 0.59).
Sticky Moves
We also recorded how often the sticky notes were moved by the participants to mark relevance or to change an existing alignment. In the first half of the TCT, users did not manipulate sticky notes any differently across the three conditions (MR = 41.11, SDR = 39.20; MT = 50.90, SDT = 31.01; MP = 44.86, SDP = 17.79; F2,24 = 0.25, p > 0.05), suggesting that users approached the task similarly at the onset. In contrast, a significant difference was observed for the number of sticky notes moved in the second half of the task (MR = 16.44, SDR = 14.83; MT = 94.2, SDT = 55.04; MP = 61.88, SDP = 54.00; F2,24 = 6.98, p < 0.01). Post hoc tests revealed a difference in the rhythm of sensemaking: RAMPARTS users moved significantly fewer notes than users in the T condition in the second half of the task (p < 0.05). Additionally, a Pearson product-moment correlation revealed no correlation between TCT and the number of sticky notes moved (r = 0.09, p > 0.1).
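The correlation check can be reproduced with `scipy.stats.pearsonr`; the arrays below are synthetic stand-ins for the per-pair logs, not study data:

```python
import numpy as np
from scipy import stats

# Synthetic per-pair observations: task completion time (seconds)
# and total sticky-note moves; values are illustrative only.
rng = np.random.default_rng(7)
tct = rng.normal(1600, 400, 27)
moves = rng.normal(100, 50, 27)

# Pearson product-moment correlation between TCT and moves.
# Independent random draws should yield r near zero, mirroring
# the reported lack of correlation.
r, p = stats.pearsonr(tct, moves)
print(f"r = {r:.2f}, p = {p:.3f}")
```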
Qualitative observations from RAMPARTS
Two researchers watched all the recorded video material from condition R and coded the videos for participant position and device management. Coding discrepancies were then resolved through discussion. Overall, 3 pairs out of 9 decided to sit opposite each other. These three pairs chose to use one tablet per person as a “personal” device and put the remaining tablet in the middle of the table to share information. Six pairs decided to sit shoulder
to shoulder and, consequently, chose to employ different tablet arrangements.
Figure 6. The number of sticky notes moved in the first half of the task, the second half, and in total for the three conditions. Statistically significant results are marked with *.
Five of these pairs spread the three tablets
horizontally or almost horizontally. These pairs would then
initially process the information on each tablet individually,
tablet by tablet before starting collaborative work. The users
would then arrange information spatially, splitting it between
the three devices. They used their finger or a pen to attract the
attention of their partner. One of the participants commented
positively on the ability to arrange the sticky notes spatially:
The ability to arrange them spatially was really good, it
makes the task easier. [Participant RA05]
Another pair used a strategy where one user dominated: they led the discussion and controlled two tablets in a line, while the other user provided relevant information from the third
tablet. Users would also often move one of the tablets up to indicate on which device they were focusing at a given moment. One user decided to periodically remove one tablet from the line and move it above a second one to compare information (Figure 7a): whenever they felt two facts led to a conclusion or appeared to be contradictory, the user would move one tablet out of the line to focus the discussion on resolving the issue. That movement would then be repeated each time a discussion was required to agree on a conclusion based on information from two tablets. Another pair placed an L-shaped tablet structure (Figure 7b) in the very centre of the
table, interacting with the three devices as if they were a single large interactive surface. This happened once the pair had individually browsed the information on the three tablets; the L-shaped structure marked a transition between acquainting oneself with the data and beginning a discussion with the partner. Sensemaking support features like Highlighting for
Relevance in RAMPARTS were reported as useful tools:
I really enjoyed painting it with the green colour, I didn't have to look which one is about the same topic, with my partner we could just click on it. [Participant RB03]
(a)
(b)
Figure 5. Mean values for task completion time (TCT) and NASA-TLX
in the experimental conditions, the error bars representing the standard
error. The statistically significant difference between R and P in TCT is
marked with a *.
One pair used a “three-dimensional” approach (Figure 7c).
One user would often hover the tablet above the other tablets
and “pour” sticky notes onto them. They would raise the
tablets above the table to direct the partner’s attention and
compare information with a tablet lying on the table. They
would also both hold their tablets up high to juxtapose their
(a) Breaking the line to compare
(b) A continuous L-shaped structure
(c) Using the tablets in 3D
Figure 7. Excerpts of study footage showing the different spatial strategies employed by the participants to facilitate solving the crime mystery.
individual findings. Another group specifically planned individual and group work phases. They spread all the notes onto
two tablets and then analysed individually, temporarily ignoring the third device. A discussion phase followed and the entire process was repeated three times with a final discussion.
Users also reflected on the tangibility of the tools and the ability to distribute the content between the different tablets for Alignment. They also confirmed that spatial arrangements of the tablets were used to filter and combine information:
I started assigning spaces to information, for example
this is a pile of irrelevant information. [Participant
RA04]
DISCUSSION
We found that both R and T performed better than P, indicating that the implemented sensemaking features were indeed effective; this supports our hypothesis.
One possible explanation for R's performance is that participants used the tablets as containers to categorise pieces of information (for example RA04). The affordance of display bezels to be seen as information containers has been well described previously in the literature [61]. Devices in our study allowed for crisp divisions between data sets, offering support for relevant information clustering. Furthermore, the tablets themselves can be moved around like sticky notes. This is an advantage over the other two conditions, where sticky clusters could not be moved conveniently. Our results suggest that the tangibility of individual tablets can provide extensive support for organising information. While this was hinted at in Conductor [23], RAMPARTS shows that these affordances are also
valid in a collaborative setting. The lack of significant differences in the NASA-TLX results shows that the technology used
in all three conditions required a similar amount of effort from
the users.
Surprisingly, we observed a significant decrease in the number of notes moved towards the end of the task with RAMPARTS. During the second half of the experiment, sticky notes were moved far less in R than in T and P. Based on qualitative observations, we attribute this to the fact that note groups could be moved by manipulating the tablets, so fewer individual moves were needed in R. No such physical categorisation was possible in T and P. On the other hand, even though users moved notes far more in T than in R, the two conditions were equally effective in terms of TCT. Determining the cause of that difference remains an open question.
Further, the difference in the number of notes moved during the different phases of sensemaking points to different rhythms of sensemaking supported by different technologies. While it must be pointed out that participants in all conditions moved an approximately similar number of sticky notes over the whole task duration, some conditions involved more note manipulation towards the end as opposed to earlier on. Perhaps spatial awareness reduces the need for moving notes, because relevant sticky clusters, represented by tablets acting as “bins”, can be moved around and aligned in multiple styles to perform visual comparison.
In support of our RQ, while we observed no significant difference in TCT between RAMPARTS and the tabletop, we believe that R has a key advantage over T, because RAMPARTS supports creating ad-hoc environments by converting mobile devices into a multi-display interface. As postulated by Fjeld et al. [11], further research is needed for turning everyday spaces into interactive discussion environments. In contrast to a bulky and expensive tabletop interface, RAMPARTS uses on-body mobile devices, ready for use in casual settings. On the other hand, we also show that the significant decrease in TCT confirms observations previously made by Hamilton and Wigdor [23] about multiple distributed devices being effective in supporting sensemaking.
We believe the insights presented here are not exclusive to crime solving. Our work may generalize beyond crime-solving tasks to other similar collaborative, time-critical tasks such as crisis informatics and medical sensemaking in hospitals. Further, the spatially aware mobile device technology used in RAMPARTS would enable evidence collected by analysts in the field, across domains like citizen science, to be seamlessly shared for colocated analytics without depending on situated high-end infrastructure. For example, citizen scientists could gather soil sample data, and then use RAMPARTS to perform partial analytics, generate new insights, and reduce workload, all in the field [45].
Furthermore, in domains beyond crime-solving, the specific
sensemaking features of RAMPARTS could also be useful.
For example, researchers reported temporal ordering as important for collaborative web search [48]. Overcrowding of hospitals, which requires better analysis by nurses [49], could be addressed by visualizing algorithmically identified relevant data as offered by RAMPARTS. Such scenarios are likely to be encountered in domains such as emergency response [37] and firefighting [9]. This illustrates a key advantage of RAMPARTS over the tabletop: RAMPARTS can be an effective tool to proliferate digitally-supported sensemaking to new domains, as it uses mobile devices that are likely to increase in number and availability. In contrast, tabletop interfaces are becoming less common. As a consequence, we interpret the lack of significant differences between T and R as positive.
LIMITATIONS
RAMPARTS is primarily designed to simulate spatial awareness with multiple mobile devices for colocated collaborative sensemaking. However, RAMPARTS has been evaluated
with a limited number of devices handled by a pair of collaborators in a relatively short-lived collaborative sensemaking
task of a particular type. Scaling up the number of devices
or users might introduce potential challenges like information management across the multiple devices. Longer-term sensemaking might require provenance of analysis to ensure that analysis performed in the past is not lost in the future.
Further, lab settings enable designers to vary design choices in controlled settings to understand the effects of each choice across multiple measures. Field research is needed to understand the impact of spatially aware mobile devices on real-life crime-solving teams with datasets of varying sizes. We hope
that future work will address some of these challenges.
DESIGN IMPLICATIONS
The findings of our work have implications for designing
colocated distributed collaborative sensemaking tools. Since
the presence of spatial awareness had no negative effect on
task performance compared to expensive bulky tabletops, we
propose that tools should support spatial awareness to promote timely information sharing. We see spatially aware
mobile device solutions like RAMPARTS as a part of the
larger design landscape of colocated collaborative sensemaking tools. At one end, one may design for sterile, static environments like “war rooms” replete with interactive tabletops and large wall displays; at the other end, designers might need to support ad-hoc sensemaking tasks by analysts on the move while crowds-on-call aid them with sensemaking tasks [14, 15]. We locate RAMPARTS somewhere in the
middle, where impromptu ad-hoc sensemaking is performed
by the analysts themselves using devices on hand. By simulating and evaluating a spatially aware system that can effectively support sensemaking, we demonstrated that spatial
awareness allowed users to organise information (transferring
sticky notes between devices) and facilitate their collaborative sensemaking process (e.g. by moving devices to manage attention). Consequently, we believe that future systems
should consider the spatial aspect of interaction and strive to
support relevance and alignment through spatially aware gestures [53].
Within the larger design landscape, designers may choose
to create solutions that interact only with other mobile devices, or might also integrate colocated non-digital artefacts
into sensemaking processes. While RAMPARTS is designed
closer to the former solution, our study revealed that some
participants used mobile devices in the space above the table
with three-dimensional interactions such as dropping post-it notes onto another device as a filter. While 3D interactions
were used in past static systems (e.g. [56]), most designs have
so far suggested treating the surface of the table on which
the devices are placed as a continuous 2D interaction space
(e.g. [65]). We believe that future work should provide extensive support for 3D interactions between mobile and non-mobile devices available in the environment to enable ad-hoc sensemaking. This design direction furthers the National Criminal Intelligence Plan [21], which suggests enabling novel technologies for colocated sensemaking between analysts. In
crime-solving, investigators collect evidence in the field and
hold that data privately in their mobile devices. RAMPARTS
could enable them to share and analyze data in the field by
providing an easy, accessible way to do sensemaking using
technology they already have in hand. Consequently, we see
an emerging need to investigate how multi-device systems
could perform in crime-solving field work.
Further, designers should consider designing for different
phases of analysis. We found that spatially aware mobile
devices involve fewer movements of notes towards the latter
half of the task. Future tools could appropriate user activity
logs to identify different phases of analysis. Tools could help
generate relevant recommendations when foraging, and support alignment for storytelling during sensemaking. For
example, foraging could be supported through recommendations based on Natural Language Processing of text. Alternatively, sensemaking could be supported through generating
alternative storylines that could explain the events. Finally,
identifying patterns of successful behaviour might help create
tools that could train the analysts to best use the time at hand
and potentially reduce the large number of unsolved crime cases awaiting attention.
CONCLUSIONS
In this paper, we presented the design of RAMPARTS, a
sensemaking tool that supports spatial awareness for mobile
devices in colocated collaborative environments. We presented the findings from an experiment in which pairs of
participants played the role of crime analysts collaborating
to identify a criminal. We found that RAMPARTS's spatial awareness decreased task completion time when compared
to a paper-based system, without any adverse effect on task
completion time compared to a tabletop, and without increasing perceived cognitive workload. We also discovered that
using RAMPARTS resulted in significantly decreased note
manipulation in the later stages of sensemaking, suggesting
a different rhythm of sensemaking being pursued in RAMPARTS.
Acknowledgements
This research was supported by the Adlerbertska Research
Foundation and the CELTIC Mediated Effective Remote Collaboration (MERCO) project C2013/1-3. We thank Susan R.
Fussell for her invaluable insights. We thank teams in Łódź
(Andrzej, Tomek and Tomek) and Stuttgart (Robin) for help
in conducting and analysing the study.
REFERENCES
1. Aruna D. Balakrishnan, Susan R. Fussell, and Sara
Kiesler. 2008. Do Visualizations Improve Synchronous
Remote Collaboration?. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’08). ACM, New York, NY, USA, 1227–1236.
DOI:http://dx.doi.org/10.1145/1357054.1357246
2. Rafael Ballagas, Michael Rohs, Jennifer G Sheridan,
and Jan Borchers. 2004. BYOD: Bring Your Own
Device. Proceedings of the 2004 Workshop on
Ubiquitous Display Environments (2004).
3. Eric A. Bier, Stuart K. Card, and John W. Bodnar. 2010.
Principles and Tools for Collaborative Entity-Based
Intelligence Analysis. IEEE Transactions on
Visualization and Computer Graphics 16, 2 (March
2010), 178–191. DOI:
http://dx.doi.org/10.1109/TVCG.2009.104
4. Nicholas Chen, François Guimbretière, and Abigail
Sellen. 2013. Graduate Student Use of a Multi-slate
Reading System. In Proceedings of the 2013 SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’13). ACM, New York, NY, USA, 1799–1808.
DOI:http://dx.doi.org/10.1145/2470654.2466237
5. George Chin, Jr., Olga A. Kuchar, and Katherine E.
Wolf. 2009. Exploring the Analytical Processes of
Intelligence Analysts. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’09). ACM, New York, NY, USA, 11–20. DOI:
http://dx.doi.org/10.1145/1518701.1518704
6. Haeyong Chung, Seungwon Yang, Naveed Massjouni,
Christopher Andrews, Rahul Kanna, and Chris North.
2010. Vizcept: Supporting synchronous collaboration
for constructing visualizations in intelligence analysis.
In IEEE Symposium on Visual Analytics Science and
Technology (VAST ’10). IEEE, 107–114. DOI:
http://dx.doi.org/10.1109/VAST.2010.5652932
7. Gregorio Convertino, Helena M. Mentis, Mary Beth
Rosson, Aleksandra Slavkovic, and John M. Carroll.
2009. Supporting Content and Process Common Ground
in Computer-supported Teamwork. In Proceedings of
the SIGCHI Conference on Human Factors in
Computing Systems (CHI ’09). ACM, New York, NY,
USA, 2339–2348. DOI:
http://dx.doi.org/10.1145/1518701.1519059
8. Beth Davey. 1983. Think aloud: Modeling the cognitive
processes of reading comprehension. Journal of Reading
(1983), 44–47.
9. Sebastian Denef, Leonardo Ramirez, Tobias Dyrks, and
Gunnar Stevens. 2008. Handy Navigation in
Ever-changing Spaces: An Ethnographic Study of
Firefighting Practices. In Proceedings of the 7th ACM
Conference on Designing Interactive Systems (DIS ’08).
ACM, New York, NY, USA, 184–192. DOI:
http://dx.doi.org/10.1145/1394445.1394465
10. Kristie Fisher, Scott Counts, and Aniket Kittur. 2012.
Distributed Sensemaking: Improving Sensemaking by
Leveraging the Efforts of Previous Users. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’12). ACM, New
York, NY, USA, 247–256. DOI:
http://dx.doi.org/10.1145/2207676.2207711
11. Morten Fjeld, Paweł Woźniak, Josh Cowls, and Bonnie
Nardi. 2015. Ad hoc encounters with big data: Engaging
citizens in conversations around tabletops. First Monday
20, 2 (Feb. 2015). DOI:
http://dx.doi.org/10.5210/fm.v20i2.5611
12. Darren Gergle, Robert E Kraut, and Susan R Fussell.
2004. Language efficiency and visual technology
minimizing collaborative effort with visual information.
Journal of language and social psychology 23, 4 (2004),
491–517.
13. Steven Gottlieb, Sheldon I Arenberg, Raj Singh, and
others. 1994. Crime analysis: From first report to final
arrest. Alpha Publishing Montclair, CA.
14. Nitesh Goyal. 2015. Designing for Collaborative
Sensemaking: Using Expert & Non-Expert Crowd.
arXiv preprint arXiv:1511.06053 (2015).
15. Nitesh Goyal and Susan R Fussell. 2015. Designing for
Collaborative Sensemaking: Leveraging Human
Cognition For Complex Tasks. arXiv preprint
arXiv:1511.05737 (2015).
16. Nitesh Goyal and Susan R Fussell. 2016. Effects of
Sensemaking Translucence on Distributed Collaborative
Analysis. (2016). http://www.cs.cornell.edu/
~ngoyal/Goyal_CSCW2016_preprint.pdf
17. Nitesh Goyal, Gilly Leshed, Dan Cosley, and Susan R.
Fussell. 2014. Effects of Implicit Sharing in
Collaborative Analysis. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’14). ACM, New York, NY, USA, 129–138. DOI:
http://dx.doi.org/10.1145/2556288.2557229
18. Nitesh Goyal, Gilly Leshed, and Susan R. Fussell.
2013a. Effects of Visualization and Note-taking on
Sensemaking and Analysis. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems (CHI ’13). ACM, New York, NY, USA,
2721–2724. DOI:
http://dx.doi.org/10.1145/2470654.2481376
19. Nitesh Goyal, Gilly Leshed, and Susan R. Fussell.
2013b. Leveraging Partner’s Insights for Distributed
Collaborative Sensemaking. In Proceedings of the 2013
Conference on Computer Supported Cooperative Work
Companion (CSCW ’13). ACM, New York, NY, USA,
15–18. DOI:
http://dx.doi.org/10.1145/2441955.2441960
20. Saul Greenberg, Nicolai Marquardt, Till Ballendat, Rob
Diaz-Marino, and Miaosen Wang. 2011. Proxemic
Interactions: The New Ubicomp? interactions 18, 1
(Jan. 2011), 42–50. DOI:
http://dx.doi.org/10.1145/1897239.1897250
21. Global Intelligence Working Group and others. 2003.
National criminal intelligence sharing plan. US
Department of Justice 12 (2003), 2011.
22. Jonathan Haber, Miguel A. Nacenta, and Sheelagh
Carpendale. 2014. Paper vs. Tablets: The Effect of
Document Media in Co-located Collaborative Work. In
Proceedings of the International Working Conference on
Advanced Visual Interfaces (AVI ’14). ACM, New York,
NY, USA, 89–96. DOI:
http://dx.doi.org/10.1145/2598153.2598170
23. Peter Hamilton and Daniel J. Wigdor. 2014. Conductor:
Enabling and Understanding Cross-device Interaction.
In Proceedings of the 2014 SIGCHI Conference on
Human Factors in Computing Systems (CHI ’14). ACM,
New York, NY, USA, 2773–2782. DOI:
http://dx.doi.org/10.1145/2556288.2557170
24. Sandra G Hart. 2006. NASA-task load index
(NASA-TLX); 20 years later. In Proceedings of the
Human Factors and Ergonomics Society Annual
Meeting, Vol. 50. Sage Publications, 904–908. DOI:
http://dx.doi.org/10.1177/154193120605000909
25. Sandra G. Hart and Lowell E. Staveland. 1988.
Development of NASA-TLX (Task Load Index):
Results of Empirical and Theoretical Research. In
Human Mental Workload, Peter A. Hancock and
Najmedin Meshkati (Eds.). Advances in Psychology,
Vol. 52. North-Holland, 139–183. DOI:
http://dx.doi.org/10.1016/S0166-4115(08)62386-9
26. Khalad Hasan, David Ahlström, and Pourang Irani.
2013. Ad-binning: Leveraging Around-device Space for
Storing, Browsing and Retrieving Mobile Device
Content. In Proceedings of the 2013 SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’13). ACM, New York, NY, USA, 899–908. DOI:
http://dx.doi.org/10.1145/2470654.2466115
27. Jeffrey Heer and Maneesh Agrawala. 2008. Design
considerations for collaborative visual analytics.
Information Visualization 7, 1 (2008), 49–62.
28. Jeffrey Heer, Fernanda B. Viégas, and Martin
Wattenberg. 2009. Voyagers and Voyeurs: Supporting
Asynchronous Collaborative Visualization.
Communications of the ACM 52, 1 (Jan. 2009), 87–97.
DOI:http://dx.doi.org/10.1145/1435417.1435439
29. Tobias Hesselmann, Niels Henze, and Susanne Boll.
2010. FlashLight: Optical Communication Between
Mobile Phones and Interactive Tabletops. In ACM
International Conference on Interactive Tabletops and
Surfaces (ITS ’10). ACM, New York, NY, USA,
135–138. DOI:
http://dx.doi.org/10.1145/1936652.1936679
30. Uta Hinrichs, Sheelagh Carpendale, and Stacey D. Scott.
2006. Evaluating the Effects of Fluid Interface
Components on Tabletop Collaboration. In Proceedings
of the 2006 Working Conference on Advanced Visual
Interfaces (AVI ’06). ACM, New York, NY, USA, 27–34.
DOI:http://dx.doi.org/10.1145/1133265.1133272
31. Petra Isenberg, Danyel Fisher, Sharoda A. Paul,
Meredith Ringel Morris, Kori Inkpen, and Mary
Czerwinski. 2012. Co-Located Collaborative Visual
Analytics Around a Tabletop Display. IEEE
Transactions on Visualization and Computer Graphics
18, 5 (May 2012), 689–702. DOI:
http://dx.doi.org/10.1109/TVCG.2011.287
32. Jeroen Janssen, Gijsbert Erkens, Gellof Kanselaar, and
Jos Jaspers. 2007. Visualization of participation: Does it
contribute to successful computer-supported
collaborative learning? Computers & Education 49, 4
(2007), 1037–1065. DOI:
http://dx.doi.org/10.1016/j.compedu.2006.01.004
33. Sirkka L Jarvenpaa and D Sandy Staples. 2000. The use
of collaborative electronic media for information
sharing: an exploratory study of determinants. The
Journal of Strategic Information Systems 9, 2 (2000),
129–154. DOI:
http://dx.doi.org/10.1016/S0963-8687(00)00042-1
34. Tero Jokela, Jarno Ojala, and Thomas Olsson. 2015. A
Diary Study on Combining Multiple Information
Devices in Everyday Activities and Tasks. In
Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’15). ACM,
New York, NY, USA, 3903–3912. DOI:
http://dx.doi.org/10.1145/2702123.2702211
35. Ruogu Kang and Sara Kiesler. 2012. Do Collaborators’
Annotations Help or Hurt Asynchronous Analysis? In
Proceedings of the ACM Conference on Computer
Supported Cooperative Work Companion (CSCW ’12).
ACM, New York, NY, USA, 123–126. DOI:
http://dx.doi.org/10.1145/2141512.2141558
36. Paul Keel. 2006. Collaborative Visual Analytics:
Inferring from the Spatial Organization and
Collaborative Use of Information. In 2006 IEEE
Symposium On Visual Analytics And Technology. IEEE,
137–144. DOI:
http://dx.doi.org/10.1109/VAST.2006.261415
37. Jonas Landgren and Urban Nulden. 2007. A Study of
Emergency Response Work: Patterns of Mobile Phone
Interaction. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems (CHI ’07).
ACM, New York, NY, USA, 1323–1332. DOI:
http://dx.doi.org/10.1145/1240624.1240824
38. Ming Li and Leif Kobbelt. 2012. Dynamic Tiling
Display: Building an Interactive Display Surface Using
Multiple Mobile Devices. In Proceedings of the 11th
International Conference on Mobile and Ubiquitous
Multimedia (MUM ’12). ACM, New York, NY, USA,
Article 24, 4 pages. DOI:
http://dx.doi.org/10.1145/2406367.2406397
39. Andrés Lucero, Jussi Holopainen, and Tero Jokela.
2011. Pass-them-around: collaborative use of mobile
phones for photo sharing. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’11). ACM Press, New York, 1787. DOI:
http://dx.doi.org/10.1145/1978942.1979201
40. Andrés Lucero, Tero Jokela, Arto Palin, Viljakaisa
Aaltonen, and Jari Nikara. 2012. EasyGroups: binding
mobile devices for collaborative interactions. In
Extended Abstracts of the SIGCHI Conference on
Human Factors in Computing Systems (CHI EA ’12).
ACM Press, New York, 2189. DOI:
http://dx.doi.org/10.1145/2212776.2223774
41. Andrés Lucero, Jaakko Keränen, and Hannu Korhonen.
2010. Collaborative Use of Mobile Phones for
Brainstorming. In Proceedings of the 12th International
Conference on Human Computer Interaction with
Mobile Devices and Services (MobileHCI ’10). ACM,
New York, NY, USA, 337–340. DOI:
http://dx.doi.org/10.1145/1851600.1851659
42. Sebastian Marwecki, Roman Rädle, and Harald Reiterer.
2013. Encouraging Collaboration in Hybrid Therapy
Games for Autistic Children. In CHI ’13 Extended
Abstracts on Human Factors in Computing Systems
(CHI EA ’13). ACM, New York, NY, USA, 469–474.
DOI:http://dx.doi.org/10.1145/2468356.2468439
43. Robert Morawa, Tom Horak, Ulrike Kister, Annett
Mitschick, and Raimund Dachselt. 2014. Combining
Timeline and Graph Visualization. In Proceedings of the
Ninth ACM International Conference on Interactive
Tabletops and Surfaces (ITS ’14). ACM, New York, NY,
USA, 345–350. DOI:
http://dx.doi.org/10.1145/2669485.2669544
44. Miguel A. Nacenta, Mikkel R. Jakobsen, Remy
Dautriche, Uta Hinrichs, Marian Dörk, Jonathan Haber,
and Sheelagh Carpendale. 2012. The LunchTable: A
Multi-user, Multi-display System for Information
Sharing in Casual Group Interactions. In Proceedings of
the International Symposium on Pervasive Displays
(PerDis ’12). ACM, New York, NY, USA, Article 18, 6
pages. DOI:
http://dx.doi.org/10.1145/2307798.2307816
45. Greg Newman, Andrea Wiggins, Alycia Crall, Eric
Graham, Sarah Newman, and Kevin Crowston. 2012.
The future of citizen science: emerging technologies and
shifting paradigms. Frontiers in Ecology and the
Environment 10, 6 (2012), 298–304.
46. Janni Nielsen, Torkil Clemmensen, and Carsten Yssing.
2002. Getting Access to What Goes on in People’s
Heads?: Reflections on the Think-aloud Technique. In
Proceedings of the Second Nordic Conference on
Human-computer Interaction (NordiCHI ’02). ACM,
New York, NY, USA, 101–110. DOI:
http://dx.doi.org/10.1145/572020.572033
47. Antti Oulasvirta. 2008. FEATURE: When Users “Do”
the Ubicomp. interactions 15, 2 (March 2008), 6–9.
DOI:http://dx.doi.org/10.1145/1340961.1340963
48. Sharoda A. Paul and Meredith Ringel Morris. 2009.
CoSense: Enhancing Sensemaking for Collaborative
Web Search. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems (CHI ’09).
ACM, New York, NY, USA, 1771–1780. DOI:
http://dx.doi.org/10.1145/1518701.1518974
49. Sharoda A. Paul and Madhu C. Reddy. 2010.
Understanding Together: Sensemaking in Collaborative
Information Seeking. In Proceedings of the 2010 ACM
Conference on Computer Supported Cooperative Work
(CSCW ’10). ACM, New York, NY, USA, 321–330.
DOI:http://dx.doi.org/10.1145/1718918.1718976
50. Tommaso Piazza, Morten Fjeld, Gonzalo Ramos,
Asim Evren Yantaç, and Shengdong Zhao. 2013. Holy
Smartphones and Tablets, Batman!: Mobile Interaction’s
Dynamic Duo. In Proceedings of the 11th Asia Pacific
Conference on Computer Human Interaction (APCHI
’13). ACM, New York, NY, USA, 63–72. DOI:
http://dx.doi.org/10.1145/2525194.2525205
51. Peter Pirolli and Stuart Card. 2005. The sensemaking
process and leverage points for analyst technology as
identified through cognitive task analysis. In
Proceedings of international conference on intelligence
analysis, Vol. 5. 2–4.
52. Roman Rädle, Hans-Christian Jetter, Nicolai Marquardt,
Harald Reiterer, and Yvonne Rogers. 2014.
HuddleLamp: Spatially-Aware Mobile Displays for
Ad-hoc Around-the-Table Collaboration. In Proceedings
of the Ninth ACM International Conference on
Interactive Tabletops and Surfaces (ITS ’14). ACM
Press, New York, 45–54. DOI:
http://dx.doi.org/10.1145/2669485.2669500
53. Roman Rädle, Hans-Christian Jetter, Mario Schreiner,
Zhihao Lu, Harald Reiterer, and Yvonne Rogers. 2015.
Spatially-aware or Spatially-agnostic? In Proceedings
of the ACM Conference on Human Factors in
Computing Systems (CHI ’15). ACM Press, New York,
New York, USA, 3913–3922. DOI:
http://dx.doi.org/10.1145/2702123.2702287
54. Stephanie Santosa and Daniel Wigdor. 2013. A Field
Study of Multi-device Workflows in Distributed
Workspaces. In Proceedings of the ACM International
Joint Conference on Pervasive and Ubiquitous
Computing (UbiComp ’13). ACM, New York, NY, USA,
63–72. DOI:
http://dx.doi.org/10.1145/2493432.2493476
55. Stacey D. Scott, M. Sheelagh T. Carpendale, and
Kori M. Inkpen. 2004. Territoriality in Collaborative
Tabletop Workspaces. In Proceedings of the 2004 ACM
Conference on Computer Supported Cooperative Work
(CSCW ’04). ACM, New York, NY, USA, 294–303.
DOI:http://dx.doi.org/10.1145/1031607.1031655
56. Martin Spindler. 2012. Spatially Aware Tangible
Display Interaction in a Tabletop Environment. In
Proceedings of the 2012 ACM International Conference
on Interactive Tabletops and Surfaces (ITS ’12). ACM,
New York, NY, USA, 277. DOI:
http://dx.doi.org/10.1145/2396636.2396679
57. Martin Spindler, Wolfgang Büschel, Charlotte Winkler,
and Raimund Dachselt. 2014. Tangible displays for the
masses: Spatial interaction with handheld displays by
using consumer depth cameras. Personal and
Ubiquitous Computing 18 (2014), 1213–1225. DOI:
http://dx.doi.org/10.1007/s00779-013-0730-7
58. Gene Stanford and Barbara Dodds Stanford. Learning
Discussion Skills Through Games. Citation Press. 75
pages.
59. Anthony Tang, Melanie Tory, Barry Po, Petra Neumann,
and Sheelagh Carpendale. 2006. Collaborative Coupling
over Tabletop Displays. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’06). ACM, New York, NY, USA, 1181–1190.
DOI:http://dx.doi.org/10.1145/1124772.1124950
60. James R. Wallace, Stacey D. Scott, and Carolyn G.
MacGregor. 2013. Collaborative Sensemaking on a
Digital Tabletop and Personal Tablets: Prioritization,
Comparisons, and Tableaux. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems (CHI ’13). ACM, New York, NY, USA,
3345–3354. DOI:
http://dx.doi.org/10.1145/2470654.2466458
61. James R. Wallace, Daniel Vogel, and Edward Lank.
2014. Effect of Bezel Presence and Width on Visual
Search. In Proceedings of the International Symposium
on Pervasive Displays (PerDis ’14). ACM, New York,
NY, USA, Article 118, 6 pages. DOI:
http://dx.doi.org/10.1145/2611009.2611019
62. C. Weaver. 2007. Is Coordination a Means to
Collaboration?. In Fifth International Conference on
Coordinated and Multiple Views in Exploratory
Visualization (CMV ’07). 80–84. DOI:
http://dx.doi.org/10.1109/CMV.2007.15
63. Sebastian Weise, John Hardy, Pragya Agarwal, Paul
Coulton, Adrian Friday, and Mike Chiasson. 2012.
Democratizing Ubiquitous Computing: A Right for
Locality. In Proceedings of the 2012 ACM Conference
on Ubiquitous Computing (UbiComp ’12). ACM, New
York, NY, USA, 521–530. DOI:
http://dx.doi.org/10.1145/2370216.2370293
64. Paweł Woźniak, Lars Lischke, Benjamin Schmidt,
Shengdong Zhao, and Morten Fjeld. 2014a. Thaddeus:
A Dual Device Interaction Space for Exploring
Information Visualisation. In Proceedings of the 8th
Nordic Conference on Human-Computer Interaction:
Fun, Fast, Foundational (NordiCHI ’14). ACM, New
York, NY, USA, 41–50. DOI:
http://dx.doi.org/10.1145/2639189.2639237
65. Paweł Woźniak, Benjamin Schmidt, Lars Lischke,
Zlatko Franjcic, Asim Evren Yantaç, and Morten Fjeld.
2014b. MochaTop: Building Ad-hoc Data Spaces with
Multiple Devices. In CHI ’14 Extended Abstracts on
Human Factors in Computing Systems (CHI EA ’14).
ACM, New York, NY, USA, 2329–2334. DOI:
http://dx.doi.org/10.1145/2559206.2581232
66. William Wright, David Schroh, Pascale Proulx, Alex
Skaburskis, and Brian Cort. 2006. The Sandbox for
Analysis: Concepts and Methods. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems (CHI ’06). ACM, New York, NY, USA,
801–810. DOI:
http://dx.doi.org/10.1145/1124772.1124890
67. Kathryn Zickuhr and Lee Rainie. 2013. Tablet
ownership 2013. Pew Research Center report,
pewinternet.com (2013), 11. http:
//fullrss.net/r/http/pewinternet.org/˜/media/
Files/Reports/2013/PIP_Tabletownership2013.pdf