Situated Analytics: Where to Put the Abstract Data?
Neven A. M. ElSayed, Ross T. Smith, Bruce H. Thomas
Wearable Computer Lab, University of South Australia
[email protected], [email protected], [email protected]
Figure 1: Chameleon technique for abstract data representation, employed in a shopping scenario to gather nutrition-based information. A user assigns a serving amount with an AR slider, and the abstract data is shown on the user's hand with a colour encoding that reflects a daily calorie budget. (a) The user assigns a serving size of 25%; the hand is recoloured green, indicating that consumption is below the daily calorie budget. (b) Updating the selection to 50% changes the hand's colour to yellow, indicating that calorie consumption will be close to the daily allowance. (c) Finally, increasing the serving size to 100% turns the hand red, indicating that the daily calorie budget has been exceeded.
ABSTRACT
Visual clutter from the background is one of the main challenges facing information visualisation in augmented reality. Abstract representation is the overall information visualisation that results from user interaction and data aggregation. "Where should the abstract data be represented in the user's view?" is one of the key questions for the emerging field of Situated Analytics. This paper presents "Chameleon" and "Midas", abstract visualisation techniques for augmented reality applications that address the clutter challenge. Chameleon employs the user's hands as an abstract representation canvas, and Midas allows users to assign the canvas to physical objects by touching them. Both techniques offer a potential solution for blending abstract representations into interactive in-situ augmented reality applications.
Author Keywords
Situated Analytics; Immersive Analytics; Information Visualization; Abstract Representation; Augmented Reality; Blended Space; Blended Interaction; Scene Manipulations.

ACM Classification Keywords
H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Artificial, augmented and virtual realities; I.3.6 [Computing Methodologies]: Computer Graphics—Methodology and Techniques—Interaction Techniques.
INTRODUCTION
Many existing Augmented Reality (AR) visualisation approaches investigate the registration of virtual content in the real scene [1, 2]. Placing abstract visualisation in the real scene is challenging, as it can represent data that has no spatial relationship with the scene. The location of the abstract visualisation might increase visual clutter because of the disconnection between the real context and the visualisation.
Traditional AR data representation approaches use image segmentation [3] and surface mapping [4] to calculate optimal zones within the real scene for virtual content overlays. The resulting image analysis of the real scene dynamically registers the data as an overlaid annotation in the optimal location. This dynamically changing location, however, may cause perceptual confusion for the user, who must continually follow the updating position.
With the increasing interest in in-situ interaction [5], abstract data visualisation has become one of the key components of interactive AR visualisation systems [6]. Recently, Situated Analytics (SA) was introduced as a method of analytical interactive visualisation in AR [7]. SA enables users to interact with the AR space, both real and virtual, helping them make better-informed decisions based on the generated visualisation. ElSayed et al. [8, 9] classified the visual components of interactive visualisation into two main types: situated and abstract. Situated visualisation represents virtual content that is related to the physical objects in the real scene; it is the more traditional AR approach to virtual information overlays. Abstract visualisation represents the overall information generated by the user's interaction with the real
scene. Traditionally this form of information has been
presented as screen relative information [10].
Two challenging parameters of AR interaction are, first, the large interaction space (the real scene) and, second, the increasing need for the abstract representation to track the user's overall data in this extended interaction space. We need to address a set of requirements for abstract data representation in AR that does not increase visual clutter for the user. The research questions addressed in this paper can be summarised as follows:
• Where should we put abstract data?
• How and when should we present the abstract visualisation?
• How can we make the abstract visualisation easy to understand and perceive?
In this paper, we present Chameleon and Midas, blended abstract representation techniques for SA. Chameleon employs the user's hands as a canvas. The hands are selected because using them minimally reduces the contextual information in the remaining interaction space. The main advantage of Chameleon is that it retains the contextual features for the user's observation and decisions. The Midas approach allows users to assign the abstract visualisation's canvas to a physical object by touching that object. This technique enables a user to assign the canvas to a zone that might have a spatial cue or a better view from the user's perspective.
BACKGROUND
Virtual data registration and overlay are key features of AR visualisation, affecting information understanding and perception [11-13]. A major area of AR investigation is the development of techniques for annotation management [14-16]. One of the challenges is to overcome cluttered backgrounds in AR and to select a location where the virtual data does not conflict with the visual background while retaining a spatial relationship between the virtual content and the real scene. ElSayed et al. [8] have categorised AR information representations into two types: situated and abstract. Situated visualisation augments the real scene with virtual annotations and an explicit representation of the spatial relationship. Abstract representations overlay the overall information without cluttering the real scene, but they do not have a spatial relationship with the real scene.

Most of the existing situated visualisation approaches address the clutter challenge by calculating the registration location so as to enhance visual perception and reduce visual clutter. Azuma and Furmanski [17] introduced one of the initial clustering approaches in AR. They developed algorithms to reduce label overlap and to merge duplicated labels, and demonstrated that the clustered text labels are easier to read than the original cluttered view. Azuma and Furmanski also used a layout technique to arrange the clustering output. Bell et al. [10] proposed a view management technique for tree data, using a combination of filtering and displacement approaches to lay out the data. The visualisation is based on the user's view and the data hierarchy, altering the annotations' size and level of detail based on their distance from the user's view. Recently, Tatzgern et al. [18] proposed Hedgehog labelling, view management techniques that reduce the visual clutter of AR annotations for moving objects, employing pole-based and plane-based techniques. Recent techniques have tried not only to register the data but also to blend it into the real scene, using surface mapping [4], object edge detection [3], and scene manipulation [19, 20].

Abstract information does not suffer from the spatial relationship challenge, as the information may be placed in many different locations viewable to the user. However, abstract information is affected by the clutter challenge, as the background interferes with the user's perception and might make the visualisation difficult to understand. Abstract visualisation is not well investigated in the AR research community. Most of the existing approaches are space-filling, finding either a less cluttered space or the space most obvious to the user's perception. White et al.'s [21] visualisation investigations are considered one of the early approaches to representing abstract information in AR, showing the CO2 level in the air. Kalkofen et al. [2] classify this approach as context-based information. Overlaying information on a separate layer [22] is a common method for abstract visualisation; however, this approach presents a difficult transition for the user between the presented information and the overall information.
CHAMELEON
Chameleon employs a diminished reality [23] occupation approach, using the user's hand as a visualisation canvas. The user's hand is chosen as the canvas because it has a lower priority than the contextual features of the physical objects of interest. The technique blends a colour-coded value onto the user's hand. Figure 1 shows the use of the Chameleon technique in a shopping context, demonstrating the user assigning a serving quantity of a product with a virtual slider, similar to opportunistic tangible user interfaces [24] and ephemeral interaction [25]. The abstract visualisation represents the calorie level calculated from the serving amount and the user's pre-stored total calorie budget. The colours start from green for a healthy serving amount, passing through yellow and orange, to red for an unhealthy serving amount. Figure 1-a shows the user assigning a serving amount of 25% of the box's content, blending a green colour onto the user's hand to reflect that the assigned serving amount is healthy according to the stored nutrition functions and the user's fitness goal. Figure 1-b shows the user increasing the serving amount to 50% of the box's content, which changes the user's hand from green to yellow, showing that the assigned value will consume a large amount of the total calorie budget. Finally, Figure 1-c shows the user increasing the serving amount to the entire box, which changes the user's hand to red, showing that the assigned value has exceeded the pre-stored calorie budget.
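To make the colour encoding concrete, the sketch below (in Python) maps an assigned serving amount to the green-yellow-orange-red scale described above. The paper does not specify exact thresholds or calorie figures, so the daily budget, the box's calorie content and the cut-off ratios used here are illustrative assumptions only.

```python
# Illustrative sketch of the Chameleon colour encoding (not the authors' code).
# Budget, consumed calories, box calories and threshold ratios are assumed
# values, chosen so the output matches the Figure 1 scenario.

def calorie_colour(serving_fraction, calories_per_box, consumed_today, daily_budget=2000.0):
    """Map an assigned serving amount to an RGB colour code (0-255 per channel)."""
    projected = consumed_today + serving_fraction * calories_per_box
    ratio = projected / daily_budget
    if ratio < 0.5:        # comfortably within the daily budget
        return (0, 200, 0)        # green
    elif ratio < 0.9:      # approaching the daily allowance
        return (230, 220, 0)      # yellow
    elif ratio <= 1.0:     # at the limit
        return (255, 140, 0)      # orange
    else:                  # budget exceeded
        return (220, 0, 0)        # red

# Serving sizes from Figure 1: 25% -> green, 50% -> yellow, 100% -> red.
for fraction in (0.25, 0.50, 1.00):
    print(fraction, calorie_colour(fraction, calories_per_box=1600, consumed_today=500))
```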
We implemented the Chameleon technique using the OpenGL Shading Language in Unity 3D. Two shader algorithms were developed, for hand segmentation and for colour blending. The hand segmentation is implemented with an OpenGL shader for fast skin-colour detection in real time. The hand extraction algorithm converts the camera input stream from RGB to YUV colour space and extracts the user's hand based on skin colour. The skin colours are assigned using a calibration tool, as shown in Figure 2. The calibration tool has a set of sliders used to assign the YUV values of the user's hand colour, with visual feedback showing the regions extracted for the assigned values. In the future, we will investigate automatic hand detection and skin detection algorithms; simple detection techniques were sufficient to establish the interactive visualisation techniques.

Figure 2: Hand extraction using colour segmentation.
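The authors implement this step as a GLSL shader inside Unity; the fragment below is only a CPU-side sketch of the same skin-segmentation idea, written in Python with OpenCV. The YCrCb bounds stand in for the values a user would set with the calibration sliders of Figure 2 and are assumptions, not values taken from the paper.

```python
# CPU-side sketch of the hand-extraction step (illustrative, not the authors' shader).
import cv2
import numpy as np

# Calibrated skin-colour bounds in YCrCb space (placeholder values a user
# would otherwise set with the calibration sliders).
SKIN_LOWER = np.array([0, 135, 85], dtype=np.uint8)
SKIN_UPPER = np.array([255, 180, 135], dtype=np.uint8)

def extract_hand_mask(frame_bgr):
    """Return a binary mask of pixels whose colour falls inside the skin range."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)   # RGB -> luma/chroma
    mask = cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)      # per-channel threshold
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove segmentation noise
```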
The colour-blending algorithm merges a colour-coded value with the colour of the skin on the user's hand. The algorithm uses the green and red channels to alter the hand's RGB output colour. The developed techniques use a colour-coding approach; however, richer visualisations could be developed to display more graphical structures and to support data expansion.
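A minimal sketch of this blending step is shown below, again as CPU-side Python rather than the authors' shader code. The blend weight and the choice to leave the blue channel untouched are assumptions made for illustration; the paper only states that the red and green channels are altered.

```python
# Illustrative colour-blending sketch: inside the hand mask, push the red and
# green channels towards the colour-coded value while leaving blue unchanged.
# The blend weight is an assumption; the paper does not specify one.
import numpy as np

def blend_colour_code(frame_rgb, hand_mask, colour_code, weight=0.6):
    """Blend the (R, G) components of `colour_code` into masked pixels of an RGB uint8 image."""
    out = frame_rgb.astype(np.float32)
    mask = hand_mask.astype(bool)
    for channel in (0, 1):  # 0 = red, 1 = green; blue left untouched
        out[..., channel][mask] = ((1.0 - weight) * out[..., channel][mask]
                                   + weight * colour_code[channel])
    return np.clip(out, 0, 255).astype(np.uint8)
```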
Figure 3: Midas Touch abstract data canvas selection. (a) The Chameleon technique. (b) Moving the abstract canvas to a bag with Midas Touch. (c) The user assigns their arm to expand the abstract information of the bag, showing a detailed breakdown of the information.
MIDAS TOUCH, A STEP FURTHER
Existing AR solutions register virtual annotations based on image analysis [4, 14], calculating the best location for the virtual augmentation according to the spatial relationship between the virtual content and the real scene. As described in the previous section, the Chameleon approach uses lower-priority contextual zones, such as the user's hand, for information augmentation. However, the user's hand is not always within the user's view, and this canvas might be too small for a particular data representation.
Figure 4: Midas Touch implementation. (a) The abstract visualisation is blended onto the user's hand (Chameleon). (b) The user touches the bag to assign a new canvas for the abstract visualisation.
We present Midas Touch, an extension of the Chameleon technique that allows users to assign physical objects as the abstract representation canvas by touching the objects' surface. Figure 3 depicts the concept of Midas Touch in a shopping context: a user picks up products in a supermarket, and the abstract information is reflected on the user's hand (Figure 3-a). The user walks between the supermarket aisles, putting products into a bag (Figure 3-b). The user then decides to move the representation canvas from their hand to the bag for better perception, and touches the bag to transfer the abstract representation canvas to it.
The orange blended colour of the shopping bag reflects that the total calories of the products have exceeded the pre-stored calorie budget. The user then investigates how the calorie total has exceeded the allowance by sliding a finger along their left arm, exploring a detailed view of the bag's abstract representation (Figure 3-c). The detailed view on the user's arm shows a breakdown chart of the consumed calories, using colour icons to relate the graphical representation to the products in the bag.

Figure 4 shows an initial implementation of Midas Touch using the same colour segmentation used for Chameleon. The Vuforia SDK was used for collision detection between the hand and the shopping bag; the collision is calculated between the user's hand and the selected products in the shopping bag.
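The hand-over logic can be summarised as a small state change triggered by the touch event. The sketch below is framework-agnostic Python; in the authors' implementation the event comes from Vuforia collision detection, and the class and method names used here are hypothetical.

```python
# Framework-agnostic sketch of the Midas Touch canvas hand-over (illustrative only).
# An abstract `on_collision` callback stands in for the Vuforia collision event.

class AbstractCanvasManager:
    """Tracks which physical surface currently hosts the abstract visualisation."""

    def __init__(self):
        self.canvas_target = "hand"          # Chameleon: start on the user's hand
        self.hand_visualisation_enabled = True

    def on_collision(self, touched_object_id):
        """Called when the tracked hand touches a tracked physical object."""
        self.canvas_target = touched_object_id
        # Turn off the Chameleon hand rendering once the canvas moves,
        # as described for Figure 4-b.
        self.hand_visualisation_enabled = False

    def render_target(self):
        return self.canvas_target

manager = AbstractCanvasManager()
manager.on_collision("shopping_bag")         # the user touches the bag
print(manager.render_target())               # -> "shopping_bag"
```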
Figure 4-a shows the user's hand colour-coded to represent the calorie status, ranging from green for low energy consumption to red for high consumption. The user then touches the bag to transfer the abstract canvas to it, turning off the Chameleon hand visualisation. As described for the Midas Touch method (Figure 3), the user can also keep both the hand and the bag as canvases for multiple depths of information, for example using the hand for a detailed view and the bag for the overall value (Figure 5). The user holds a box of crackers, which turns the user's hand green, showing that the crackers' energy is within the healthy serving range. However, when the user places the crackers in the bag, the bag turns red, showing that the total calorie count of the products is higher than the budget limit.

With the rapid enhancement of image processing and object recognition for AR systems, there is increasing potential to apply the Chameleon and Midas Touch techniques on platforms such as the Microsoft HoloLens head-worn device.
Figure 5: Midas and Chameleon for multi-depth information visualisation. (a) The hand represents the product-based data. (b) The bag represents the overall consumption.
CONCLUSION AND FUTURE WORK
This paper presents Chameleon and Midas Touch, two blended visualisation approaches for abstract visualisation in Situated Analytics. Chameleon is a diminished reality approach that uses the low-priority zone of the user's hand in the real scene for abstract information visualisation, blending colour-coded values onto the user's hand. Midas Touch enables the user to assign any physical object as an abstract visualisation canvas. Both techniques present a potential solution for embedding abstract visualisation into the scene without increasing visual clutter or hiding the contextual features of the physical objects.
REFERENCES
1. Slay, H., et al. Interaction modes for augmented reality visualization. in Proceedings of the 2001 Asia-Pacific symposium on Information visualisation-Volume 9. 2001. Australian Computer Society, Inc.
2. Kalkofen, D., et al., Visualization techniques for augmented reality. 2011: Springer.
3. Langlotz, T., et al., Next-generation augmented reality browsers: rich, seamless, and adaptive. Proceedings of the IEEE, 2014. 102(2): p. 155-169.
4. Langlotz, T., et al., Robust detection and tracking of annotations for outdoor augmented reality browsing. Computers & Graphics, 2011. 35(4): p. 831-840.
5. Piekarski, W. and B.H. Thomas. Tinmith-Hand: unified user interface technology for mobile outdoor augmented reality and indoor virtual reality. in Proceedings IEEE Virtual Reality. 2002.
6. Chandler, T., et al. Immersive Analytics. in Big Data Visual Analytics (BDVA), 2015. 2015. IEEE.
7. ElSayed, N., et al. Situated Analytics. in Big Data Visual Analytics (BDVA), 2015. 2015. IEEE.
8. ElSayed, N.A., et al. Using augmented reality to support situated analytics. in Virtual Reality (VR), 2015 IEEE. 2015. IEEE.
9. ElSayed, N.A.M., et al., Situated Analytics: Demonstrating immersive analytical tools with Augmented Reality. Journal of Visual Languages & Computing, 2016. 36: p. 13-23.
10. Bell, B., S. Feiner, and T. Höllerer. View management for virtual and augmented reality. in Proceedings of the 14th annual ACM symposium on User interface software and technology. 2001. ACM.
11. Drascic, D. and P. Milgram. Perceptual issues in augmented reality. in Electronic Imaging: Science & Technology. 1996. International Society for Optics and Photonics.
12. Kruijff, E., J.E. Swan II, and S. Feiner. Perceptual issues in augmented reality revisited. in ISMAR. 2010.
13. Pirolli, P. and S. Card. The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. in Proceedings of international conference on intelligence analysis. 2005.
14. Bell, B., T. Höllerer, and S. Feiner. An annotated situation-awareness aid for augmented reality. in Proceedings of the 15th annual ACM symposium on User interface software and technology. 2002. ACM.
15. Feiner, S., et al., A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment. Personal Technologies, 1997. 1(4): p. 208-217.
16. Höllerer, T. and S. Feiner, Mobile augmented reality. Telegeoinformatics: Location-Based Computing and Services. Taylor and Francis Books Ltd., London, UK, 2004. 21.
17. Azuma, R. and C. Furmanski. Evaluating label placement for augmented reality view management. in Proceedings of the 2nd IEEE/ACM International Symposium on Mixed and Augmented Reality. 2003. IEEE Computer Society.
18. Tatzgern, M., et al. Hedgehog labeling: View management techniques for external labels in 3D space. in Virtual Reality (VR), 2014 IEEE. 2014. IEEE.
19. Kalkofen, D., E. Mendez, and D. Schmalstieg. Interactive focus and context visualization for augmented reality. in Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. 2007. IEEE Computer Society.
20. ElSayed, N.A.M., R.T. Smith, and B.H. Thomas. HORUS EYE: See the Invisible Bird and Snake Vision for Augmented Reality Information Visualization. in IEEE International Symposium on Mixed and Augmented Reality. 2016. IEEE.
21. White, S. and S. Feiner. SiteLens: situated visualization techniques for urban site visits. in Proceedings of the SIGCHI conference on human factors in computing systems. 2009. ACM.
22. Back, M., et al. The virtual chocolate factory: Building a real world mixed-reality system for industrial collaboration and control. in Multimedia and Expo (ICME), 2010 IEEE International Conference on. 2010. IEEE.
23. Herling, J. and W. Broll. Advanced self-contained object removal for realizing real-time diminished reality in unconstrained environments. in 9th IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 2010. IEEE.
24. Henderson, S. and S. Feiner, Opportunistic tangible user interfaces for augmented reality. IEEE Transactions on Visualization and Computer Graphics, 2010. 16(1): p. 4-16.
25. Walsh, J.A., S. von Itzstein, and B.H. Thomas. Ephemeral interaction using everyday objects. in Proceedings of the Fifteenth Australasian User Interface Conference-Volume 150. 2014. Australian Computer Society, Inc.