Automated Map Content for Perfect Seams: Cognitive Engineering in
Mixed-Scale and Mixed-Space Applications
Lucas Godfrey*1, William Mackaness†1
1 School of GeoSciences, The University of Edinburgh
January 13, 2017
Summary
We present an approach to extending previous work related to non-lateral geospatial queries on graph
structures such that we can surface single journey views for complex multimodal routes. The focus is
on automating the provision of map content to support cognitively smooth transitions between scales
and levels of schematisation on mobile devices. The aim is to reduce the cognitive load of wayfinding
information while maintaining an element of ‘seamfulness’; that is to say, a user experience of
navigational applications that is not entirely passive and that encourages engagement with the overall
structure and character of the geography.
KEYWORDS: spatial cognitive engineering; automated cartography; mixed-space; multimodal
travel; wayfinding.
1. Introduction
As we travel across complex cities, we use a variety of information types, often accessed on a
small-screen device. Each of these ‘views’ is essentially discrete, and it is left to the cognitive effort of
users to ‘stitch’ the information together so they can make spatial decisions and effectively complete
their journey. This information may include maps at different levels of detail, and maps of alternate
network schematisations that must be accessed by switching to another app or navigating to a
different web page.
different web page. We propose that the cognitive effort of making effective spatial decisions could
be reduced by automating map content such that varying information types are integrated into
heterogeneous views of the geography to provide a balance of functional and contextual information,
taking account of the current context of the map user.
In this paper, we propose applying insights from spatial cognition to the challenge of surfacing
heterogeneous views of the geography across both mixed-scale and mixed-space contexts. By
mixed-scale, we mean the integration of multiple levels of detail into single map views. By mixed-space, we
mean the integration of topological and topographic representations.
1.1. Automated cartography: seamlessness vs perfect seams
When we consider automation, a central consideration is whether we are aiming for a ‘seamless’
experience or a ‘seamful’ experience (Weiser, 1994). Seamlessness implies a passive role for the user,
where computation is almost entirely hidden from view and the user is essentially directed through a
set of instructions (or a process takes place without any interaction at all); in wayfinding, this can be
seen in support for a simple route-orientation strategy as opposed to a higher-level survey-orientation
strategy (Skagerlund et al., 2012). ‘Seamfulness’ implies that the underlying data is
exposed to the user, such that the user has some conception of the mechanics behind why a particular
piece of advice or information was given by the system (Jipp and Ackerman, 2016). In relation to spatial
cognition, Richter et al. (2015) discuss ‘perfect seams’, whereby a system for assisting in spatial
decision-making balances the ‘two components of navigation’ (Montello, 2005): “the near automatic,
subconscious locomotion, and the goal-directed, conscious planning component of wayfinding…”
(Richter et al., 2015). In the context of the graphical display of geospatial data, this ambition
provokes a range of conceptual and technical challenges which indicate a need to re-think both how
spatial data may be formalised in a database, and how this data model may then be surfaced to the
map user in such a way as to support spatial decision-making for complex journeys (e.g. journeys that
traverse multiple forms of transport). It should be noted that the notion of ‘seamfulness’ (Barkhuus
and Polichar, 2011) in the context of automated services usually implies a heterogeneous assemblage of data
and processes that most likely involves multiple underlying databases. In the research presented here
however, a key assumption is the possibility of creating a single database that may be queried to
surface geospatial data from varying transport networks, for example the geometries of both the
metric Cartesian view of the topography, and the topological schematic view that is used to simplify
network representation.
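To make this assumption concrete, a single store might hold both geometries against each network edge. The following sketch is purely illustrative – the class and field names are hypothetical, not a description of an implemented schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class NetworkEdge:
    """One public-transport link stored with two views of its geometry."""
    edge_id: str
    mode: str                        # e.g. "tram", "bus", "walk"
    metric_geometry: List[Point]     # WGS-84 polyline (topographic view)
    schematic_geometry: List[Point]  # normalised coordinates (schematic view)

# A single edge can then be surfaced in either space from one store:
edge = NetworkEdge(
    edge_id="tram-princes-st",
    mode="tram",
    metric_geometry=[(55.9521, -3.1965), (55.9519, -3.2027)],
    schematic_geometry=[(0.0, 0.0), (1.0, 0.0)],
)
```

A query may then select whichever geometry suits the current view, or blend the two, without joining across separate databases.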
1.2. From mixed-scale to mixed-space: creating perfect seams for multimodal wayfinding
The prevailing approach to storing, querying and ultimately visualising geospatial information is
based on a striated view of space in which objects are generically grouped together based on scale.
This approach is based on ‘lateral’ queries through our hierarchical conceptualisation of geography –
from global through to local, with a lateral query retrieving a sub-set of geographic objects at a given
level of detail. Here then, ‘perfect seams’ may be realised by providing
information that supports cognitively smooth transitions between varying levels of detail and varying
forms of representation, while maintaining geographic context.
Figure 1 The Event Horizon: a non-lateral ‘cut’ through a hierarchical data structure, supporting the
visualisation of both global context and local detail in a single view (Quigley, 2001)
Quigley’s ‘event horizon’, illustrated in Figure 1, addresses this challenge with a non-lateral ‘cut’
through the underlying graph, selecting a set of objects from varying levels of detail relative to a
single heterogeneous context. This work was applied more specifically to geographic information by
Mackaness et al. (2011); however, questions remain around the actual structure and content of the
underlying graph, and crucially, around the logic of both the event horizon query and the ultimate
method of visualisation.
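The logic of such a non-lateral cut can be sketched as a recursive traversal in which a node is expanded into finer detail only while it lies within a depth-dependent distance of the focus. Everything below (the toy hierarchy, the linear horizon function) is a hypothetical illustration of the idea, not Quigley’s actual algorithm:

```python
def event_horizon_cut(node, focus, distance, horizon):
    """Select objects from varying levels of detail relative to one focus.

    node     -- (name, centre, children) tuple for a hierarchy node
    focus    -- (x, y) point of interest
    distance -- function giving the distance between two points
    horizon  -- maps hierarchy depth to a distance threshold
    """
    def walk(n, depth):
        name, centre, children = n
        # Expand into finer detail only while the node is close enough
        # to the focus for its depth; otherwise keep the coarse node.
        if children and distance(centre, focus) <= horizon(depth):
            out = []
            for child in children:
                out.extend(walk(child, depth + 1))
            return out
        return [name]
    return walk(node, 0)

# Toy hierarchy: city -> districts -> streets (invented for illustration).
city = ("city", (0, 0), [
    ("district A", (-2, 0), [("street A1", (-2.5, 0), []),
                             ("street A2", (-1.5, 0), [])]),
    ("district B", (3, 0), [("street B1", (3, 0.5), [])]),
])
dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Near the focus we see streets; far away, only the coarse district.
view = event_horizon_cut(city, focus=(-2, 0), distance=dist,
                         horizon=lambda d: 4 - d)
```

Here `view` mixes levels of detail in one result set: fine-grained streets near the focus alongside a single coarse node for the distant district.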
Figure 2 Discrete spaces: from the topological to the topographic. Shown here: a) Open Street Map
Public Transport routes for Edinburgh and b) Edinburgh bus and tram network schematisation
(courtesy of Transport for Edinburgh)
The application of the event horizon concept to the visualisation of geospatial information is also
linked to research such as van Oosterom et al.’s work on ‘tGAP’ (topological generalized area
partitioning) and the ‘SSC’ (space-scale cube), which represents an approach to surfacing
‘mixed-scale’ geographic information both in terms of the data model and the ultimate visualisation (van
Oosterom and Meijers, 2011).
Figure 3 Edinburgh street and public transport networks: mixed-scale and mixed-space information
(OSM/ Transport for Edinburgh)
While researchers such as van Oosterom have investigated the problem of ‘continuous generalisation’
and the representation of features from different scales in single views, we argue that to achieve
‘perfect seams’ in applications to support urban wayfinding, it is also necessary to integrate the
Cartesian metric space defined by a geographic coordinate system, with non-metric schematisations
that better reflect simplified representations of public transport networks (i.e. map views that are often
based on a normalisation of routes to a horizontal, vertical or 45-degree orientation, with true
north also adjusted to suit the particular layout of the city in question – see Figures 2 and 3).
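The normalisation of route directions described above can be sketched as snapping each segment bearing to the nearest multiple of 45 degrees. This is a simplified illustration only; a production schematisation algorithm must also resolve overlaps and preserve the relative positions of features:

```python
import math

def snap_bearing(bearing_deg, step=45.0):
    """Snap a segment bearing to the nearest multiple of `step` degrees,
    as in octilinear transit-map schematisation."""
    return (round(bearing_deg / step) * step) % 360.0

def schematise(polyline, step=45.0):
    """Rebuild a polyline with every segment snapped to an octilinear
    direction, preserving each segment's length (a toy sketch only)."""
    out = [polyline[0]]
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        length = math.hypot(x1 - x0, y1 - y0)
        bearing = math.degrees(math.atan2(y1 - y0, x1 - x0))
        snapped = math.radians(snap_bearing(bearing, step))
        px, py = out[-1]
        out.append((px + length * math.cos(snapped),
                    py + length * math.sin(snapped)))
    return out
```

A nearly-horizontal tram segment is thus redrawn exactly horizontal, which is the basic move behind schematic views such as Figure 2b.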
1.3. The functional graph: formalising insights from spatial cognition
We propose the functional graph as an approach to storing a set of objects and relations that include,
in addition to the usual content of a geospatial database, explicitly defined objects and relations that
relate to the functional elements of the geography, i.e. aspects of the geography that ‘specify action’
(Freksa et al., 2005). The parameters of this additional data are bound by our use case – multimodal
public transport. Here our focus is on formalising transfer-points, turning-points and anchor-points
such that we are able to derive a heterogeneous set of objects and a geometry for the route, being
mindful of the average size and proportions of mobile devices. We then propose the event horizon as
the query logic for retrieving a sub-set of the objects and relations based on three contextual
parameters: the start location, the destination and the possible modes of travel.
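As a minimal stand-in for this query logic, the sketch below retrieves the sub-set of nodes on a best path through a toy multimodal graph, restricted to the allowed modes. The network content and function names are invented for illustration and do not reflect the actual functional graph schema:

```python
import heapq

def route_subgraph(edges, start, destination, allowed_modes):
    """Return the sub-set of nodes on the cheapest path from start to
    destination, using only edges whose mode is allowed."""
    adjacency = {}
    for a, b, cost, mode in edges:
        if mode in allowed_modes:
            adjacency.setdefault(a, []).append((b, cost))
            adjacency.setdefault(b, []).append((a, cost))
    # Standard Dijkstra search with parent pointers for path recovery.
    best = {start: 0.0}
    parent = {}
    frontier = [(0.0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == destination:
            break
        if d > best.get(node, float("inf")):
            continue
        for nxt, cost in adjacency.get(node, []):
            nd = d + cost
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                parent[nxt] = node
                heapq.heappush(frontier, (nd, nxt))
    if destination not in parent and destination != start:
        return None
    path = [destination]
    while path[-1] != start:
        path.append(parent[path[-1]])
    return list(reversed(path))

# Toy network: walking links and a tram link joined at transfer-points.
edges = [
    ("home", "tram stop A", 5, "walk"),
    ("tram stop A", "tram stop B", 3, "tram"),
    ("tram stop B", "office", 4, "walk"),
    ("home", "office", 30, "walk"),
]
```

With all modes allowed, the retrieved sub-set surfaces the transfer-points of the multimodal route; restricting the modes to walking collapses it to the direct pedestrian link.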
Figure 4 Metric (WGS-84) multimodal journey extent relative to mobile device view
Figure 5 Basic functional graph output, visualised with polyfocal Cartesian distortion
In Figure 5 we show a mixed-scale, mixed-space view of a journey across central Edinburgh making
use of the tram network. Here, polyfocal Cartesian distortion simply serves to demonstrate the
concept visually, rather than being proposed as a viable end solution for visualisation.
This research can be seen as extending Tomko and Winter’s (2013) work on describing the functional spatial
structure of urban environments. While Tomko and Winter built on Lynch’s Image of the City
(1960) to create graphs based on the accessibility afforded by varying forms of urban transport, the
work presented here focuses on the likely cognitive salience of aspects of the route that can be used to
automatically derive a heterogeneous set of objects and geometries for visualisation. Drawing on
insights such as the anchor-point hypothesis (Couclelis et al., 1987), the functional graph offers an
underpinning data structure from which to ultimately surface single journey views of multimodal
routes optimised for small screen devices.
1.4. Conclusions and next steps
The paper concludes with a set of next steps centred on user testing to compare the efficacy of
our approach with established approaches, and to ascertain the optimal level of distortion in the
visualisation, for example the number of objects to be ‘drawn’ to the screen, the visual treatment of
the objects (e.g. how best to represent intermediate spaces that join different scales, while maintaining
context), and the extent to which relative distances and angles may be expanded, collapsed or rotated
without affecting a user’s recognition of the underlying character of the geography.
2. Acknowledgements
The authors would like to thank the Ordnance Survey and the Engineering and Physical Sciences
Research Council for supporting this research.
3. Biography
Lucas Godfrey is a PhD Researcher in the School of GeoSciences at the University of Edinburgh.
With a focus on spatial cognitive engineering and applied computational design, Lucas’s research
centres around how to model and visualise spatial information to support decision-making,
particularly in the context of complex urban transport networks.
William Mackaness is a Senior Lecturer in the School of GeoSciences at the University of Edinburgh.
William’s core expertise lies in map generalisation and automated cartography, in particular multiscale
map generalisation and geovisualisation.
4. References
Barkhuus, L. and Polichar, V.E. (2011). Empowerment through seamfulness: smart phones in
everyday life. Personal and Ubiquitous Computing, 15, pp. 629–639.
Couclelis, H., Gale, N., Golledge, R., & Tobler, W. (1987). Exploring the anchor-point hypothesis in
spatial cognition. Journal of Environmental Psychology, 7, pp. 99–122.
Freksa, C., Barkowsky, T., Klippel, A. and Richter, K. (2005). The Cognitive Reality of Schematic
Maps. In Meng, L, Zipf, A., Reichenbacher, T (Eds) Map-Based Mobile Services. Springer.
Jipp, M. and Ackerman, P. (2016). The Impact of Higher Levels of Automation on Performance and
Situation Awareness: A Function of Information-Processing Ability and Working-Memory
Capacity. Journal of Cognitive Engineering and Decision Making.
Mackaness, W., Tanasescu, V., & Quigley, A. (2011). Hierarchical Structures in Support of Dynamic
Presentation of Multi Resolution Geographic Information for Navigation in Urban Environments.
Proceedings of the 19th GIS Research UK Annual Conference University of Portsmouth.
Montello, D.R. (2005). Navigation. In Shah, P. and Miyake, A.(Eds.) Handbook of Visuospatial
Thinking, pp. 257–294. Cambridge University Press.
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman and S. W. Draper (Eds.), User-centered system design (pp. 31–61). Hillsdale, NJ: Lawrence Erlbaum Associates.
van Oosterom, P. and Meijers, M. (2011). True vario-scale maps that support smooth-zoom. 14th
ICA/ISPRS Workshop on Generalisation and Multiple Representation, Paris.
Quigley, A. (2001). Large Scale Relational Information Visualization, Clustering and Abstraction.
[online] Available at: https://aquigley.host.cs.st-andrews.ac.uk/aquigley-thesis-mar-02.pdf
Raubal, M. (2009). Cognitive Engineering for Geographic Information Science. Geography Compass,
3/3, pp. 1087–1104.
Richter, K.F., Tomko, M. and Coltekin, A. (2015). Are we there yet? spatial cognitive engineering for
situated human-computer interaction. In: Bertel, S., Kiefer, P., Klippel, A., Scheider, S., Thrash, T.
(Eds.), CESIP Workshop, Cognitive engineering for spatial information processes: From user
interfaces to model-driven design at the Conference on Spatial Information Theory XII (COSIT).
Skagerlund, K., Kirsh, D. and Dahlbäck, N. (2012). Maps in the Head and Maps in the Hand.
Proceedings of the 34th Annual Cognitive Science Society.
Weiser, M. (1994). Building Invisible Interfaces. Keynote talk, Proc. ACM UIST.