Tangible Objects: Connecting Informational and Physical Space

Peter Bøgh Andersen
Department of Computer Science, Aalborg University

Palle Nowack
Department of Computer Science, Aalborg University
Maersk Institute, University of Southern Denmark
1 Introduction: Where do you want to go today?
Information technology has changed the role that space and time play in our
lives. The Internet connects people who are geographically far apart, Timbuktu is only a mouse click away, virtual reality replaces real space with a
life-like simulacrum, and augmented reality smears an informational coating
over real space.
As many writers have noticed, the relation between our representations and
the physical reality we live in is changing fundamentally. In this paper, we
shall try to describe this change by means of three dimensions: the conceptual, the physical, and the informational.
Our main example will be a train station, so we start by observing the station from our three perspectives. From the conceptual perspective we all have
an informal idea about the artefacts, events and activities in play at the station. From the physical perspective, actual trains are entering and leaving certain platforms at certain times (delayed and on schedule). The informational
perspective is given by the signs, screens, timetables, etc. that are available to
passengers and staff; information about the physical events. The conceptual
perspective ties together the available information and the physical phenomena and facilitates understanding and planning of our activities.
Currently the boundaries between the perspectives are fixed according to
the available technology, but in the near future, we will see much more personalized and mobile applications of the same information provision. That is
the world of pervasive computing. Pervasive computing is a variant of augmented reality. A computer (informational) model of the station has been
constructed and the idea is to superimpose a view of this model over the
mental eyeglasses of the passenger, so that he can “see” the phenomena of the
station from another more informative perspective. The major thing required
is that the system displaying the informational model “knows” the location
and orientation of the passenger and the various trains, coaches, platforms etc.
What happens is very simple: we have two spatial coordinate systems, one
delineating the real space containing trains, platforms etc., and the other one
holding the computer model of the station. The user’s real position in the first
coordinate system is fed into the computer, and the station is displayed to the
user as it would have looked, had the model been located in the user’s real
coordinate system.
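To make the alignment of the two coordinate systems concrete, the following minimal sketch (in Python, with purely illustrative names and numbers) maps the user's sensed position in real space into the model's frame, after which the model can be drawn as it would appear from that pose.

# Minimal sketch (illustrative names and numbers): mapping the user's position
# in the real-space coordinate system into the model's coordinate system.
import math

def world_to_model(point, model_origin, model_rotation_deg):
    # Translate into the model frame and rotate by the frame's orientation.
    dx, dy = point[0] - model_origin[0], point[1] - model_origin[1]
    a = math.radians(-model_rotation_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

# The passenger's sensed position (metres from an arbitrary station origin) ...
user_position = (120.0, 35.0)
# ... expressed in model coordinates; the station model can then be rendered
# as it would look from this pose, which is the essence of the augmented overlay.
print(world_to_model(user_position, model_origin=(100.0, 30.0),
                     model_rotation_deg=12.0))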
Thus, in augmented reality, a computer model is displayed on top of our
view of reality, as it would have looked, had it really been at the chosen location. A classical example of augmented reality is the technique of projecting
the hidden wiring of a machine onto the spectacles of the repair mechanic
as they would appear to him, had they been visible. Virtual reality uses the
same technique of aligning two coordinate systems; only in this case the
model is displayed, as it would have looked to the user had he been located in
the model coordinate system.
From the point of view of software engineering, the new thing is that the
physical relations between the model, its referent and the user are systematically exploited.
This paper presents an analysis of these phenomena, and makes two points:
(1) We suggest various additions and refinements to the modelling techniques of software engineering in order to cope with the changed relationship between representations and physical space. In our approach we apply the conceptual framework associated with tangible objects as described by May et al. (2001).
(2) We pinpoint the kind of change computer-based signs will undergo as a consequence of this new technology: a whole new set of indexical signs will emerge.
However, before we do that, we will present existing everyday examples of
the same interaction between representation and physical space to get a first
shot at possible solutions.
2 Indexical signs
As indicated in the introduction, virtual and augmented reality imply a
change in the relation between space/time, interpretations and representations. In this section we shall start to characterize this change by discussing
existing examples of the phenomenon. We shall use semiotics as a frame of
reference.
Representations or signs are physical objects or processes (representamens)
that are used in a special way: they are taken to stand for something other
than themselves (their object), and they cause a reaction — an interpretant
that relates the representamen to its object— in the perceiver. This reaction
can be the production of a new sign (e.g. a verbal comment) but it can also
consist in performing an action or refraining from performing an action, or
forming an abstract concept or rule.
The relation between object and representamen can be of three types:
iconic (the representamen is similar to the object), indexical (the representamen and object enter into a causal relation) or symbolic (the representamen
is related to its object by pure convention).
The signs we are interested in contain strong indexical elements because of
the increased interaction between the model and its surrounding physical
space.
In some traditional indexes, the representamen is attached permanently to a
physical location; its object is determined by its location whereas its interpretant is determined by convention. In Fig. 2.1, the representamen is interpreted as an assertion, “This place is called Århus”, but the object referred to
is determined by the physical location of the sign. If located in Aarhus Central Station, the proposition is true, but if the sign is moved to Hamburg
Hauptbahnhof, the proposition expressed by the self-same sign is false.
Fig. 2.1. Naming. "Aarhus H" = "This place is called Århus".
Fig. 2.2. Analysis of indexical sign (the triangle of interpretant, representamen, and object).
Interpretants are of different types; for example, the sign “Struer” in Fig. 2.3
should be taken not as a statement but as a promise: “This train will be going
to Struer”. But again the object the railroad company promises to take to
Struer, i.e. the train, is determined by the location of the sign: the object is the
train the sign is attached to.
Fig. 2.3. Promising. “Struer” = “This train will be going to Struer”.
Finally, the sign “crossing prohibited” in Fig. 2.4 is a prohibition “You are
forbidden to cross here” where the location part of the sentence, here, is determined by the location of the sign.
Fig. 2.4. Regulating and controlling. “Crossing prohibited” = “You are forbidden to cross
here”. Physical and symbolic control.
Signs can be attached not only to locations but also to times. Fig. 2.5 is taken
as a promise that “At this time trains will be leaving for Struer from these
platforms” where the time is fixed by the real time and the location is determined by the placement of the monitors. Thus, the physical space we are interested in is really the space/time continuum.
Fig. 2.5. Attaching signs to time. “11.18, Struer Regionaltog…” = “At this time trains are
leaving for Struer from these platforms”.
Thus, there is nothing new in exploiting the physical relation between object
and representamen. We have always decorated our surroundings with representamens relevant to particular locations.
The new thing is that this relationship is no longer static but dynamic. Mobile technology can sense its own location, and, given information about other
objects at this location, it can present representamens of these objects to the
user.
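As a minimal sketch of such a dynamic index (Python, with hypothetical names and data): the representamen queries its spatial and temporal context, here a platform identifier and the current time, and exchanges one object for another accordingly, as the departure monitor of Fig. 2.5 does.

# Minimal sketch (hypothetical names and data): a dynamic indexical sign whose
# object, the set of relevant departures, depends on its location and the time.
from datetime import datetime, time

DEPARTURES = [                      # (platform, departure time, destination)
    ("3", time(11, 18), "Struer"),
    ("3", time(11, 42), "Aalborg"),
    ("5", time(11, 20), "Copenhagen"),
]

def monitor_text(platform, now):
    """The representamen adapts to its spatio-temporal context: only departures
    from this platform that still lie ahead of 'now' are displayed."""
    rows = [f"{t.strftime('%H.%M')}  {dest}"
            for p, t, dest in DEPARTURES if p == platform and t >= now.time()]
    return "\n".join(rows) or "No further departures"

print(monitor_text("3", datetime(2002, 5, 14, 11, 0)))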
Fig. 2.6. Indexical signs: interpretant (promises, prohibitions, etc.), object (trains), and representamen (monitors), all connected to time and space.
In static signs like Fig. 2.2, the representamen so to speak incorporates part of
the object, since the interpretation is based on the co-occurrence of both of
them.
In dynamic signs like Fig. 2.5, the representamen has access to its changing
spatial and temporal context and can adapt accordingly, exchanging one object for another, dependent upon the time and its location. Furthermore, some
objects are able to physically change their representamen. These three causal
paths are shown in Fig. 2.6.
Some dynamic indexes only incorporate a causal interaction between object and representamen. This is the case in process control. For example, the
rudder angle display on a ship bridge can describe the rudder angle because a
sensor attached to the rudder keeps sending signals to the display (Fig. 2.7).
Conversely, the steering wheel (representamen) can change the rudder angle
(the object denoted by the wheel position).
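A minimal sketch of the two causal paths just described (hypothetical names): the sensor path lets the object update its representamen, and the actuator path lets a representamen, the wheel, change its object.

# Minimal sketch (hypothetical names) of the two causal paths in process
# control: sensor (object -> representamen) and actuator (representamen -> object).
class Rudder:                        # the physical object
    def __init__(self):
        self.angle = 0.0             # degrees

class RudderDisplay:                 # representamen fed by a sensor
    def read(self, rudder):
        return f"Rudder angle: {rudder.angle:+.1f} deg"

class SteeringWheel:                 # representamen acting as an actuator
    def turn(self, rudder, delta):
        rudder.angle += delta        # the representamen physically changes its object

rudder = Rudder()
SteeringWheel().turn(rudder, delta=5.0)
print(RudderDisplay().read(rudder))  # -> "Rudder angle: +5.0 deg"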
Fig. 2.7. The rudder angle (object) influences the rudder angle display (representamen), yielding an assertion about the rudder angle (interpretant).
In the next section we shall look at central concepts from systems design and
discuss how they can be changed in order to cope with the changed relationship between representamen and physical context.
3 Models and domains
Most software development methodologies (e.g. OOA&D as described by
Mathiassen et al., 2001) emphasize the importance of software models; we
construct models of the software implementation, the usage context, the
problem domain, and sometimes even of the development context, before we
embark on the actual programming. Models are signs denoting various domains; they support our understanding, and they can be applied as a communicative vehicle for controlling the software development processes. In Fig.
3.1 (adapted from Jacobson et al., 1999) we illustrate the role of modelling in
a software development process. For the sake of simplicity, we have chosen
to focus on two of the human roles involved in a software development effort,
the user and the developer. Other examples could include the customer/client,
the system maintainer, the system administrator, etc. (we refer to Jacobsen et
al. 1998 for a more in-depth discussion).
In the following we explain and exemplify four different domains one
would need to model in software development. As an example we consider
the task of developing an information system for a railway station as introduced in Section 2.
Fig. 3.1. Models and domains: the problem, application, development, and solution domains, each with its corresponding model (PD, AD, DD, and SD models), related to the system, the user, and the developer.
Starting from the left side of the illustration, a user (or more precisely a user
organization) is supposed to interact in a certain way with the system to be
constructed; we term this the application domain. In the example the user is
typically a passenger, although the station staff could also be users of (parts)
of the system. As a part of the analysis effort, it is common to construct models denoting the users' interactions with the system, e.g. by describing use cases (cf. RUP, Jacobson et al., 1999). An example would be a passenger
walking up to the signpost (the screen) in Figure 2.5 looking for information
about trains leaving from this particular platform. He checks the screen in order to make sure that the destination (Struer) and the departure time (11.18) are relevant for his needs. Another example would be a passenger sitting in a
train entering the station. From his window he can see the station as depicted
in Figure 2.1. He checks whether he has reached his destination, and he also
has the possibility of checking the current time. Hence, a use case would describe how the passenger interacts with the station information system.
The problem domain is the part of the real world the system is supposed to
control, administer, regulate, etc. The problem domain can be modelled with
special signs, such as entity-relations diagrams or object-oriented class diagrams. Because we want to communicate with the user about the correctness
of our models, it is important that they reflect the user’s own understanding
(e.g. use his/her vocabulary and conceptual relations). Hence, the models of
the domains are depicted as "the way the user thinks" about the domains. In
the example the objects in the problem domain would comprise different
types of trains (regional, intercity, or express trains), the current time, departure times, arrival times, platforms, tracks, destinations, different types of
coaches (standard, business, rest/silent, smoking, non-smoking).
Similarly, we can focus on the role of the developer, and identify the activities the developer is engaged in, and his perspective on the system to be
constructed. The developer interacts with the system in the development domain. Typically, software development methodologies model exactly this
domain: activities (requirements elicitation, analysis, design, implementation,
test, deployment), mental category of activity (construction, understanding,
change, reuse), modelling tools (e.g. UML). The software development approach chosen for this specific project together with the developer’s previous
work experience reflects the conceptual framework for the development project. Typically, the development domain is not modelled explicitly because
the education and experience of the developers are considered to be sufficiently coherent to facilitate a manageable process. However, when dealing
with changing staff, maintenance, or further development (additions and/or
changes) of legacy systems, the importance of having an explicit development
domain model becomes significant in order to maintain the overall picture and coherence of the project.
The concepts the developer uses to think about the system to be developed
are in what we call the solution domain. Examples include architectures,
frameworks, patterns, components, libraries, programs, procedures, variables,
if-statements, etc. The models of the solution domain are what we usually call
the software design. Hence in the example the solution domain would comprise the design (the model of the implementation) of the railway station information system.
4 Models and referent systems
In the following we outline object-oriented modelling, which is a crucial part
of any object-oriented software engineering methodology. The modelling tradition is closely associated with the so-called Scandinavian school of object-orientation, cf. DELTA (Holbæk-Hansen et al., 1975) and BETA (Kristensen
et al., 1983).
One normally distinguishes between a referent system (RS) and a model
system (MS). This distinction is a result of applying a perspective; we select
what to consider in the referent system and what to consider in the model
system. Any system can be either a referent system or a model system: it is a
role played by the system. The purpose of modelling is to get an understanding of the referent system through the application and examination of the
model system. The model system exposes certain properties of the referent
system. A referent system typically is implicit, whereas a model is explicit.
A software development project includes several such modelling activities,
cf. the domains described in the previous section: problem domain, application domain, development domain, and solution domain. For each domain we
identify what to consider and what not to consider as being part of the system
to be modelled.
Fig. 4.1. Referent and model system: phenomena and concepts in the referent system correspond to representations and descriptions in the model system, related through interpreting, abstraction, modeling, and specification.
A fundamental human role is the modeller: the person that defines and applies
the modelling perspective. This implies selecting the referent system and the
model system, and selecting the relation between them. Typically in a large
project, it is not the case that the person constructing the actual model is the
only person interpreting the model. The models created must yield the “correct” understanding, when applied to interpret the referent system, i.e. the
modeller role can be divided into the model-constructor and the model-interpreter. In large system development projects, the systems analyst will
build the object-oriented model of the problem domain, and then he will hand
over these models to the system designers, who will design the solution.
However, for the sake of simplicity, these roles are often described as a single
role.
As part of the perspective applied when observing a system from a referent
point-of-view, we consider phenomena and concepts in the referent system.
Phenomena can be classified/categorized into concepts, and concepts can be
exemplified by phenomena. Concepts can be related in whole-part hierarchies: a concept can be decomposed into a set of simpler concepts, and a set
of concepts can be aggregated to form a new concept. Concepts can also be
related in is-a hierarchies: a concept can be a specialization of another concept, and a concept can be a generalization of another concept.
Another fundamental human role is thus “the abstracter”: the person that
applies the concept formation processes of generalization/specialization and decomposition/aggregation, as well as the abstraction processes of exemplification and classification.
In the model system we describe concepts by means of classes and we represent phenomena by objects. When a number of phenomena in the referent
system can be classified into a concept, the corresponding class can be applied to generate the corresponding objects.
The third fundamental human role is “the generator“: the person or tool that
generates objects based on the class descriptions. The generator also manipulates (combines/decomposes) class descriptions to form new class descriptions. The generator can also reverse-generate by forming class descriptions
based on objects.
As examples we consider the problem and solution domains.
When doing analysis we try to identify phenomena and concepts from the
users’ point of view. In the example we could e.g. construct the (not very
clever) model of the problem domain shown in Fig. 4.2.
Fig. 4.2. Classes and associations in the railway model (Location, Station, Platform, Train, Coach, Event, Departure, Arrival), written in UML notation. The arrows have the following standardized meaning: large arrows denote a generalization relationship, where the specialized concepts point to the generalized concept; small arrows denote concept associations (other than generalization and aggregation).
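To make the diagram concrete, the following sketch renders the classes of Fig. 4.2 in code (Python). The class names follow the figure; the particular generalization and association structure shown below, and all attributes, are our own illustrative assumptions, since they cannot be fully recovered from the diagram.

# Sketch of the (deliberately naive) problem-domain model of Fig. 4.2.
# Class names follow the figure; relations and attributes are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

class Location: ...                      # generalized concept
class Station(Location): ...             # specializations of Location
class Platform(Location): ...

@dataclass
class Coach:
    kind: str = "standard"               # e.g. standard, business, silent

@dataclass
class Train:
    coaches: List[Coach] = field(default_factory=list)    # aggregation

@dataclass
class Event:                             # generalized concept
    time: str = ""
    platform: Optional[Platform] = None  # association to a location
    train: Optional[Train] = None        # association to a train

class Departure(Event): ...              # specializations of Event
class Arrival(Event): ...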
When doing design we try to identify the required concepts, and create corresponding phenomena. As part of an object-oriented design, we almost always
carry over a modified model of the problem domain into the implementation
(typically termed the "model component"). For example, we could choose
to implement the Train and Event phenomena as active objects (simple processes or perhaps agents), and we could choose to create the location hierarchy
as a relational database, whereas we would design the Coach phenomena to
be represented in the system as a traditional (non-active) object. More importantly, we design the patterns of interactions between the different representamens (objects, components, agents, processes, database records), and we
organize the design of the system to yield properties such as maintainability,
flexibility, and reusability.
Referent systems can be found in different types of domains. We can distinguish between focus domains and activity domains (generalized from
OOA&D, Mathiassen et al., 2001). The two previous examples (analysis and
design of the train system) are examples of focus domain models.
Fig. 4.3. Focus and activity domains: both the focus domain and the activity domain have corresponding models, related to the software system.
When modelling the focus domain we focus on the conceptual framework of
the referent system (e.g. the problem domain or the solution domain). When
modelling the activity domain (e.g. the application domain and the development domain) we focus on the human processes associated with the focus
domain: how do we as humans interact with the physical representamen (the
computerized model) of the focus domain?
Every referent system is only a single perspective on the real/perceived
world. The modelling perspective guides this selection process.
Let us now apply the semiotic framework introduced in Section 2 to the
modelling concepts above.
The software engineering concept of a model is clearly a representamen,
something tangible used to refer to something else. In one interpretation, the
referent system is a conceptual phenomenon, i.e. it is the particular way we
choose to relate the model to reality: the selections we make, the perspective
we apply, and the concepts we form. Thus, the referent system is the interpretant of the model. If the model is a UML diagram, such as Fig. 4.2, the
interpretant consists of the UML concepts, such as class, object, association,
etc. If the model is a diagram of a piece of software, then the interpretant
could be design patterns, and the object the actual existing code. As mentioned at the beginning of this section, representamen (model) and
interpretant (referent system) are roles that can be played by any phenomenon. For example, a computer system can be the object of a design pattern
model, but itself be a representamen (a model) referring to a train station.
5 Suggestions for enhancements
The tradition described above has two problems in dealing with the new technology. Both of them relate to an unclear relation to physical reality.
1. It is not clear whether the referent system is a physical object or a mental construct. The occurrence of concepts in the referent system indicates the latter interpretation if we believe that concepts are not "out there", whereas the term phenomena supports the former interpretation. But these two interpretations are incompatible — how can physical phenomena and concepts form a system? However, "phenomena" can also mean "our perception of the things 'out there'", but in this case the physical world is not represented in the model at all.
2. The location of the model does not play an important role in the framework. The choice of the device on which to locate the model is mainly seen as an implementation issue concerned with efficiency and stability. In particular, change of the location of the model is outside the framework.
These two features of the tradition constitute a major problem since the new
technology is characterized by systematically exploiting the physical relation
between the referent and the model, and between the model and its location.
In Fig. 5.1 we have added a physical object to the referent system and the
model, we have connected both object and model to the fabric of time and
space, so that both know their own location and the time, and we have added
the possibility of object and representamen physically influencing one another
(compare to Fig. 2.6).
Fig. 5.1. Referent and model system with indexical properties added: the interpretant (referent system), the object (physical referent), and the representamen (model) are all connected to time and space.
These amendments are necessary in order to cope with the new situation (this
situation has been normal for many years in process control where computer
models can sense the controlled plant via sensors and change it via actuators).
However, the diagram can be generalized to acquire even more explanatory
power. Adding time and space does make the model sensitive to the physical
space in which it is located, but there are other contexts that are just as relevant for understanding current changes in technology.
One of these can be called the informational space. It consists of all logical
pathways that connect models and which allow them to interact. Models are
not only able to move physically because the devices they reside on move,
they can also “move” in the informational space from one device to another.
The clearest example is mobile agents (White 1996). The user creates a
small piece of software and instructs it to gather information or perform tasks
of a certain kind. The agent is frozen (its current state variables are recorded)
and it is sent out in the world to do its job.
In the proposal for Mobile Agents, a special habitat at the destination end is
necessary. The Mobile Agent enters this habitat, comes to life, and begins
executing.
If the information is not found at the current place, the agent can travel on
to other places — or, if the task is deemed large, it can spawn children and
send them out in different directions in order to perform searches in parallel.
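As a rough sketch of this idea (not White's actual API; the names and the serialization mechanism are our assumptions): the agent's state is frozen, shipped to a habitat on another machine, thawed there, and either reports a result or travels on.

# Sketch (illustrative, not White's API): an agent is frozen, shipped to a
# habitat, thawed, and run against the information available at that place.
import pickle

class SearchAgent:
    def __init__(self, query, visited=None):
        self.query, self.visited = query, list(visited or [])

    def run(self, habitat_name, local_index):
        self.visited.append(habitat_name)
        if self.query in local_index:       # found: report back
            return f"'{self.query}' found at {habitat_name}"
        return None                         # not found: travel on or spawn children

agent = SearchAgent("timetable Struer")
frozen = pickle.dumps(agent)                # current state recorded ("frozen")
thawed = pickle.loads(frozen)               # comes to life in the habitat
print(thawed.run("StationServer", local_index={"timetable Struer": "..."}))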
Finally, it is not only the model and the physical object that interact with a
contextual space; this is also true of the conceptual part of the triangle, the
interpretant. It too interacts with a larger social context of related concepts.
In summary, the complete model will contain representamen, object and
interpretant each of which can interact with physical, informational, and conceptual contexts and is able to move in the corresponding spaces (Fig. 5.2).
Thus, a model can move both in physical space, because its device moves,
and in informational space via the associations it contracts with other models.
Based on this speculative approach of trying to combine object-oriented modelling with computer semiotics, we end up with a model that resembles the
model of tangible objects described in May et al., 2001.
Fig. 5.2. Basic model of signs and their context: the interpretant (referent system) interacts with a conceptual context, the representamen (model) with an informational context, and the object (physical referent) with a physical context.
6 Self-reference
A third new feature is inherent in the notion of embedded software, i.e. software that is embedded in a physical object it controls. Normally we assume
that the model is disjoint from its referent system — the model so to speak
stands at a distance and represents its object. In embedded technology, the
model may be located inside the object it denotes.¹ This means that some embedded systems tend to be self-referential since they are a part of the objects
they represent (Fig. 6.1).
Fig. 6.1. Magellan 750M Mobile navigation system denoting its own position. “I am
here”.
¹ The observation is due to Bent Bruun Kristensen.
Although this creates a rather abnormal relation between model and referent
system, seen from an OOA point of view, the phenomenon is well-known in
process-control. For example, the rate of turn indicator on a ship bridge signifies the speed with which the ship, and thereby the indicator itself, turns, and
the radar shows the position of the ship, including the radar itself. Since the
model-representamen represents its object, and since the object includes the
model, it follows that the model must represent itself either explicitly or implicitly.
The kind of mobile technology subsumed under the heading “tangible objects” may also invite self-reference. Since, by definition, tangible objects
enter into changing informational, physical and social spaces, and must be
able to adapt their behavior meaningfully to the changing contexts, they need
to be able to distinguish representamens of their environment from representamens of their internal state. Dynamically evolving systems, such as WWW servers, already have to be aware of and analyze their informational environment, e.g. which helpers and plug-ins are currently available on the machine
they are running on. With tangible objects, the ability to analyze the physical
and social environment is added. This means that the system-environment
distinction gains importance (Luhmann 1984). The system must differentiate
between itself and the environment: where am I?
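A minimal sketch of this self-reference (illustrative names): the device's model of the world contains an entry denoting the device itself, so answering "where am I?" amounts to separating that entry from the rest of the environment.

# Minimal sketch (illustrative names): an embedded model whose tracked objects
# include the device itself, so "self" must be distinguished from "environment".
class NavigationModel:
    def __init__(self, own_id):
        self.own_id = own_id
        self.tracked = {}                      # object id -> position

    def update(self, obj_id, position):
        self.tracked[obj_id] = position        # updates may concern the device itself

    def where_am_i(self):
        return self.tracked.get(self.own_id)   # the self-referential part of the model

    def environment(self):
        return {k: v for k, v in self.tracked.items() if k != self.own_id}

model = NavigationModel(own_id="magellan-750")
model.update("magellan-750", (57.05, 9.92))    # own position fix: "I am here"
model.update("train-11.18", (57.04, 9.90))
print(model.where_am_i(), model.environment())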
7 Using the Model: Associations and Habitats
In the following sections we shall show how to use the model defined above.
The scenario lies in the future, although possibly not very distant.
7.1 Scenario
As Joe wakes up Tuesday morning, his alarm device informs him that the
train towards Struer is planned to run as scheduled. The night before he used
his wireless PDA to request a notification of this information (the representamen moves from Timetable System to the PDA. The PDA is sensitive to the
temporal context and holds a model of Joe’s travel schedule).
As he showers he thinks about the upcoming meeting in Struer, and when
cooking his breakfast he activates the memo-recording system of the house
and records a number of voice messages that clarify points in his presentation
(the representamen moves from the sound medium to the recording system).
The house alarm system notifies him that the city bus to the train station
will be arriving 5 minutes later than scheduled. He has an extra doughnut,
selects a different tie, and as he exits the building he is notified that he forgot
his PDA. He returns and picks it up, and as he exits again, the interpreted
voice memos are downloaded as text to the PDA (bus information moves to
the house alarm that is sensitive to the temporal context; the House System is
sensitive to the spatial location of PDA and Joe; the memo system is sensitive
to the location of the PDA).
On the bus he looks over the slides for his presentation and makes a few
corrections according to the memos from the shower. The PDA beeps as they
approach the station; time to get off (the PDA is sensitive to its location).
As Joe enters the station through the main entrance, he slips his PDA into
the pocket. Don’t want to fall over peoples’ legs as he did the last time he
tried to read the morning paper while running to the train.
However, the PDA is still working, and as he approaches a large screen
with departure information, the display changes, and the train for Struer (for
which he has a reservation) is displayed together with other departures (the
display is sensitive to the physical location of the PDA and creates an informational pathway to it when it is close; it adapts its display to the travel information it receives from the PDA).
He gets the PDA out of the pocket, and a small map showing his current location, the departure platform, and a path for getting there is displayed. In the
lower right corner a timer is counting downwards: he still has 8 minutes to get
there (the PDA is sensitive to the social significance of its physical location
and the actions normally associated to this space and time).
Down with the PDA again and he walks towards the train. At one point he
takes the wrong direction, and the PDA notices it and displays the correct
way. He ignores it, as he just wanted to get another coffee to go on the train.
Finally he reaches the platform, and the PDA now shows him where his
carriage will be located when the train arrives (same as before).
He waits, the train arrives, and as he enters, the ticket information is verified with the local train system (the ticket system is sensitive to the physical
location of the PDA and creates an informational pathway to it).
As he finds his seat (on his own!) the PDA is updated with a map of the
train and a city map of Struer. Joe falls asleep.
7.2 Associations
In the scenario we have assumed the following:
• Models and referent systems: the PDA holds models of the travel schedule. The Timetable System and the PDA hold models of the social use of specific locations.
• Physical context: the PDA is sensitive to its temporal and spatial location, the House System to the temporal context.
• Object: the House System is sensitive to the physical location of the PDA and Joe. The Memo System is sensitive to the location of the PDA. The Railroad Display and Ticket System are sensitive to proximate PDAs.
• Informational context: the following pathways are created: PDA – Timetable System, House Alarm System – Bus System, Memo System – PDA, Display System – PDA, Ticket System – PDA.
Fig. 7.1 summarizes the associations in physical and informational space assumed by the scenario.
Fig. 7.1. Summary of physical and informational associations in the scenario (Joe, the PDA, the House System, Memo System, Timetable System, Bus System, Display System, and Ticket System, situated in space and time). Full arrows: physical associations, dashed arrows: informational associations.
We see that we can have physical associations alone, as when the House
System discovers that Joe but not the PDA is leaving the house. We can also
have informational associations alone, as when the House System accesses
the Bus System and warns Joe that the bus is late. And we can have both at
the same time, where the Display and Ticket Systems sense the presence of a
PDA and request travel information from it.
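The following sketch (Python, with illustrative names, distances, and thresholds) shows the combined case: a physical association, proximity, triggers an informational pathway from the Display System to the PDA.

# Sketch (illustrative names and thresholds): physical proximity triggers an
# informational pathway between the Display System and a PDA.
import math

class PDA:
    def __init__(self, position, reservation):
        self.position, self.reservation = position, reservation

class DisplaySystem:
    def __init__(self, position, range_m=10.0):
        self.position, self.range_m = position, range_m

    def sense(self, pda):
        """Open an informational pathway only when the PDA is physically close."""
        if math.dist(self.position, pda.position) <= self.range_m:
            return f"Highlighting departure: {pda.reservation}"   # pathway opened
        return None                                               # no pathway

display = DisplaySystem(position=(0.0, 0.0))
pda = PDA(position=(3.0, 4.0), reservation="11.18 Struer")
print(display.sense(pda))        # within range, so the display adapts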
7.3 Habitats
We have also seen that some of the devices must hold a model of activities
appropriate to particular locations and times. For example, the Display System must know that when travellers are located in the Station Hall, they are
probably on the way to catch a train and therefore may need information
about arrival and departure times. Furthermore, the PDA too must contain a
model of what happens in Station Halls, otherwise it would not be able to
choose to display a map with a path to the correct platform.
A space is thus more than just GPS information about latitude and longitude. Similarly, sensible use of time coordinates requires a model of what
takes place in that period of time, e.g. a model of the travel schedule.
Adopting the terminology of tangible objects (May et al., 2001) we shall
call such units habitats. A habitat is thus a segment of space/time that has associated to it a set of socially standardized action possibilities. A model of a
habitat must specify the spatio-temporal coordinates and the associated action
potentials of the habitat.
Non-computerized habitats are already a part of our everyday life. Most of
the spaces that surround us are designed for a small set of actions, and exclude many others: dining rooms are for eating, bedrooms for sleeping, and
kitchens for cooking.
In a railroad station, signs control the movements and activities of the passengers. Fig. 7.2 requires passengers to stand single file to the right, Fig. 7.3 defines the space as an eating place, and the signs in Fig. 7.4 distinguish between toilets (to the left) and the ticket counter (to the right).
Fig. 7.2. Sign post located above escalator. Only single persons standing to the right are
allowed.
Thus, habitats are really signs that use architecture and signposts to denote
specific activities and where the interpretant consists in performing exactly
those activities.
According to our general model of tangible objects, habitats can be computerized and in this capacity contain an executable model of their activities. Furthermore, they can be sensitive to the objects denoted by this model: people and devices, and they can sense the time to differentiate between actions appropriate for some times but not others.
Habitats could be seen as a “docking” facility for movable devices. What
actually will happen depends upon the collection of models residing in the
habitat and those brought along with the movable device. If the habitat offers
a service the device does not support, this service is disabled, and similarly
with services allowed by the PDA but not supported by the habitat.
Fig. 7.3. This place is for eating
Fig. 7.4. This place is for buying tickets
We have the same situation in non-computerized habitats. On the one hand,
my desire and ability to eat cannot be satisfied at the ticket counter: here only
ticket-selling and information giving can take place. On the other hand, selling tickets requires me to be able to buy them, which I cannot do if I have forgotten my wallet.
In the words of Lind (2000), one can say that I bring with me a set of capabilities and the habitat offers a set of opportunities. Only those capabilities
that are supported by corresponding opportunities can be realized.
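A small sketch of this matching (the set representation is our illustrative assumption): only actions that appear both among the device's capabilities and among the habitat's opportunities are realized.

# Sketch (illustrative): only capabilities matched by corresponding habitat
# opportunities can be realized.
def realizable(device_capabilities, habitat_opportunities):
    return device_capabilities & habitat_opportunities

pda = {"show_platform_map", "present_ticket", "record_memo"}
station_hall = {"show_platform_map", "sell_food", "sell_tickets"}
platform = {"present_ticket", "show_platform_map"}

print(realizable(pda, station_hall))   # {'show_platform_map'}
print(realizable(pda, platform))       # ticket control can happen here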
The railroad station hall is a habitat where the supported actions include
passengers moving to the right platform at the right time. Therefore Joe’s
PDA was able to download map information of this kind and display it to
him. There is a match between the capabilities of the PDA and opportunities
offered by the railroad station.
Joe’s PDA also has capabilities to display his ticket to the correct authorities, but the hall does not hold corresponding opportunities so ticket control
does not happen here. However, the habitat at the platform where passengers
are supposed to embark and disembark trains does offer a facility for inspecting Joe’s PDA when triggered by spatial contiguity.
8 The conceptual context
What does it mean for a tangible object to interact with its physical, informational, and conceptual context?
The first two are easy: a tangible object interacts with its physical
context if it can receive and process information about the physical properties
of the context, e.g. spatial properties like position. A tangible object interacts
with its informational context when it communicates properly with information services. But what does it mean that it interacts with its conceptual context, and what is, by the way, a conceptual context?
The first thing to note is that the object does not interact with concepts at
all. It is the user that does so. The conceptual part of the tangible object is the
interpretant, so the context must contain other interpretants with which it can
interact. This again means that habitats must be signs. This is clearly the case
with Fig. 7.3 and 7.4. In both cases, posters, sign-posts and decoration can be
conventionally interpreted as assertions that here the passenger can get
something to eat or buy tickets.
Conceptual interaction thus means interaction between the interpretation
assigned to the habitat and that assigned to the tangible object. If Fig. 7.3 says
“Eating is possible here” the appropriate response of the user is to buy a hotdog but he can also just sit down at a table. If Fig. 7.3 says “Eating is mandatory here”, the latter is not allowed without buying something.
Thus, the proper functioning of the tangible object requires the habitats to
be interpretable signs too. If the platform is a habitat for ticket control, then of
course it must sense the physical presence of the PDA and it must be able to
establish informational contact to it. But, in addition, it must present information to the user that it is in fact a place for ticket control, since otherwise the
actions of his PDA would be unintelligible to him.
9 Conclusion
Let us now return to the OOA/OOD methodology from Sections 3 and 4. In these
system development methods, the accessibility space and the physical space
were two unrelated entities. The accessibility space, for example, is used to
design object-oriented systems: which objects have access to which other
objects? In which ways do they interact? Physical space, on the other hand, is
a matter of implementation: on which machines should which objects reside?
In principle and according to theory, informational space should be defined
independently of physical space (although practice shows another picture).
The new thing is that physical space acquires significance for informational
space. Physical space is no longer merely a means for realizing informational
space, but determines in a significant way which paths should exist in the informational space. The connection between the Ticket System and the PDA
was triggered by physical contiguity, and the download from the Memo System to the PDA was triggered by the PDA leaving the house. The map shown
on Joe's PDA in the railroad station is downloaded from the Railroad System
qua habitat for passengers looking for the correct train, but the connection
was only opened by Joe being physically present in the habitat.
One way of capturing this changed role of physical space in relation to accessibility space is to say that physical space has become part of the representamen that produces the interpretation, in the same way as the physical location of the signpost "Århus Central Station" in Fig. 2.2 contributes to the
interpretation of the letters on the signpost: one location makes the statement
of the signpost true, another one makes it false.
Consider again Joe’s leaving his house without his PDA and the House
System warning him. The warning sound only makes sense because Joe is
simultaneously performing a physical movement that may land him in an undesirable situation if no warning is issued. It is the combination of Joe's
physical movement and the symbolic warning that makes sense. Had the
warning sounded when Joe was watching television, he would have considered
it an error.
Thus, the revolution in computer technology we may see in the next years
can be summed up in one sentence:
• Physical space begins to contribute to the interpretation of informational space.

Or even shorter, with a concept from Section 2:

• Computer systems become indexical signs.
10 Acknowledgements
Thanks to Daniel May and Bent Bruun Kristensen for ideas and exciting discussions.
11 References
Holbæk-Hansen, E., P. Håndlykken, K. Nygaard (1975). System Description
and the Delta Language. Norwegian Computing Center, Publ. No. 523.
Jacobsen, E. E., B. B. Kristensen, & P. Nowack (1998). Models, Domains, and Abstractions in Software Development. Proceedings of the International Conference on Technology of Object-Oriented Languages and Systems (TOOLS Asia'98), Beijing, China.
Jacobson, I., G. Booch, J. Rumbaugh (1999). The Unified Software Development Process. Addison-Wesley.
Kristensen, B.B., O.L. Madsen, B. Møller-Pedersen, K. Nygaard (1983). Abstraction Mechanisms in the BETA Programming Language. In Proc. 10th ACM Symposium on Principles of Programming Languages.
Lind, M. (2000). Actions, functions and failures in dynamic environments. Center for Human Machine Interaction. CHMI-8-00. http://www.cs.auc.dk/~pba/ReportList
Luhmann, N. (1984). Soziale Systeme. Frankfurt am Main: Suhrkamp.
Mathiassen, L., A. Munk-Madsen, P. A. Nielsen, J. Stage (2001). Object-Oriented Analysis & Design. Marko.
May, D.C., B. B. Kristensen & P. Nowack (2001). Tangible Objects - Modeling in Style. Technical Report R-01-5004 (ISSN 1601-0590), Aalborg
University.
White, J. (1996). Mobile Agents White Paper. http://www.genmagic.com/agents/Whitepaper/whitepaper.html