
Semiotic models of algorithmic signs
1 Introduction
This paper reviews various models of computer based signs, and concludes by suggesting a
model that encompasses most, but not all, of the insights of the previous models. The review
is not exhaustive; it only includes papers and books I am acquainted with and understand.
Parts of this paper were written collaboratively with Frieder Nake. I no longer remember
who wrote what. It is to become a part of a textbook we hope to finish when we retire or
some other miracle happens. Sections 3.2 – 5 are my own contributions and should not pollute Frieder’s otherwise respectable academic career.
2 Basic concepts
A sign is something we use to stand for something else, and a computer based sign is a sign in which (parts of) a computer system serves as the sign-vehicle. The motivation for viewing computer systems as sign-vehicles is the simple fact that most computer systems are used to represent something other than themselves. This function has become more pronounced since the 1990s with the advent of the internet and multimedia systems, but it has always existed.
2.1 A little history
Semiotics is the science that studies signs and their function and use in society. Although
semiotics was never part of mainstream computer science, it has slowly gained popularity in
the last decade, and communities have begun to organise themselves. In 1996 a seminar was
arranged by Frieder Nake and others at Dagstuhl in Germany with the purpose of bringing
together people working with semiotics and informatics. Regular meetings termed “Organisational Semiotics Workshop” were begun in 1995 and the sixth workshop was held in Reading
in 2003. Other groups exists under other names, for example Semiotic Engineering and Computational Semiotics. The Language Action Perspective group is a close relative, since it
views computer systems as a means of communication.
However, none of these traditions is part of mainstream computer science, and a possible reason is simply that only in the last decades, with the advent of the WWW and multimedia technology, have computer systems begun to display unmistakable characteristics of being media. Until the 1980s computers were mostly viewed as machines (number crunchers) or tools. Although many computer scientists realised early that computers were very special machines and tools, being controlled by means of representations and producing representations as output, the analogy to other media was difficult to see at the time. When the media applications began to flourish, a need for dealing systematically with this function emerged, and semiotics was one possibility.
As a meta-science, semiotics is not a homogeneous discipline. However, one can distinguish between two main traditions, the American (C. S. Peirce, USA, 1839-1914) and the European one (Ferdinand de Saussure, Switzerland, 1857-1913). What unites the two traditions is the concept of signs. Although it is not possible to give a unique terminological framework, it is safe to say that signs are relational entities: a sign is something that stands for something else.
2.2 Two sign concepts
Going into further detail, however, the differences between the two traditions come to the fore. Whereas the European tradition emphasised signs as a social system, the American tradition focused on the (individual) use of signs. The Saussurean theory is a structural theory, whereas Peirce saw processes (called semiosis: the process of meaning-making) as the basic sign phenomenon.
A third important difference is that the European tradition was born of linguistics. Saussure
was originally motivated by his attempt to solve a descriptive problem in historical linguistics, and language has continued to be an important model in European semiotics. The
Peircean tradition developed as a philosophical inquiry, driven by the philosophical questions
of the 19th century. In this theory, language is merely one sign type among others. The well-known sign classes, index, icon and symbol, come from this tradition.
             Expression ("Data")                  Content ("Information")
Form         States and transformations           The conceptual distinctions
             used to signify something            made by the system
Substance    Data states and transformations      The topic of the system
Fig. 2.1. Hjelmslev’s elaboration of Saussure’s sign concept applied to computer based signs. Adapted from
Bøgh Andersen 1992.
Although the sign concepts of both traditions share the notion of something standing for something else, the concepts are elaborated differently. The European concept, here presented in the terms of Louis Hjelmslev (Denmark, 1899-1965; Fig. 2.1), was a social, structural concept that describes the way a linguistic community articulates its world (the substance of the sign). The two sides of the sign articulate the same substance, giving it a form, but according to two different principles. Researchers differ in the terms they use for these principles: signifier and signified in Saussure’s terms, expression and content in Hjelmslev’s theory. Hjelmslev further differentiates these principles of articulation into a form (distinctions) and a substance (the material in which distinctions are made). Thus, the content form describes how signs create meaning distinctions in some domain, whereas the expression form indicates how physical distinctions are used to signify something.
Fig. 2.1 shows a Hjelmslevian analysis of computer based signs. Content relates to information, expression to data. A clear distinction between these two categories is of great importance not only to the theoretical understanding of HCI, but to its practical development as well. The figure shows how traditional computer science concepts are classified according to the semiotic model. The formal and technical parts of information systems belong to the means of expression, whereas the system’s domain and its conceptualisation are content features. This basic understanding recurs in Stamper’s model given below.
In contrast to the two sides of the European sign, the Peircean sign includes three factors: the representamen (the phenomenon used to signify something), the object (the phenomenon referred to), and the interpretant (the sign produced by some interpreter as a reaction to the perception of the sign, which may stabilise as a rule for interpreting the sign). See Fig. 2.2.
Representamen: the user interface
Object: the type of the system, e.g. office system, CAD system, etc.
Interpretant: the conditions for use and evaluation
Fig. 2.2. The Peircean sign concept and its application to the HCI situation. After Nadin 1988: 58
The main difference between the two sign-concepts is the extension to a threefold relation
caused by the interpretant in the Peircean version (this way of putting the difference is not the
historical path Peirce took). The triadic concept of the sign relation is very useful in HCI,
since it gives an explicit handle on the user’s Interpretation and reaction to the interface phenomena he perceives. It is the interpretant that makes the model dynamic, since interpretants
are themselves signs that engender new Representations:
(1) Interpretant → Representation
Interface processes cause the user to produce new interface processes or to produce new non-computerized signs, such as verbal comments, Interpretations, or annotations on a piece of paper.
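Formula (1) can be illustrated with a small sketch: an Interpretation is itself a sign and can therefore serve as the Representation of the next sign in the chain. The toy model below is my own illustration, not code from any of the cited works; all class and method names are invented.

```java
public class Semiosis {
    // A sign relates a Representation (the sign-vehicle) to an Object.
    public record Sign(String representation, String object) {}

    // The Interpretant: a new sign produced as a reaction to a perceived sign,
    // here modelled as a verbal comment on the perceived representation.
    public static Sign interpret(Sign perceived) {
        return new Sign("comment on '" + perceived.representation() + "'",
                        perceived.object());
    }

    // Interpretant -> Representation: each Interpretation becomes the
    // Representation of the next sign, so semiosis can run indefinitely.
    public static Sign semiose(Sign start, int steps) {
        Sign s = start;
        for (int i = 0; i < steps; i++) {
            s = interpret(s);
        }
        return s;
    }

    public static void main(String[] args) {
        Sign icon = new Sign("trash-can icon", "the delete operation");
        System.out.println(semiose(icon, 2).representation());
    }
}
```

The loop makes the dynamic character of the interpretant explicit: the object stays fixed while the representations pile up as reactions to reactions.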
In the remaining part of the paper I shall use the term Interpretation instead of Interpretant, and Representation instead of Representamen. The reason is that Peirce’s homespun terms never caught on in a broader community, and they tend to make reading more difficult for non-initiates.
Both schools have had profound influence on other sciences. In the first half of the 20th
Century, the European structuralist school was taken as an ideal for many human and social
sciences, and in its latter half, Peirce’s thinking gained a considerable influence.
3 Areas of application
In the field of information technology, semiotics has been applied to two main areas, namely
interface design and information systems, although other areas have been discussed as well
(cf. the influences from Peirce in H. Zemanek’s (Zemanek 1966) and Saul Gorn’s (Gorn
1968) early discussion of programming languages).
Although semiotics has been used by several authors as an explicit theory, one may say
that mainstream HCI has used semiotic concepts more or less unconsciously. This is not unusual in computer science. In several cases computer science at some point in time borrowed
concepts from other disciplines, but soon forgot whence they came. This is evident e.g. in programming language theory, which borrowed concepts such as syntax, semantics, syntax trees, and reference from linguistics, but soon forgot the origins of the concepts.
I have claimed that computer systems are a natural subject of semiotics, since they are media that process representations, but to what degree does this assumption hold? After all, other researchers have suggested that a computer is an automaton or a tool.
In order to answer this question, it is convenient to introduce a distinction between the
technical structure of a computer and its function. Seen as a technical structure, the computer
is and remains a machine. In fact, its technical structure has remained surprisingly stable
throughout its lifetime. But, depending upon the software it runs, it may have different functions. If it is used for complicated calculations without human interference, its function is that
of an automaton. If it offers tools that users can use interactively to produce objects, it is a
tool. And if it is mainly used for presenting and disseminating information and entertainment
it is a medium.
Its function as a medium also means that the computer no longer lives only in workplaces and laboratories, but enters the realm of culture, which to a large extent consists in the production and reproduction of meanings by means of signs. It is this transformation that brings up the semiotic stance.
As mentioned above, semiotics has been used in two main domains, interface design/aesthetics and organisational theory. In the following I review some of the models produced in these areas.
3.1 Interface design and aesthetics
Professor Mihai Nadin was among the first to apply Peircean semiotics to the analysis of computer interfaces, and probably also among the first to develop practical uses of it. His main contribution lies in the area of art and design, and is a good example of how semiotics can fruitfully bridge the cultural gap between the evolving computer medium and aesthetic traditions (Nadin 1995). In addition to a large body of theoretical work, Nadin has been active as a consultant and thus combines theory and practice. Fig. 3.1 shows an example of the interest
semiotics takes in analyzing and assessing the codes of the interface. In the example, interface signs are classified along two dimensions, artificiality (formal/natural) and “modality”
(verbal/visual).
[Figure: interface signs classified in a two-by-two grid along the dimensions formal/natural and verbal/visual.]
Fig. 3.1. Use of semiotics in interface design. From Nadin 1988: 299.
Professor Clarisse Sieckenius de Souza (Souza 1993, 1999) has published several papers arguing that semiotics can provide more general and better motivated formulations of the many
interface guidelines littering the domain of HCI. In particular, she has successfully used
semiotics to address the topic of tailorability. One of her main points is that many interface signs must be analyzed as one-way communication from designer to user. The designer creates the interface signs in order to support the user in using the system:
Semiotic Engineering was motivated by the idea of bringing interactive software designers onto the
HCI stage, as the creators of communicative artifacts of a peculiar nature. In our view, interactive
computer applications are one-shot messages, sent from system designers to system users, about how
to interact with the system in order to achieve a certain range of goals. Interaction is achieved when
users and system exchange messages in a uniquely engineered interface language. The goal of Semiotic Engineering is to characterize HCI in semiotic terms, and to provide a series of models, methods,
tools and techniques that will help us understand and build better interactive artifacts.
Semiotic engineering and research group, 15/08/03. http://www.serg.inf.puc-rio.br/research_topic.php?
Fig. 3.2 generalizes her observations a bit. It displays the system as a normal channel of communication, where users 1-3 can have two-way communication, as we know it from e-mail. In this case the messages are only lightly processed by the computer, which mainly works as a post office. However, in other cases the messages from users 1 and 2 may be processed before they reach user 3; for example, they may be stored in a database, and user 3 uses a query language to fetch parts of these messages, possibly combined with hundreds of other messages from other users. The output user 3 gets may be quite different from what users 1 and 2 put in. For example, they input some numerical data, and user 3 receives a graph. In everyday supermarkets we see an even more radical version: when the customer puts a carton of milk in his shopping trolley, this causes new milk to be ordered from the dairy the next day via the bar-code reader, the inventory control system, and the EDI system! This is a mode of communication characteristic of the computer: there are many senders, their inputs are transformed in complicated ways, and the output’s modality and layout differ fundamentally from the input.
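The transforming character of this channel can be sketched in a few lines of code. This is a toy illustration of the point, under my own assumptions (a "database" collecting numeric inputs, and a "graph" rendered as a bar of # characters); nothing here comes from de Souza's work.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the computer as a transforming channel: many senders put in
// numerical messages; the receiver gets a differently-shaped output (a
// textual "graph" of the sum), not the raw inputs.
public class Channel {
    private final List<Integer> store = new ArrayList<>();  // the "database"

    // Users 1, 2, ... input numerical data.
    public void send(int sender, int value) {
        store.add(value);
    }

    // User 3's "query": the output modality differs fundamentally
    // from the input - numbers go in, a bar chart comes out.
    public String query() {
        int sum = 0;
        for (int v : store) sum += v;
        return "#".repeat(Math.max(0, sum));
    }

    public static void main(String[] args) {
        Channel c = new Channel();
        c.send(1, 2);               // user 1 inputs the number 2
        c.send(2, 3);               // user 2 inputs the number 3
        System.out.println(c.query());  // user 3 sees a bar, not the numbers
    }
}
```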
In addition, Fig. 3.2 shows the designer’s role. On the one hand, his interface signs are always mixed with the other messages, and, on the other hand, the designer designs the channel
of communication itself, determining which kinds of messages may flow through it, which
transformations they undergo, and which expression they may be given.
[Figure: the designer produces the systems design and the interface signs; users 1 and 2 send messages into the system (data processing), and the system delivers messages to user 3.]
Fig. 3.2. Computer systems as a channel of communication.
Professor Frieder Nake has mainly worked in the domain of interactive graphics (Nake
1994a) and has made major contributions to the development of computer art.
[Figure: a signal process and a sign process running concurrently.]
Fig. 3.3. Signal and sign processes.
One of his theoretical contributions has been to incorporate the mechanical aspects of computers in a semiotic framework. According to Nake 1994b, two kinds of processes, taking
place concurrently, determine the situation of HCI: a signal process inside the machine based
on causes and effects, and a sign-process among the users, based on free interaction. The two
processes have to be coupled in HCI. The last part of this paper is an attempt to place this insight of Frieder’s within semiotic theory.
Professor Peter Bøgh Andersen’s doctoral dissertation from 1990/1997 contains a systematic analysis of interfaces as sign-complexes. It uses classical structuralism to capture the
unique characteristics of computer-based signs, their syntax and semantics. Fig. 3.4 shows a
classification of computer-based signs from Andersen 1997.
[Figure: a table classifying computer based signs by four binary features (±permanent, ±transient, ±handling, ±action) into the types: interactive sign, actor, object, button, controller, layout, and ghost.]
Fig. 3.4. Typology of computer based signs. After Andersen 1997: 216.
It uses four different dimensions of classification: permanent features, transient features, handling features, and action features. Typical interactive signs, such as a cursor or an “avatar”, possess all four properties: they have a stable shape so they can be recognised; they have properties that can change (e.g. their position); they can be handled by the user (e.g. moved); and they act on other signs (e.g. open or close documents).
The classification shows how computer based signs are similar to and differ from traditional types of signs: pictures possess permanent features (“graphical composition”), movies
add transient features (“montage”), but the handling features (interface objects can be handled
by the user) and the action features (an interface object can influence another object) are
unique to the computer medium.
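The typology can be sketched as a feature structure. The sketch below only encodes what the text states: interactive signs carry all four features, pictures only permanent features, and movies permanent plus transient features; the class and constant names are mine.

```java
// Sketch of the four-feature classification of computer based signs.
public class SignTypology {
    // The four classification dimensions of Fig. 3.4.
    // ("transient" is a Java keyword, hence the trailing underscore.)
    public record Features(boolean permanent, boolean transient_,
                           boolean handling, boolean action) {}

    // The typical interactive sign (cursor, avatar) possesses all four.
    public static boolean isInteractive(Features f) {
        return f.permanent() && f.transient_() && f.handling() && f.action();
    }

    // Traditional media for comparison, as described in the text:
    public static final Features PICTURE = new Features(true, false, false, false);
    public static final Features MOVIE   = new Features(true, true,  false, false);
    public static final Features CURSOR  = new Features(true, true,  true,  true);

    public static void main(String[] args) {
        System.out.println(isInteractive(CURSOR));  // a cursor is interactive
        System.out.println(isInteractive(MOVIE));   // a movie lacks handling and action
    }
}
```

Handling and action are the two features a picture or movie can never acquire, which is exactly the claim that they are unique to the computer medium.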
Andersen’s action features describe the same phenomenon as Nake’s signal process. Nake has suggested the name algorithmic sign as a label for this double nature of the computer based sign: on the one hand a normal sign based on intentionality, on the other hand a mechanical process relying on causes and effects. The algorithmic sign, or the computer artefact viewed as a sign, appears as a phenomenon of a new kind: it is a description that can perform its own contents. The algorithmic sign shares with all other signs the property of being something standing for something else and being interpreted by humans. But in addition the algorithmic sign can become active: it can be set in motion, can run, be executed. Although based on a static description, it is realised as a dynamic process.
3.2 Organisational semiotics
Application of semiotics to the design of information systems for organisations was suggested early by Professor Ronald Stamper. Over the years he has gathered a group of people that combines theory development with practical work in organisations. Professor Kecheng Liu, originally a student of Stamper’s, has done impressive work in realizing Stamper’s ideas, and in developing them further in original ways. He has also published the first comprehensive book on organizational semiotics (Liu 2000).
Fig. 3.5 displays organisational information processes from a semiotic viewpoint rather
close to that depicted in Fig. 2.1.
Major level   Minor level      Definition
Content       Social world     Beliefs, expectations, commitments, contracts, law, culture
              Pragmatics       Intentions, communications, conversations, negotiations
              Semantics        Meanings, propositions, validity, truth, signification, denotations
Expression    Syntactics       Formal structure, language, logic, data, records, deduction, software, files
              Empirics         Pattern, variety, noise, entropy, channel capacity, redundancy, efficiency, codes
              Physical world   Signals, traces, physical distinctions, hardware, component density, speed, economics
Fig. 3.5. Organisations seen as semiosis. After Stamper 1992: 24.
Information systems and their context are described at two main levels, corresponding to the expression and content sides of the sign. Each level is divided into three minor levels, yielding a total of six levels. In the “inner” levels (syntactics and semantics) we recognize Hjelmslev’s form aspect, whereas the outer levels (the social and physical worlds) correspond to his substance concept.
The higher levels presuppose the lower ones, the lower ones being means for ends defined
at the higher levels. For this reason, organizational semiotics considers it more important and
more difficult to produce a good analysis of the content level. The expression level is only
there to support the contents. Another point is that the informal part of the organization’s information system is much larger than its formal part, and that the IT system is only a small
part of the formal information system.
Fig. 3.5 is rather abstract, so in order to ease understanding I present a concrete example, namely the introduction of EDI, electronic data interchange.
EDI enables automatic ordering, buying, and payment. Before anything else, we need to
install hardware and connect it by means of nets through which physical signals can travel
(physical world). After we have succeeded in making signals travel from one computer to another, we must ensure reliable transmission, for example removing the negative effects of
noise by means of redundancy (empirics). Having ensured reliable transmission of identifiable signals, we must fix a protocol of communication between computers, that is: we must
design the formal rules of communication (syntactics).
Next we must agree on the meaning of the messages (semantics): e.g. what does it mean to
sell or buy something? What is a customer? What is a payment? When we can correctly send
an order from one machine to another and know what it means, the next task is to align the
working procedures of the buyer and seller to the new electronic market (pragmatics). And
finally, we are curious to learn what changes in our organisations will follow. For example, will the closer co-operation cause employees to feel that they are really working in one, not two, organisations (social world)?
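The layered EDI walk-through above can be sketched as an ordered enumeration, where the ordering encodes the claim that higher levels presuppose the lower ones. This is my own illustrative encoding of Fig. 3.5, not part of Stamper's framework.

```java
// Stamper's six semiotic levels, bottom-up, each paired with the
// corresponding step in the EDI example from the text.
public class SemioticLadder {
    public enum Level {
        PHYSICAL_WORLD("install hardware and nets for physical signals"),
        EMPIRICS("ensure reliable transmission, e.g. redundancy against noise"),
        SYNTACTICS("fix a formal protocol of communication between computers"),
        SEMANTICS("agree on the meaning of messages: order, customer, payment"),
        PRAGMATICS("align the working procedures of buyer and seller"),
        SOCIAL_WORLD("observe organisational change, e.g. closer co-operation");

        public final String ediStep;
        Level(String ediStep) { this.ediStep = ediStep; }

        // A level presupposes every level below it: the lower levels are
        // means for ends defined at the higher levels.
        public boolean presupposes(Level other) {
            return ordinal() > other.ordinal();
        }
    }

    public static void main(String[] args) {
        for (Level l : Level.values()) {
            System.out.println(l + ": " + l.ediStep);
        }
    }
}
```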
Professor René Jorna and Barend van Heusden (Jorna, Van Heusden, Posner 1993) have
addressed organisations and their usage of signs under two main headings: decision support,
and creation and conversions of knowledge in organisations. The latter research interest aims
at empirically determining factors that make organisations more innovative.
Like Stamper, they endorse a constructivist conception of organisations: organisations cannot be defined physically, but only by means of shared representations. Put differently, organisations are what we describe them to be.
With a background in AI and psychology, René Jorna has suggested a broader definition of
semiotics than traditionally accepted: semiotics is the study of all sorts of sign processes in
communication and exchange of knowledge, in the sense of data, between and inside information processing systems, such as humans, other organisms and machines.
This broader conception of signs is different from the one advocated by Nake, Stamper and
Andersen who distinguish sharply between sign usage by humans, and signal-processing in
machines. The difference of opinion is a reflection of a broader controversy within semiotics
itself: is sign-behaviour a unique characteristic of the human species that is qualitatively different from other types of animal behaviour, or is it merely a position on a gradual scale of
behaviour, ranging from biochemical processes (e.g. the function of the genome and the immune system, as suggested by some biologists) to complex meaning-creation? The problem
with the former assumption is that it prevents us from giving an evolutionary account of
semiosis: semiosis seems to pop out of thin air. The latter position encourages an evolutionary explanation, but runs the risk of making the notion of signs so broad that it becomes useless.
P. Bøgh Andersen and B. Holmqvist (Holmqvist & Andersen 1991) have also conducted
research in organisational communication. In a number of empirical projects, they have described the interplay between work tasks, communication and usage of information technology.
The basic assumption in the projects has been that information technology is a medium on
a par with traditional media (oral and written communication; diagrams; pictures, etc.). From
this point of view, many processes normally understood as data storage and retrieval are really communicative processes, cf. de Souza and Fig. 3.2. For example, a database is not merely a store of information in the form of data, but also a channel of communication. This means that it must conform to the normal principles of communication by providing a context for Interpretation. A patient database in a hospital ought to indicate the sender of the information (knowing the responsible doctor is important for interpretation) and possibly an email or phone connection that enables the interpreting doctor or nurse to enter easily into the kind of negotiation that is so frequent in communication. Similar ideas are found in Stamper’s work.
3.3 Technical processes
Frieder Nake’s point, that algorithmic signs are double and incorporate a causal signal process, implies that semiotics should develop an understanding of the technical aspects of computer systems, in so far as they are relevant to semiosis. This is in accordance with the sign concept, since it integrates a physical side (the expression) as well as a psychic side (the content) in its very foundation, so semiotics can talk about Representations (the algorithms and data structures) (Gorn 1968, Andersen, Hasle & Brandt 1997, Piotrowski 1993) as well as the user’s Interpretation of these Representations in a systematic way. Thus, the interplay of the physical and the psychic is integrated deeply in semiotics, because it is where meaning is created. This will be a major issue in Section 4, where a synthesis is attempted.
The Computational Semiotics effort (main figure Burghard B. Rieger) uses semiotics as a
theoretical framework for technical systems design. However, I shall not discuss this effort
here for the very good reason that I know too little about it and do not understand it properly.
[Figure: the program text is the Representation (R); its Object (O) is the I/O-function and the execution sequences; it has two Interpretations (I): the operational semantics, and the compiler & run time system.]
Fig. 3.6. Causal and Intentional Interpretations. From Andersen, Hasle & Brandt 1997.
The double nature of the algorithmic sign can be captured in a Peircean semiotics by claiming
that a program text is a Representation that denotes the I/O-functions and the execution sequence of a machine, but has two different Interpretations, an intentional and a causal one.
The intentional one is written by the language designer in the form of a formal semantics of
the language, e.g. an operational semantics specifying which actions some virtual machine
should take when running the program; the causal one is given by the compiler and runtime
system that implements the language and actually runs the program. See Fig. 3.6. This analysis requires us to accept that it makes sense to talk about a causal Interpretation. One uncontroversial definition could be a mechanical process that replaces a previous intentional Interpretation for routine functions. Mechanical Interpretations can in fact be seen as “crystallized” habits in a number of cases, but what about mechanical devices that do not replace
previous manual operations?
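The two Interpretations of one program text can be sketched for a toy language consisting of the single expression "a + b". The intentional Interpretation is written out as a naive operational rule; the causal one delegates to the host machine's addition. Both names and the example language are my own invention.

```java
// One Representation ("a + b"), two Interpretations that should agree on the
// denoted I/O-function: a spelled-out "operational semantics" and a "runtime"
// that realises "+" directly as the machine's addition instruction.
public class TwoInterpretations {
    // Intentional Interpretation: the language designer's operational rule.
    // Deliberately naive: "+" means "increment a, b times" (b >= 0 assumed).
    public static int specSemantics(int a, int b) {
        int result = a;
        for (int i = 0; i < b; i++) result++;
        return result;
    }

    // Causal Interpretation: the compiler/runtime delegates to the
    // machine's own addition.
    public static int runtime(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // The two Interpretations coincide on the I/O-function they denote.
        System.out.println(specSemantics(2, 3) == runtime(2, 3));
    }
}
```

Correctness of an implementation is precisely the demand that the causal Interpretation agree with the intentional one on every input.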
A closer scrutiny of the sign complexes used during development and use of computer systems reveals a rich harvest: Fig. 3.7, adapted from Halskov Madsen 1996, includes at least five identifiable sign complexes in a bank system (R = Representation, O = Object, I = Interpretation):
(1) domain words like “customer” are used by the staff to denote the real customers;
(2) the designer starts system development by representing the domain words as formal specifications;
(3) the programmer transforms the specifications into a program text that denotes a system and its executions;
(4) the system contains objects that are taken by the programmer to mean properties and processes of the interface;
(5) the interface is taken by the staff to refer to events and properties of the customers.
Fig. 3.7 describes a coherent sign complex by letting the same entity play more than one semiotic role. Typically, an entity functions as an Object in one sign and as a Representation in another. Thus, the domain word “customer” is the Object of a formal Representation but is itself a Representation that denotes the real customers. Similarly, the system is the Object of the program, but also a Representation that denotes properties and processes of the interface, in addition to referring to the real customers. This classifies program texts as meta signs that refer to other signs that refer to other signs…
The program itself is in fact full of internal meta signs referring to other pieces of the program: for example, a method invocation is a meta sign that refers to the declaration of the method, in the very concrete sense that the invocation causes the instantiated declaration to replace the invocation and be executed. An object-oriented program is a huge structure of definitions referring to other definitions referring to other definitions.
[Figure: five linked sign triads in a bank system. (1) The staff’s domain sign: R1 = the word “customer”, O1 = a real customer, I1 = the staff’s interpretation of domain words. (2) The designer’s sign: R2 = the formal specification “customer: (# name: address: #)”, O2 = the domain word, I2 = the designer’s translation of domain words into formal specification. (3) The programmer’s sign: R3 = the program text, O3 = the system, I3 = the programmer’s interpretation in terms of the programming language. (4) The system as a sign for the interface: R4 = the system, O4 = interface fields such as Name: Smith, Address: 317 Beach Dr., I4 = the programmer’s interpretation of the system in terms of the interface. (5) The interface as a sign for the domain: R5 = the interface, O5 = the real customer, I5 = the staff’s interpretation of the interface in terms of real customers.]
Fig. 3.7. Sign complexes in systems design and use. Adapted from Halskov Madsen 1996.
Fig. 3.7 does not exhaust the actual semioses. For example, if the system behaves unexpectedly, the user cannot help guessing what is wrong, which means that he will use the interface as a Representation of the internal parts of the system, reversing ROI4. Also, the programmer will probably not only view his program as a prescription for the technical system’s behavior; parts of it can also be interpreted as general statements about the domain, in this case the bank. Class customer, for example, can be read as asserting that, in this domain, customers will always have a name and an address. Possibly, this is a simple case of metonymy, as when we use the word “crown” instead of the king: the configuration “crown” (R) − crown (O=R) − king (O) becomes “crown” (R) − king (O). In the case of computer systems, Program (R) − Execution (O=R) − Domain objects (O) becomes Program (R) − Domain events (O).
Large well-structured systems can in fact be partitioned into sections, each with a specific
domain and rule of interpretation: some parts refer to the physical parts of the system, other
parts to computer science concepts like lists, queues, and stacks, and still others to processes
and objects in the domain. The ability to write code that can be interpreted as assertions about
the domain was in fact a major impetus for developing higher-level languages, since assembler code was bound to a machine-near Interpretation. Such changes of Interpretation can be
accomplished by the special configuration of computer based signs shown in Fig. 3.8.
[Figure: the use of a method (R1) and the declaration of the method (R2-3) both receive a domain-language Interpretation (I1, I2) with domain objects as their Object (O1-2); the declaration additionally receives a programming-language Interpretation (I3, I4) whose Object is the formal entities (O3 = R4, O4).]
Fig. 3.8. Sign configuration supporting change of domain.
We want to build an inventory control system, and important domain concepts are the warehouse and the entering and taking out of goods. Thus, we want to write something like Code 3.1: if the warehouse holds goods, they can be taken out.
If theWareHouse.holds(theGoods) then theWareHouse.takesOut (theGoods)
Code 3.1. Example of R1.
This cannot be executed immediately, since the system cannot manipulate warehouses and
goods, only Representations. So code 3.1 must be translated into transformations of Representations. This is the job of the translator class R2 shown in Code 3.2.
public class WareHouse {
  class Goods extends ListContent {}
  public void stores(Goods g) { aList.insert(g); }
  public boolean holds(Goods g) { return aList.hasMember(g); }
  public void takesOut(Goods g) { aList.removes(g); }
  private List aList;
}
Code 3.2. Example of R2-3.
The special feature of R2-3 is that it is ambiguous, having two Interpretations and two Objects. If we only read the italicised names of the methods and variables, it can be taken to refer to the warehouse (O1-2): it can hold goods, and goods can be taken out and stored. But if we read the implementations of the methods, they refer to a different world containing formal elements like lists and members of lists (O3 = R4). This is the third ingredient in the recipe:
class List {
  public class Element { ListContent content; Element successor; }
  …
  public void removes(ListContent lc) {
    if (first != null) {
      if (first.content == lc) first = first.successor;
      ...
    }
  }
  …
}
Code 3.3. Example of R4.
Class List has moved into a completely different world consisting of lists, list elements, list
contents and successors.
The example shows that three processes are involved in changing the interpretation from formal objects to warehouses.
1. Mapping. Ensure that there is a homomorphic mapping ϕ: Interpretation2 → Interpretation3 such that ϕ{Operation2(Operands2)} = ϕ{Operation2}(ϕ{Operands2}), meaning that the mapping of the result of applying a formal operation to a formal operand should equal the result of applying the mapping of the operation to the mapping of the operand.
2. Encapsulation. Hide the parts that can only be interpreted as manipulation of Representations.
3. Vocabulary. Choose words that are identical to words that already have a domain Interpretation.
Class WareHouse definitely fulfils (1). Executing the formal expression “aWareHouse.stores(someGoods)” leads to a formal situation where “aWareHouse.holds(someGoods)” evaluates to true, and the real world mapping of this state, ϕ{“aWareHouse.holds(someGoods)”}, is that the warehouse holds the goods in the real world. On the other hand, if we use the following Interpretations, ϕ{“stores”} = storing real goods by means of a truck, ϕ{“someGoods”} = real goods, and ϕ{“aWareHouse”} = a real warehouse, then the data process “aWareHouse.stores(someGoods)” maps to the real world event of using a truck to store goods in a warehouse, which also leads to a state where the warehouse holds the goods.
(2) All the formal concepts are hidden inside the List class. The only thing remaining is the
translation process described in the WareHouse methods. For example, “public void
takesOut(Goods g){aList.removes(g)}” means that an invocation of “takesOut” should be replaced by an invocation of “aList.removes(g)”. The List itself is declared as private, and this
means that classes using WareHouse cannot see it.
(3) There is a preexisting Interpretation that gives meaning to the words in the WareHouse
class.
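The Mapping requirement (1) can be illustrated with a small, self-contained sketch. The classes FormalWareHouse and RealWareHouse, the goods identifier, and the implementation of ϕ as a Java function are illustrative assumptions, not part of the original example; ϕ is modelled as a function from the formal list state to a "real world" modelled as a set:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;

// Formal side (R): a warehouse represented by a list of goods identifiers.
class FormalWareHouse {
    private final ArrayList<String> aList = new ArrayList<>();
    public void stores(String g) { aList.add(g); }
    public boolean holds(String g) { return aList.contains(g); }
    public void takesOut(String g) { aList.remove(g); }
    public ArrayList<String> state() { return aList; }
}

// Domain side (O): the "real" warehouse, here modelled as a set of goods.
class RealWareHouse {
    private final Set<String> goods = new HashSet<>();
    public void store(String g) { goods.add(g); }
    public boolean holds(String g) { return goods.contains(g); }
}

public class MappingCheck {
    // phi: maps a formal state to a domain state.
    static RealWareHouse phi(FormalWareHouse w) {
        RealWareHouse r = new RealWareHouse();
        for (String g : w.state()) r.store(g);
        return r;
    }

    public static void main(String[] args) {
        // phi{stores}(phi{state}): perform the domain operation on the mapped state.
        FormalWareHouse formal = new FormalWareHouse();
        RealWareHouse viaDomain = phi(formal);
        viaDomain.store("crate-42");

        // phi{stores(state)}: perform the formal operation, then map.
        formal.stores("crate-42");
        RealWareHouse viaFormal = phi(formal);

        // The two routes agree: the mapping commutes with the operation.
        System.out.println(viaFormal.holds("crate-42") == viaDomain.holds("crate-42")); // prints true
    }
}
```

The point of the sketch is only that the two routes through the diagram, formal operation followed by ϕ and ϕ followed by the corresponding domain operation, end in the same domain state.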
The last technical domain that has been analyzed as signs is design patterns, which are discussed by Noble & Biddle (2001):
An object-oriented design pattern is a “description of communicating objects and classes that are
customized to solve a general design problem in a particular context”
Noble & Biddle 2001: 1
The purpose of design patterns is to re-use experience collected in past software development; why reinvent the wheel when someone else has already done it? But design patterns are
also a semiotic phenomenon that is required to have a meaning that can be expressed in a few
sentences, the intent of the pattern (Noble & Biddle 2001). Good solutions are in this way
enabled to enter the conversation of the developer community (Grand 1998: 1), but patterns
can also be used as a guide for interpreting programs (Noble & Biddle 2001: 8). Thus, design
patterns seem to play the role of Interpretation when we analyze, discuss and design complex
systems.
The idea is that knowledge of patterns makes it easier to understand the intent of a piece of
program, and, since patterns have names, it also becomes easier to discuss the program with
colleagues. According to Noble & Biddle 2001, patterns form a two-layered sign: (1) the
name of the pattern refers to the pattern description, but the pattern description is itself a new
sign (2) where the solution, e.g. in the form of a UML diagram, stands for its intent and context of use (Fig. 3.9).
[Figure: the pattern sign: Expression = name, Content = the pattern description; the description is itself a sign with Expression = solution and Content = intent.]
Fig. 3.9. Design patterns as two-layered signs
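The two layers can be mimicked in a small data structure; the catalogue, the Observer entry, and the wording of its solution and intent below are purely illustrative stand-ins, not Noble & Biddle's formalization:

```java
import java.util.HashMap;
import java.util.Map;

// Second layer: the pattern description is itself a sign whose
// expression is the solution and whose content is the intent.
class PatternDescription {
    final String solution; // e.g. a UML sketch, here just text
    final String intent;
    PatternDescription(String solution, String intent) {
        this.solution = solution;
        this.intent = intent;
    }
}

public class PatternSigns {
    // First layer: the pattern name refers to the description.
    static final Map<String, PatternDescription> catalogue = new HashMap<>();
    static {
        catalogue.put("Observer", new PatternDescription(
            "Subject --notifies--> Observer*",
            "Define a one-to-many dependency so that when one object changes state, its dependents are notified."));
    }

    public static void main(String[] args) {
        // Interpreting the name twice: name -> description -> intent.
        PatternDescription d = catalogue.get("Observer");
        System.out.println(d.intent);
    }
}
```

Looking up the name yields the description, and reading the description yields the intent: two semiotic steps, one per layer.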
4
Synthesis
The purpose of this last section is to set up a framework that can accommodate most, but not
all, of the different models described above. It must fulfil the following requirements:
1. The sign concept is relational (Saussure, Hjelmslev, Peirce).
2. Signs articulate both content and expression (Hjelmslev, Fig. 2.1).
3. The interface is a Representation (Nadin, Fig. 2.2).
4. Like all other media, computer based signs can be part of aesthetic activities (Nake).
5. Computer based signs are unique in exploiting handling and action features (Andersen, Fig. 3.4) and give rise to unique types of communication patterns (de Souza, Fig. 3.2).
6. Mechanical causal processes are intertwined with semiosis (Nake, Fig. 3.3).
7. Mechanical analogues to semiosis occur during use (Andersen, Hasle & Brandt, Fig. 3.6).
8. Semiosis occurs not only during use but also during design (Madsen, Fig. 3.7; Noble & Biddle, Fig. 3.9).
9. Computer based signs are a minor part of the total information system and should support business processes (Stamper, Fig. 3.5).
The framework proposed is a very simple one (Bødker & Andersen draft). It combines two
triangles, the semiotic triangle from Fig. 2.2 and the activity theory triangle which simply
says that human activity consists of a Subject manipulating some Object mediated by some
Mediator, often exemplified as a tool. In many (but not all) cases, the filler of the Mediator
role in activity theory is also a filler of the Representation role in a sign. For example, a
steering wheel and its transmission mechanism is a Mediator between the driver (the Subject of the driving activity) and the front wheels, but it is simultaneously a Representation of the position of these wheels to the driver. Other combinations can be observed, but I shall disregard them here: for example, the Mediator can be a Representation of the Subject (big car, big man), or the work Object can be a Representation of the Mediator (a harrowed field is a sign
of the harrow). This quadrilateral sign covers both instrumental usage that aims at changing a
work object and semiotic usage that aims at changing or stabilizing Interpretations. The difference between instrumental and semiotic processes is a gradual one in the model; in the
former, the Mediator-Object relation dominates, in the latter the Mediator – Interpretation
relation dominates. The model thus predicts many intermediate forms between instrumental
and semiotic activities. Consider for example a change of course of a ship. On the face of it, this is a purely instrumental activity, but good seamanship requires that the manoeuvre is not too smooth, so that surrounding ships can see that the vessel is changing course. Thus, the manoeuvre should be staged in such a way that it is in fact interpreted as a course change.
A number of empirical predictions follow from the model, for example that all instrumental
actions involve Interpretation (Bødker & Andersen draft; Andersen 2003a,b), and that instrumental and semiotic actions can gradually morph into one another. This is in fact true, as
the systematic introduction of self-service shows. In many of these cases, an instrumental action (using the cash dispenser) replaces a communicative one (asking the cashier for the
money). This model fulfils requirements 1, 3 and 5, and captures both the tool and media aspect of computer systems.
[Figure: Subject; Mediator/Representation; Interpretation; Object.]
Fig. 4.1. The quadrilateral sign.
[Figure: Subject: operator; Mediator/Representation: controls, displays, actuators and sensors; Object: work object.]
Fig. 4.2. Instrumental mediation.
[Figure: Subjects: interlocutors; Mediator/Representation: utterances; Object: theme of conversation.]
Fig. 4.3. Semiotic mediation.
The model also predicts analogies between instrumental and semiotic processes. For example,
in both cases, the Mediator may depend upon the Object, or the Object may depend upon the
Mediator. In the instrumental case, this distinguishes displays/sensors from controls/actuators, and in the semiotic case assertions are distinguished from performatives that
can change their Object if it is a social relation.
As a Representation of the Interpretation, I suggest the theory of thematic roles (Fillmore
1968, 1977). It is based on the notion of events, actions (= events caused deliberately by a
conscious Agent), and participants. Events have a limited number of participants that can
play a limited number of roles, such as Agent, Experiencer, Theme, Instrument, Beneficiary,
etc. (see Andersen 2003a,b and Jurafsky & Martin 2000) in relation to the event. The roles
are relational (cf. Requirement 1); for example, in the event I (Agent) wrote this paper
(Theme) by means of Microsoft Word (Instrument), “I” functions as the Agent in relation to the
paper and the software, and the software functions as an Instrument. In other events, “I” and
“Microsoft Word” can play other roles; for example, in MS Word irritates me, “MS Word”
plays the role of Cause and “I” the role of Experiencer. Roles can be filled with different
kinds of entities and there are restrictions on these fillers. For example, Agents and Experiencers can only have conscious fillers, whereas Causes can be almost any causal force, including machinery and natural forces.
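The role structure just described can be sketched as a small data model. The classes, the consciousness flag, and the exception below are illustrative assumptions for this paper's examples, not Fillmore's formalization:

```java
import java.util.EnumMap;
import java.util.Map;

enum Role { AGENT, EXPERIENCER, THEME, INSTRUMENT, BENEFICIARY, CAUSE }

// A participant that may or may not be a conscious being.
class Participant {
    final String name;
    final boolean conscious;
    Participant(String name, boolean conscious) { this.name = name; this.conscious = conscious; }
}

// An event assigns participants to roles, enforcing filler restrictions.
class Event {
    final String predicate;
    final Map<Role, Participant> roles = new EnumMap<>(Role.class);
    Event(String predicate) { this.predicate = predicate; }

    void fill(Role role, Participant p) {
        // Agents and Experiencers can only have conscious fillers.
        if ((role == Role.AGENT || role == Role.EXPERIENCER) && !p.conscious)
            throw new IllegalArgumentException(p.name + " cannot fill " + role);
        roles.put(role, p);
    }
}

public class ThematicRoles {
    public static void main(String[] args) {
        Participant i = new Participant("I", true);
        Participant word = new Participant("Microsoft Word", false);

        Event write = new Event("write");
        write.fill(Role.AGENT, i);          // "I wrote this paper ..."
        write.fill(Role.INSTRUMENT, word);  // "... by means of Microsoft Word"

        Event irritate = new Event("irritate");
        irritate.fill(Role.CAUSE, word);      // "MS Word irritates me":
        irritate.fill(Role.EXPERIENCER, i);   // same fillers, different roles
    }
}
```

The same fillers occupy different roles in the two events, which is exactly the relational character required by Requirement 1, and the filler restriction on Agent and Experiencer is checked at the point of role assignment.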
One way to specify an information system is to describe the intended Interpretations of the
system. In agreement with requirement 9, we want to describe the system components in relation to the work they are used in. Fig. 4.4 shows an example from a project on maritime automation (website http://www.cs.auc.dk /~pba/ElasticSystems). In the task of following the
track of the voyage plan, the officer can be the Agent, but the V(oyage) Management System
can alternatively function as the Cause. The Theme of the action is the track, and the Instruments used include autopilot, main engine, and various displays and sensors. In Fig. 4.4 the
work processes provide the main structure of the description, and technology is plugged in
with a clear indication of its function.
Just as we need causal Interpretations (see Section 3.3), we also need mechanical versions
of the Agent role, and I have followed normal case-theory using the role of Cause for this
purpose. However, we need to specify in which contexts Agents and Causes are treated as
different. In our data, automation and crew were treated alike in the verbal utterances of the
crew, so that they could say “The captain sails the ship” and “The VMS system sails the
ship”, both occupying the position of grammatical subject. However, in the actions, the two
were treated very differently, automation being used in simple manoeuvring situations, and
manual operation in the difficult ones. This is a strong motivation for having two roles. The
conclusion is that the same Interpretation may be realized differently, depending upon
whether it is converted into speech (by means of linking rules; Van Valin & LaPolla 1997) or is
converted into instrumental action.
[Figure: four nested role structures:
1. Agent; Action: plans; Theme: voyage; Instrument: VMS.
2. Agent/Cause; Action: follows; Theme: track; Instruments: main engine, Course Cmd Display, Course Display, GPS1 or GPS2, autopilot.
3. Agent/Cause: officer; Action: sets and maintains; Theme: course; Instruments: Rudder Angle Display, Gyro1 or Gyro2, servo machine.
4. Agent/Cause; Action: sets; Theme: rudder angle; Instrument: rudder machine.]
Fig. 4.4. Steering system of ship
Fig. 4.5 captures this difference and turns the question of the roles of humans and
machines from a philosophical into an empirical issue: on the one hand, the question is now
about which roles humans and machines can fill (Agent, Cause, Theme, Beneficiary); on the
other hand, if explicit linking and execution rules are given, mapping role structures into observable verbal and instrumental behaviour, then empirical investigations are relevant to the
question. The answers produced in this way will have the usual muddled nature of empirical
research: in our maritime example, the answer was that the Cause/Agent distinction tends to
disappear in verbal behaviour, but is maintained in instrumental behaviour.
[Figure: Subjects; Mediator/Representation; Interpretation; Object; linking rules connect the Interpretation to verbal Representations, execution rules to instrumental ones.]
Fig. 4.5. Verbalizing and executing actions.
[Figure: Subjects; shared Representation; individual Interpretations; Object.]
Fig. 4.6. Communication as a perturbation between a social system of Representations and individual systems of Interpretations.
Let us now turn to the problem of the autonomy of sign-systems underlying requirements 2
and 4.
The European model claims that sign systems articulate our world in the sense that they
simultaneously provide the means of expression and the categories we use to understand our
world (requirement 2). They stress the super-individual nature of sign systems and the limited
extent to which the individual can change them: they were here before we were born, and stay
on when we die. This autonomy is in my opinion related to the aesthetic experience (requirement 4). Aesthetics consists in experimenting with a sign-system, coaxing it to yield new responses and insights. Both requirements presuppose that sign-systems have more autonomy
and are less controllable than tools, otherwise there would be no reason for experiment and
surprise, and we could just read the manual/grammar.
In the traditional conveyor tube model of communication (Fiske 1990), sign-systems are
passive conduits we use to impress our thoughts onto the minds of our interlocutors, but this
idea leaves too many empirical observations unexplained (for arguments, see Andersen
2002). Instead I suggest that we view communicative processes as a co-construction of texts
(Wells 2002): the model is shown in Fig. 4.6 that is a further specialization of Fig. 4.3.
The claims are the following: communication is an activity that involves two or more subjects that take turns to transform previous Representations (utterances) into new ones. This is
the meaning of the “loop” in the figure. This collaborative change of Representations is a social and publicly accessible activity.
However, the Representations are part of a sign-system we only partly control and understand; it has its own social logic that has developed over the millennia, but it also interacts
with our individual Interpretations, sometimes in unpredictable ways. The Representations do
not travel directly from the speaker's mind to the hearer's mind, but rather perturb the ongoing production of new Interpretations out of the old ones in each interlocutor. Thus, communication consists of two repetitive recursive processes that perturb one another (Andersen 2002):
(1) Speaking: Representation_t → Representation_t+1 / in the context of Interpretation_t
(2) Interpreting: Interpretation_t → Interpretation_t+1 / in the context of Representation_t
This predicts that the meanings we assign to Representations are heavily dependent upon the
interpretative processes we are currently engaged in, so that polysemy, ambiguity and misunderstandings are the rule rather than the exception. This also makes meta-communication
mandatory, since this is the only way we can repair faulty communication. And language in
fact allows for meta-signs to the extent that it has become inconsistent (the liar sentence This
sentence is false), much to the chagrin of logicians. Finally, (1)-(2) predicts the phenomenon
of textual coherence, such as anaphoric expressions, since new utterances are not produced
from scratch but are transformations of previous utterances.
This idea of two interacting but autonomous systems predicts the phenomenon of mutual
articulation: when the syntactic system of Representations is perturbed by Interpretations, as
in (2), it sometimes reacts by exchanging one Representation for another, and in this way induces an articulation of semantic contents. For example, at some point of time, and the age
change of a human causes the language system to replace “child” by “adult” which induces a
meaning boundary in the continuum of age. Similarly, when the semantic system is perturbed
by Representations, as in (1), it sometimes reacts by exchanging Interpretations. Replacing
[k] by [s] in [-at] changes its meaning from a domestic animal to an activity in the past tense.
In this case, the interpretative system imposes a boundary between phonetic stops and sibilants. This process is called the commutation test and is the basis of linguistic analysis.
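The commutation test can be sketched as a toy procedure; the two-entry lexicon and its glosses are illustrative stand-ins for a real phonological analysis:

```java
import java.util.HashMap;
import java.util.Map;

public class CommutationTest {
    // The commutation test: two expression units are distinctive
    // if exchanging one for the other changes the content.
    static boolean distinctive(Map<String, String> lexicon, String e1, String e2) {
        return !lexicon.get(e1).equals(lexicon.get(e2));
    }

    public static void main(String[] args) {
        // A toy pairing of expressions and contents.
        Map<String, String> lexicon = new HashMap<>();
        lexicon.put("kat", "domestic animal");
        lexicon.put("sat", "past tense of sit");

        // Exchange [k] for [s] in [-at]: the content changes, so the
        // stop/sibilant distinction carries meaning.
        System.out.println(distinctive(lexicon, "k" + "at", "s" + "at")); // prints true
    }
}
```

If the two expressions mapped to the same content, the exchange would be a free variation rather than a distinctive opposition.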
All forms of semiosis follow the perpetual pattern in Fig. 4.7 that is generated by (1) and
(2). At a given point in time, Subject 1 and Subject 2 have produced Int(erpretation)1, Int4,
and Repr(esentation)1. Then Subject 1 changes Repr1 into Repr2 influenced by Int1 and also
changes his Int1 into Int2 influenced by what he has himself just said. Subject 2 changes his
Int4 into Int5 influenced by Repr2, and changes Repr2 into Repr3, influenced by his current
Interpretation, Int5, and so on. This is a general pattern meant to cover monologues,
as well as dialogues, verbal exchanges as well as codewriting.
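The turn-taking pattern can be sketched as a toy simulation, with strings standing in for Representations and Interpretations; the concatenation-based "transformations" are purely illustrative:

```java
// A minimal simulation of the two perturbing processes (1) and (2):
// speaking transforms the shared Representation in the context of the
// speaker's current Interpretation, and interpreting transforms each
// interlocutor's Interpretation in the context of the new Representation.
public class Semiosis {
    // Speaking (1): Repr_t -> Repr_t+1 / Int_t.
    static String speak(String repr, String interpretation) {
        return repr + " + uttered-under(" + interpretation + ")";
    }

    // Interpreting (2): Int_t -> Int_t+1 / Repr_t.
    static String interpret(String interpretation, String repr) {
        return interpretation + " + revised-by(" + repr + ")";
    }

    public static void main(String[] args) {
        String repr = "Repr1";
        String int1 = "Int1", int4 = "Int4";

        // Subject 1 takes a turn: Repr1 -> Repr2, Int1 -> Int2.
        repr = speak(repr, int1);
        int1 = interpret(int1, repr);

        // Subject 2 takes a turn: Int4 -> Int5, Repr2 -> Repr3.
        int4 = interpret(int4, repr);
        repr = speak(repr, int4);

        System.out.println(repr);
    }
}
```

Note that neither process reads the other's state directly; each only uses the other's latest output as context, which is the sense in which the two systems merely perturb one another.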
[Figure: Subject 1's Interpretations (Int1, Int2, Int3), the shared Representations (Repr1, Repr2, Repr3), and Subject 2's Interpretations (Int4, Int5, Int6), connected turn by turn.]
Fig. 4.7. Basic form of semiosis
Aesthetic activities are activities that focus on the exploration of such “alien” systems, and
they often use the commutation test systematically: what are the meaning effects if I exchange a word or a colour for another? In addition, aesthetic activities invent new distinctions
some of which later become codified.
Finally, let us address requirements 5, 6 and 7 which all concern the relation between
semiosis and mechanical processes. I have already argued that in the case of language, the
distribution of agency is not a clear-cut issue: we can exploit the opportunities language gives
us, but we cannot change everything, if we still want to be understood. Talking is a joint venture between the speakers (Subjects) and their language (Mediator). This again means that we
have to attribute some degree of autonomy of behaviour to the Mediator of Fig. 4.6. Some of
this autonomy has only an evolutionary historical explanation, but other parts have been
added consciously by means of (meta-)communication: we need to be able to talk about language. In particular, some of the rules we use for interpreting Representations can themselves be turned into Representations by means of the standard rule, Representation_t → Representation_t+1 / Interpretation_t. Our Interpretation of the previous Representation normally
motivates the Representation in the next turn, and if the Interpretation has the nature of a rule,
then the Representation will represent a rule: we can partially describe the rules we follow in
our interpretative process. The last step is the invention of machines that can process Representations – as appears from the preceding sections, there is a consensus that computers can
do at least this. This is the explanation offered by the model of how causal Interpretations
come about: they are Representations of interpretative rules we ourselves followed previously. Thus, a formal semantics is the rules we employ ourselves to read a piece of program,
and the compiler is a machine-executable Representation of this. Thus, we claim that the
Interpretations of both instrumental and semiotic activities can be turned into Representations of a
kind that can be executed by a machine. This parallel between these two kinds of activities is
predicted by the proposed model.
[Figure: Subjects: consumer, supermarket, dairy; Mediator/Representation: carton, barcode, inventory control, EDI messages; Interpretation: delivery of milk; Object: milk.]
Fig. 4.8. Use of computer systems.
The model specialized to capture use of computers is shown in Fig. 4.8. It is similar to Fig.
4.6, except that the Mediator/Representation role is occupied by computer systems. Fig. 4.8
extends the notion of co-construction of Representations to the use of computer systems. In
Fig. 4.8, the consumer, the supermarket, and the dairy are Subjects that together co-construct the data processes involving milk carton, barcode, inventory control system, and EDI system. The Mediator of these Representations is computer systems that can transform Representations into other Representations; in addition, they act as sensors via the barcode reader, and as actuators via the production and transportation machinery of the dairy, so that the Representations and the Object are mutually dependent: lack of milk influences the system, which the next day influences the supermarket’s stock of milk.
Finally, the Interpretation is the human surveillance of the whole process. Under normal
operation, this Interpretation is absent, but if the system detects abnormal conditions – e.g. an unusual quantity of milk for the season – it alerts human operators who must assess the situation.
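The milk example can be sketched as a toy system; the class, the replenishment threshold, the order quantity, and the seasonal bound below are invented for illustration, not taken from an actual inventory system:

```java
import java.util.ArrayList;
import java.util.List;

// The system transforms Representations (scans into stock counts into
// replenishment orders) and hands abnormal conditions over to human
// Interpretation via an alert.
public class MilkInventory {
    private int stock;
    private final int expectedMax;            // seasonal expectation
    final List<String> ediOrders = new ArrayList<>();
    final List<String> alerts = new ArrayList<>();

    MilkInventory(int stock, int expectedMax) {
        this.stock = stock;
        this.expectedMax = expectedMax;
    }

    // Sensor: the barcode reader turns a physical sale into data.
    void scanSale() {
        stock--;
        if (stock < 10) {
            // Actuator: an EDI message sets the dairy's machinery in motion.
            ediOrders.add("ORDER 100 cartons");
            stock += 100;
        }
        if (stock > expectedMax) {
            // Abnormal condition: alert the human operators.
            alerts.add("Unusual quantity of milk for the season: " + stock);
        }
    }

    public static void main(String[] args) {
        MilkInventory m = new MilkInventory(10, 105);
        m.scanSale();
        System.out.println(m.ediOrders);
        System.out.println(m.alerts);
    }
}
```

Under normal operation the loop runs without any human Interpretation; only the out-of-range state produces a sign addressed to a human interpreter.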
To summarize: the combined models in Figs. 4.6 and 4.8 claim that any activity, instrumental or semiotic, involves four roles: one or more Subjects, Mediators/Representations,
Interpretations, and Objects, where the Mediator often represents the Object under a certain
Interpretation. The Representation and the Interpretation take part in recursive processes,
where Representations are transformed into new Representations, and Interpretations into
new Interpretations. One process uses the other one as context (as parameter), and in this way
they indirectly influence one another. The Mediator/Representation is not a passive carrier of
meaning, but can have varying degrees of autonomy. Tools have little autonomy, automatic
machinery has much. If the Mediator – Object interaction dominates, the activity is mainly
instrumental, whereas it is mainly semiotic if the Mediator – Interpretation interaction dominates. The Mediator can influence the Object and conversely. In the former case, the Mediator is an actuator (instrumental activity) or a performative (semiotic activity); in the latter
case, the Mediator is a sensor or an assertion. Interpretations give rise to new Mediators; in
the instrumental case, these Mediators are machinery, whereas they can be computer systems
in the semiotic activity. In both cases, the new Mediator may have a certain autonomy, and
can be said to embody or crystallize physical or semiotic operations.
5
References
Andersen, P. B. (1990/1997). A theory of computer semiotics. Semiotic approaches to construction and
assessment of computer systems. Cambridge University Press: Cambridge.
Andersen, P. B. (1992). Computer semiotics. Scandinavian Journal of Information systems. Vol. 4:
1992, 3-30.
Andersen, P. B. (2002). Dynamic semiotics. Semiotica 139 –1/4 (2002), 161-210.
Andersen, P. Bøgh (2003a). Saying and Doing at Sea. ALOIS 2003, Action in Language, Organisations and Information Systems. Linköping University, 12-13 March.
Andersen, P. Bøgh (2003b). Anticipated Activities in Process Control, Literary Fiction, and Business
Processes.
Proc. of the 6th Int. Workshop on Organizational Semiotics. Dept. of Computer Science, University
of Reading: Reading: 1 -28.
Andersen, P. Bøgh, P. Hasle & P. Aa. Brandt (1997). Machine semiosis. In Posner, Roland; Robering,
Klaus; Sebeok, Thomas A. (eds.), Semiotics: a Handbook about the Sign-Theoretic Foundations of
Nature and Culture (Vol. 1). Walter de Gruyter, Berlin. pp. 548-570.
Bødker, S. & P. Bøgh Andersen (draft). Complex mediation.
Fillmore, Ch. J. (1968). The case for case. In: E. Bach & R.T. Harms (eds.), Universals in Linguistic
Theory. 1-90. London, New York, Sydney, Toronto: Holt, Rinehart and Winston.
Fillmore, Ch. J. (1977). The case for case reopened. In P. Cole and G. M. Sadock (eds.): Syntax and
Semantics: 8. Grammatical Relations. 59-81. New York: Academic Press.
Fiske, J. (1990). Introduction to communication studies. London: Routledge.
Gamma, E., E. Helm, R. Johnson & J. Vlissides (1995). Design Patterns. Addison-Wesley, Boston.
Grand, M. (1998). Patterns in Java, vol 1. Wiley, New York.
Gorn, S. (1968). The identification of the computer and information sciences: their fundamental semiotic concepts and relationships. Foundations of Language 4: 339-372.
Holmqvist, B. & P. Bøgh Andersen (1991). Language, perspective, and design. In Design at Work,
eds. J. Greenbaum and M. Kyng, 91-121. Hillsdale: Earlbaum.
Jorna, R.J., Van Heusden, B. & Posner, R. (eds.) (1993). Signs, Search and Communication; Semiotic
Aspects of Artificial Intelligence. Berlin: Walter de Gruyter.
Jurafsky, D. & J. H. Martin (2000). Speech and Language Processing. New Jersey: Prentice-Hall.
Liu, K. (2000) Semiotics in Information Systems Engineering. Cambridge: Cambridge University
Press.
Madsen, K. H. (1996). Object-oriented Programming and Semiotics. In: B. Holmqvist, P. B. Andersen, H. Klein, R. Posner (eds.) Signs of Work. Gruyter: Berlin: 107-.
Nadin, M. (1988). Interface design: A semiotic paradigm. Semiotica 69: 269-302.
Nadin, M. (1995). Negotiating the World of Make-Believe: The Aesthetic Compass. Real Time Imaging 1, London: Academic Press, pp. 173-190.
Nake, F. (1994a) (ed). Zeichen und Gebrauchswert. Beiträge zur Maschinisierung von Kopfarbeit.
Bericht 6/94. Universität Bremen, Fachbericht Mathematik und Informatik.
Nake, F. (1994b). Human-computer interaction: signs and signals interfacing. Languages of Design 2 (1994): 193-205.
Nake, F. (1994c). Elementares geometrisches Konstruieren am Computer. In: Nake 1994a.
Noble, J. & R. Biddle (2001). Patterns as Signs. Computer Science, Victoria University of Wellington, New Zealand, CS-TR-01-16.
Noble, J., R. Biddle & E. Tempero (2001). Metaphor and Metonymy in Object-Oriented Design Patterns. Computer Science, Victoria University of Wellington, New Zealand, CS-TR-01-7.
Piotrowski, D. (1993) Structuralism, computation and cognition. The contribution of glossematics. In:
Andersen, Holmqvist & Jensen (eds.), The Computer as Medium. Cambridge University Press,
1993. 68 - 91.
Souza, C. Sieckenius De (1993). The semiotic engineering of user interface languages. Int. J. Man-Machine Studies 39, pp. 753-773.
Souza, C. Sieckenius De (1999). Semiotic engineering principles for evaluating end-user programming environments. In C.J.P. de Lucena ed., Monografias em Ciência da Computação. PUC-Rio
Inf MCC10/99. Computer Science Department, PUC-Rio, Brazil.
Stamper, R. (1992). Signs, organizations, norms and information systems. Proc. Third Australian Conference on Information Systems. 21-55. Wollongong, Australia.
Stamper, R. (1996). Signs, Information, Norms and Systems. In: Holmqvist, Andersen, Klein &
Posner (eds.) Signs of Work. Gruyter: Berlin, 349-399.
Van Valin, R. D. & R. J. LaPolla (1997). Syntax, Ch. 3. Cambridge: Cambridge University Press.
Wells, G. (2002). The Role of Dialogue in Activity Theory. Mind, Culture and Activity 9(1), 43-66.
Zemanek, H. (1966). Semiotics and programming languages. CACM 9/3, 139-143.