On formal analysis of emergent properties

Jacek Malec∗
Department of Computer and Information Science
Linköping University
S–581 83 Linköping, Sweden
[email protected]
Abstract
Brooks postulates in his papers [6, 7] the possibility that intelligence
can emerge out of a set of simple, loosely coupled behaviours. This possibility is conjectured on the basis of experiments with a set of robots
equipped with subsumption-based control systems. In this paper we
briefly present The Behavior Language (BL) used for programming subsumption systems and we show that any BL program is equivalent to a
statechart. Then we analyze a simple emergent behaviour and argue that
it actually is obtained through appropriate tuning of delays defined in the
controlling BL program. The conclusion is that emergent properties arise
(if at all) due to the complex dynamics of interactions among the simple
behaviours and that this emergence is to a large extent accidental.
1 Introduction
The problem of designing an autonomous system capable of acting in the real
world has been the subject of much attention in recent years. The research has
focused on designing systems with the following attributes:
• reactivity, in order to allow the system to cope with unpredictable changes
in the dynamic environment while pursuing its mission;
• robustness, meaning the ability to function in a variety of situations, including failure of some of its subsystems;
• selectivity of attention, in order to effectively use existing resources of the
system, such as computing power, or sensory equipment;
• ability to pursue goals defined either by the designer, or by the system
itself.
A lot of research on this topic is currently being done within the “behavior-oriented” paradigm. This paradigm is the result of work initiated and pursued at
the MIT AI Lab by Brooks and his co-workers [1, 3, 5, 6, 8, 15], and augmented
by research conducted in other places [2, 16, 13]. The main idea consists of
∗ This research has been supported by the Center for Industrial Information Technology
(CENIIT).
building a controller for an autonomous (possibly intelligent) system out of a
set of primitive reactive behaviors that couple sensory inputs of the system to its
actuating outputs. Ultimately a system built this way should reveal the ability
to cope effectively with dynamic and unpredictable changes in the real world
while performing its task. Rodney Brooks postulates in some of his papers (e.g.,
[6, 7]) the possibility that even intelligence can emerge out of a (possibly large)
set of simple, loosely coupled behaviours. This possibility is conjectured on
the basis of experiments with a set of robots equipped with subsumption-based
control systems. The results obtained so far, both at MIT and at other places,
prove that this approach is worth further investigation but, at the same time,
lacks careful examination both at the methodology level and at the control (or
system) engineering side.
The major open problems of the behavior-based approach, according to
Brooks [6], are the following:
1. “Understanding the dynamics of how an individual behavior couples with
the environment via the robot’s sensors and actuators.”
2. “Understanding how many behaviors can be integrated into a single robot.”
3. “Understanding how multiple robots can interact as they go about their
business.”
These questions have been already addressed by the research done within the
competing paradigm, which we might call the symbolic-AI-based approach to
the problem. The existing answers are, however, formulated with a totally
different framework in mind, with the notable exception of Rosenschein and
Kaelbling’s research. In the behavior-based approach, behaviors are the basic
building blocks, and the problem of constructing a complex autonomous system
amounts to the synthesis problem. In the symbolic-AI-based approach, a final
result (i.e., a complex behavior as a whole) is the starting point, and the design
of a system can be seen as the analysis problem. Of course, both approaches
use different tools, and, to some extent, even different languages. This paper
can be seen as a preliminary step towards analysis of the first problem stated
above.
In the paper we briefly present the Behavior Language (BL) used for programming subsumption systems and we show that any BL program is equivalent
to a statechart, i.e., to a finite-state machine with timing affected by so-called
timers (explicit delay elements). So the conclusion of this part of the paper is
that BL does not offer any theoretical or conceptual advantages over existing,
established formalisms (except, maybe, the conciseness of representation). Then
we analyze a BL implementation of an emergent behaviour (for simplicity we
have chosen wall-following) and argue that it can be (and actually is) obtained
through appropriate tuning of the delays introduced by the timers defined in
the controlling BL program.
The conclusion of the paper is that emergent properties can arise (if at all)
due to the complex dynamics of interactions among the behaviours and that
this emergence is to a large extent accidental when using BL, as opposed to a
systematic design of a control system where the intended behaviour of the robot
is used as a guideline from the very beginning and, moreover, can be inferred
from the control program.
2 The languages

2.1 The Behavior Language
The Behavior Language (BL) is a programming language with Lisp-like syntax
(and Lisp-based compiler) for specifying behavior-based controllers [4]. Using
the constructs of the language one can define a set of behaviors comprising the
control system of a robot. Such a set of behaviors is usually intended (by the
designer) to make the robot achieve some predefined set of goals. However,
those goals are only implicit in the specification (thus the name “emergent
functionality”). We will not present the full syntax of BL here, rather we will
limit ourselves to some crucial elements of the language.
The behavior language is based on the assumption that every behavior is expressible as a set of so-called “real-time rules”. A real-time rule is an expression
of the form:
(whenever condition &rest body-forms)
or
(exclusive &rest whenever-forms)
and is executed independently of other rules (at least in the ideal case), in real time, possibly by an Augmented Finite State Machine (AFSM). An AFSM is a
finite state machine equipped with registers and timing elements.
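As a toy illustration (a sketch in Python, not the actual BL runtime; all names below are invented), an AFSM can be modelled as a finite-state machine carrying named message registers and a monostable timing element in discrete time:

```python
class AFSM:
    """A minimal sketch of an Augmented Finite State Machine:
    a finite-state machine extended with message registers and
    a monostable timing element (modelled with integer ticks)."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions   # (state, event) -> next state
        self.registers = {}              # named message slots
        self.monostable_until = -1       # tick until which the monostable is on

    def deposit(self, register, message):
        """A wire deposits a message into a named register."""
        self.registers[register] = message

    def received(self, register):
        """Analogue of BL's (received? register) condition."""
        return register in self.registers

    def trigger(self, duration, now):
        """Trigger the monostable for `duration` ticks starting at `now`."""
        self.monostable_until = now + duration

    def monostable(self, now):
        """The monostable condition is true while the trigger lasts."""
        return now < self.monostable_until

    def step(self, event):
        """Take one transition; unknown events leave the state unchanged."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state
```

For instance, depositing a message makes a (received? ...)-style condition true, and triggering the monostable keeps its condition true for the declared duration.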
The condition field in a real-time rule can be occupied by one of the following:
t stating that the body of the rule will be unconditionally evaluated every
certain amount of time (called the characteristic time of the system);
monostable the condition is true for the duration of the triggering of the
named monostable;
(delay τ ) like condition t above, but overriding the characteristic time by τ ;
(received? register ) the condition is true if a message has been deposited
in the named register since the start of waiting in this whenever clause;
(not|and|or &rest forms ) a limited set of logical combinations of the conditions described above is also possible.
Observe that all those conditions refer either explicitly or implicitly to some
timing element. On the other hand, there is an explicit assumption that no
timer is synchronized with any other, i.e., they are all totally independent.
Behaviors are defined either as stand-alone real-time rules or as groups of
such rules. A stand-alone rule is named by:
(defmachine name declarations rule)
and complex behaviors are constructed using the defbehavior facility:
(defbehavior name inputs outputs declarations rules).
Then input and output registers of behaviors can be connected by “wires” according to the designer’s intentions:
(connect source dest1 &rest more-dests)
The connect statement is also used to implement the suppression and inhibition mechanisms.
The core idea of the behavior-based approach is an asynchronous, independent activation of behaviors. The following mechanisms are used for activating
behaviors [4]:
• direct activation by some condition,
• thresholding,
• hormone system (conditions, releasers),
• spreading of activation.
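Two of these mechanisms, thresholding and spreading of activation, can be given a toy reading (a sketch; the update rule, weights, and behavior names below are invented for illustration and are not taken from BL):

```python
def activation_step(levels, links, inputs, threshold=1.0, decay=0.9):
    """One update of a toy activation network: each behavior's level
    decays, receives direct input, and gathers activation spread along
    weighted links; behaviors at or above `threshold` become active."""
    spread = {b: 0.0 for b in levels}
    for (src, dst), weight in links.items():
        spread[dst] += weight * levels[src]     # spreading of activation
    new = {b: decay * levels[b] + inputs.get(b, 0.0) + spread[b]
           for b in levels}
    active = {b for b, v in new.items() if v >= threshold}  # thresholding
    return new, active

# Hypothetical two-behavior network: "wander" feeds some of its
# activation into "avoid", which also receives direct sensory input.
levels = {"avoid": 0.0, "wander": 0.5}
links = {("wander", "avoid"): 0.4}
new, active = activation_step(levels, links, {"avoid": 0.9})
```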
We end this short presentation at this point, asking the reader interested in
details to refer to the original document [4].
2.2 Statecharts
An interesting language for specifying behaviors of reactive systems is the statecharts formalism [10] or its more generic incarnation, higraphs [11]. Statecharts
are used to define complex, hierarchical finite state automata. Therefore, they
seem to be a very plausible candidate for specifying an autonomous agent’s behavior. Their advantages are the following:
• well-defined (in the formal sense) semantics;
• graphical representation (which means user-friendliness);
• existence of implemented support tools (e.g., STATEMATE [12]).
Due to their formal grounding in automata theory, statecharts have strong
connections to the situated-automata approach of Kaelbling and Rosenschein;
however, no tools are provided in the formalism for expressing goals of a system
acting in some environment. On the other hand, statecharts are very intuitive
as a specification tool: one can easily express interdependencies between various
elements of a system, so the conceptualization process is supported much more
strongly than in other approaches.
We begin the presentation of statecharts by quoting the formal definition of
the more general construct of higraph [11].
Definition. A higraph is a quadruple H = (B, σ, π, E) where B is a finite set of
blobs and E is a set of edges (a binary relation on blobs). The subblob function
σ : B → 2^B assigns to each blob x ∈ B the set σ(x) of its subblobs; it is assumed
that this function is cycle free. The function π : B → 2^(B×B) introduces an equivalence relation on subblobs; the equivalence classes on σ(x) for a given blob
x will be denoted as π(x). The atomic blobs are those that do not have any
subblobs (see Figure 1, where B = {HERBERT, ACTION, PERCEPTION,
ARM, BASE}, σ(HERBERT) = {ACTION,PERCEPTION}, π(ACTION) =
{<ARM,BASE>}).
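The quadruple can be encoded directly; the following sketch reproduces the HERBERT example in plain Python (σ as a dictionary of subblob sets, π as the orthogonal partition of ACTION):

```python
# A hypothetical plain-Python encoding of the higraph H = (B, sigma, pi, E)
# for the HERBERT example of Figure 1.
B = {"HERBERT", "ACTION", "PERCEPTION", "ARM", "BASE"}

sigma = {  # subblob function sigma : B -> 2^B (required to be cycle free)
    "HERBERT": {"ACTION", "PERCEPTION"},
    "ACTION": {"ARM", "BASE"},
    "PERCEPTION": set(),
    "ARM": set(),
    "BASE": set(),
}

pi = {  # orthogonal partition: ACTION is the product of ARM and BASE
    "ACTION": [("ARM", "BASE")],
}

E = set()  # no transitions in this purely structural example


def atomic_blobs(sub):
    """Atomic blobs are those without subblobs."""
    return {b for b, subs in sub.items() if not subs}
```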
Statecharts are a particular subset of higraphs, namely the ones in which
the E relation corresponds to the transition function of an automaton. Some
of the more advanced constructs of statecharts, such as e.g. history-dependent
transitions, require an additional relation on blobs to be defined; however, we
will not concentrate here on the formal aspects of the language.
Figure 1: A higraph representing a hypothetical control system for a robot called
HERBERT. The ACTION blob is a Cartesian product of ARM and BASE.
The basic property distinguishing statecharts from other tools for specifying
automata is their ability to easily capture hierarchies and abstractions. These
notions are illustrated in Figure 2. Figure 2 (a) presents a simple three-state
automaton (with incompletely specified transitions). Subfigure (b) presents a
possible hierarchical view of this automaton, where states A and C are treated
as a group of states (distinguished by the common transition β to the state
B) named D. Note that an arrow from/to a blob corresponds to a set
of arrows from/to all its subblobs. Subfigure (c) illustrates the possibility of abstraction, where state D is drawn without investigating its internal structure.
Subfigure (d) presents another concept introduced in statecharts, namely the
default arrows. A default arrow describes the state in which the system will
find itself when a transition from the outside happens and more detailed information about the initial state is missing. So the outer default arrow
pointing to the blob D (in subfigure (d)) says that in case of ambiguity between
B and D the next state will always be D. The inner default arrow refers only
to the subblobs of D, so in case of a transition pointing to D (as e.g., α in our
example) the transition will occur to state A.
The next notion introduced in statecharts for simplifying specification of automata is a history-dependent default entry, illustrated in Figure 3. Its meaning
agrees with our intuition: the history entry causes a transition pointing to a
blob to enter the most recently visited state within the blob. Subfigures (a) and
(b) present two ways of using the history circle. In the (a) case it applies only
to the incoming α transition, in the (b) case any entry using default will choose
the most recent state. Obviously, when a blob is visited for the first time, the
default state is chosen. Note that a history entry applies only to the uppermost
level of a blob. So in subfigure (c), within G and F the usual default entry
rules will be applied. On the other hand, using H* entry instead of H, one can
Figure 2: Abstraction and hierarchization in statecharts.
force application of the history operator down to the atomic level. Subfigure
(d) illustrates usage of H*. Finally, subfigure (e) illustrates intermediate usage
of history entries, where entry to K and F is history-dependent, but within G
only the standard default is applied.
The next important notion we would like to present here is orthogonality.
It has already appeared in the formal definition of higraph in the form of the
function π. Its meaning is the following: in the set of the states of a system,
we can distinguish several subsets describing subsystems working in parallel. In
terms of the state space this means that each global state is a tuple of local
states of all subsystems, as illustrated in Figure 4. Subfigure (a) presents a
blob (Y) divided into two orthogonal subblobs (A and D). The corresponding
“flat” version of the state space is presented in subfigure (b). Note that the
number of states and transitions in a non-orthogonal description is exponential
with respect to the number of orthogonal components of the system.
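The blowup can be checked directly: flattening orthogonal components into a single automaton yields one global state per tuple of local states, so k components of n states each flatten into n^k states. A sketch (the local state names are invented for illustration):

```python
from itertools import product


def flat_states(components):
    """Given orthogonal components (each a list of local states),
    the flattened automaton has one global state per tuple of
    local states, i.e. the Cartesian product of the components."""
    return list(product(*components))


# Hypothetical local state sets for two orthogonal subblobs A and D
# in the spirit of Figure 4 (a).
A = ["B", "C"]
D = ["E", "F", "G"]
flat = flat_states([A, D])   # 2 * 3 = 6 global states
```

With k components of n states each, `flat_states` returns n^k tuples, which is the exponential gap between the orthogonal and the flat descriptions.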
Figure 4 (a) illustrates one more important feature introduced in statecharts:
a conditional transition. The label β(in G) means that the transition (in this
case from C to B) occurs only if the input letter β appears in the input while
the subsystem D is in state G.

Figure 3: History-dependent entries in statecharts.
The last concept which needs to be introduced here is the delay element
depicted in Figure 5. It is the only part of the statechart formalism actually
extending it beyond the limitations imposed by classical finite-state automata.
On the other hand, its power is exemplified by the fact that it is extensively
used in real-time-related applications, e.g., VLSI specification [9].
A timer with a label s < t denotes a state in which the automaton can stay
for at least time s (which corresponds to inhibiting the input event until the delay s
has passed) and at most time t (at which point the timeout transition is taken, provided
no input event occurs before that time).
Due to the introduction of timers, the semantics of the full statechart formalism has to be defined using temporal constructs (such as trace semantics
or timed automata).
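A discrete-time reading of the timer label s < t can be sketched as follows (an illustrative simulation assuming events and timeouts are checked once per tick; not part of the statechart formalism itself):

```python
def timer_state(s, t, event_times, horizon):
    """Simulate a single timer state entered at tick 0 with label s < t.
    Input events arriving before s are inhibited; an event in [s, t)
    exits via the event transition; at tick t the timeout transition
    fires.  Returns (exit_tick, reason)."""
    for now in range(horizon):
        if now >= t:
            return now, "timeout"            # upper bound reached
        if now >= s and now in event_times:
            return now, "event"              # event accepted after delay s
    return horizon, "running"
```

An event arriving before the delay s is simply ignored, so the state is still exited by the timeout; an event inside the window [s, t) wins over the timeout.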
2.3 BL programs as statecharts
The correspondence between BL programs and statecharts will be established
by presenting how the language constructs of BL can be expressed using the
statechart formalism. We will focus here on the most important fragments of
the translation procedure.

Figure 4: Orthogonality in statecharts.

Figure 5: A statechart timer.
Let us begin with the basic element of BL, i.e., the real-time rule (rt-rule):
(whenever condition &rest body-forms)
It corresponds to a blob entered on condition. Depending on the kind of condition we can have one of the situations depicted in Figure 6:
t (subfigure a) the (sub)automaton body1 is entered every characteristic time;
monostable (subfigure b) the (sub)automaton body3 is entered when the monostable A is triggered and then is re-entered every characteristic time provided A is still triggered; (A similar construction can be done for the (not
monostable ) condition.)
(delay τ ) (subfigure c) the (sub)automaton body2 is entered every delay time;
(received? register ) the corresponding body blob is entered when the
named register receives a message;
not|and|or appropriate combination of the above.
The body of an rt-rule can contain a number of forms. Those forms have
the standard programming language flavor and meaning (operations on registers, iteration, if-then-else, output, triggering monostables) and do not pose
any particular problems with the translation (although the resulting statechart
may become quite large).
The next construct:
Figure 6: Translation of conditions of real-time rules.
(exclusive &rest whenever-forms)
corresponds to a state with separate substates for each rt-rule in the body
of exclusive. Because exclusive imposes a restriction on the evaluation
of its rt-rules (namely, the first rule with a satisfied condition has its
body evaluated), the blobs corresponding to the rt-rules have to be modified in
the following way. The timers are replaced by instantaneous timeouts, but
the output transition goes out of the body blob rather than from the timer.
In that way if the condition is not satisfied, the state is left immediately. The
states are sequenced according to their order of appearance in the exclusive
form.
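The evaluation discipline of exclusive (the first rule whose condition holds, in textual order, has its body evaluated; the rest are skipped for that cycle) can be sketched as:

```python
def run_exclusive(rules, env):
    """rules: list of (condition, body) callables in textual order.
    The first rule whose condition holds in `env` has its body
    evaluated (BL's exclusive discipline); later rules are skipped.
    Returns None when no condition is satisfied, i.e. the state
    is left immediately."""
    for cond, body in rules:
        if cond(env):
            return body(env)
    return None


# Hypothetical two-rule exclusive: a bump reaction shadows cruising.
rules = [(lambda e: e["bump"], lambda e: "stop"),
         (lambda e: True,      lambda e: "cruise")]
```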
Complex behaviors are constructed using the defbehavior facility:
(defbehavior name inputs outputs declarations rules).
A behavior corresponds to a state with its rt-rules as parallel sub-states (see
Figure 7). The input and output registers of behaviors can be connected by
wires. A connection can be represented in the statechart formalism by an internal event generated by the state change of the output register and affecting
the state change of the input register. To include the suppression and inhibition
mechanisms, this construct should be modified accordingly.
Figure 7: A behavior.
All the control regimes of the language are expressed (and implemented)
using numerical values contained in some special-purpose registers. Therefore
this part does not introduce any additional complexity (except the necessity to
define statecharts for those registers).
We hope that the presentation above gives a sufficiently clear picture of the
translation procedure. Assuming this, we can conclude that any BL
program can be transformed into a statechart with timer elements capturing all
the temporal dependencies expressed in the BL program.
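One fragment of this translation can be made concrete. The sketch below uses a hypothetical dictionary representation of a statechart fragment and covers only the t and (delay τ) conditions; it maps a real-time rule to a body blob re-entered on a timed self-loop:

```python
def translate_whenever(condition, body, characteristic_time=0.1):
    """Map a BL real-time rule to a statechart fragment: a body blob
    (re-)entered on a timed self-loop.  Only the t and (delay tau)
    conditions are handled in this sketch; the fragment encoding is
    invented for illustration."""
    if condition == "t":
        period = characteristic_time     # unconditional, characteristic time
    elif isinstance(condition, tuple) and condition[0] == "delay":
        period = condition[1]            # (delay tau) overrides the period
    else:
        raise NotImplementedError(condition)
    return {
        "blob": body,
        "timer": {"s": period, "t": period},  # exact-period re-entry
        "self_loop": True,
    }
```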
3 Some observations on emergent properties
The following piece of BL code is a naive (the author’s) implementation of a
(right)1 wall-following behaviour. The simple dependencies among the behaviours are
presented in Figure 8.
1 The robot used in the experiments, an R2E manufactured by IS Robotics, was heading
backwards. Therefore “left” in the program should be read as “right”, negative velocity is
really positive, etc. One of the names of the functions (turntosthgright) in the program has
also been changed to reflect the real behaviour of the robot.

(defbehavior motor-control
  :inputs (left-vel right-vel)
  :processes ((whenever (received? left-vel)
               (whenever (received? right-vel)
                 (cond ((and (= left-vel 0)
                             (= right-vel 0))
                        (set-state :right-brake :on)
                        (set-state :left-brake :on))
                       (t (set-state :right-brake :off)
                          (set-state :left-brake :off)))
                 (done-whenever))
               (set-motor :left-velocity left-vel)
               (set-motor :right-velocity right-vel))))
(defbehavior move-forward
:outputs (forward-velocity)
:processes ((whenever t
(output forward-velocity -20)
(delay 0.5))))
(defbehavior stop-bumper
:outputs (velocity)
:decls ((sendit :monostable 0.5))
:processes ((whenever (< (get-bump-reading :back) 15)
(trigger sendit))
(whenever sendit
(output velocity 20))))
(defbehavior turn-away
:outputs (lv rv)
:processes ((whenever (or (< (get-ir-reading :right-back) 5)
(< (get-ir-reading :left-back) 5)
(< (get-bump-reading :left-angle-back) 15)
(< (get-bump-reading :left-side) 15))
(output lv 0)
(output rv 20))))
(defbehavior turntosthgright
:outputs (lv rv)
:decls ((closetosthg :init 0))
:processes ((whenever (and (> closetosthg 0)
(= (get-ir-reading :left-angle-back) 5))
(output lv 5)
(output rv -5)
(delay 2))
(whenever (/= (get-ir-reading :left-angle-back) 5)
(delay 30)
(setf closetosthg 1))))
(connect (move-forward forward-velocity) (motor-control left-vel))
(connect (move-forward forward-velocity) (motor-control right-vel))
(connect (stop-bumper velocity) (motor-control left-vel))
Figure 8: The wall-following behaviour.
(connect (stop-bumper velocity) (motor-control right-vel))
(connect (stop-bumper velocity) ((inhibit (move-forward forward-velocity))))
(connect (turn-away lv) (motor-control left-vel))
(connect (turn-away rv) (motor-control right-vel))
(connect (turn-away lv) ((inhibit (move-forward forward-velocity))))
(connect (turntosthgright lv) (motor-control left-vel))
(connect (turntosthgright rv) (motor-control right-vel))
(connect (turntosthgright lv) ((inhibit (move-forward forward-velocity))))
This particular BL program, as well as many others [14], relies on appropriately chosen delays. The values of the delays have been determined experimentally. After several compile-run-adjust cycles the apparent over-control of
the motors (the robot was shaking and jumping when it detected an obstacle)
was removed. However, the control of the motors was probably still far
from optimal (it was determined solely on the basis of observation). Unfortunately, BL provides no support for more subtle (gradual) change of control
values. Moreover, the methodology advocated in [14] for this kind of control
tasks does not encourage the user to create control models in any way; instead,
one is asked to limit oneself to “relatively simple reactions to some sensory
condition”.
Another observation made during this experiment was that, depending on
the values of the delays, the wall-following behaviour could easily become either a
shaking-in-place behaviour (too frequent sampling of sensor values) or a long-straight-leaps behaviour, with almost 100% certainty of losing track of the wall
(too long intervals). Again, this problem had to be resolved experimentally.
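The dependence on the sampling interval can be illustrated with a toy model (a sketch with invented numbers, not the robot's actual dynamics): a sampled proportional controller for the lateral distance-to-wall error is stable only when the interval is small enough relative to the gain, otherwise the error oscillates and grows until the wall is lost.

```python
def follow_wall(delay, steps=300, drift=1.0, gain=1.5, lost=10.0):
    """Toy sampled proportional controller for the lateral
    distance-to-wall error x.  Between samples the robot drifts away
    at rate `drift`; at each sample it applies a correction impulse
    proportional to the error and to the interval.  Returns 'ok' if
    the error stays bounded, 'lost' once it exceeds `lost`."""
    x = 0.0
    for _ in range(steps):
        x += drift * delay       # open-loop drift during the interval
        x -= gain * delay * x    # sampled proportional correction
        if abs(x) > lost:
            return "lost"
    return "ok"
```

The closed-loop step factor is (1 - gain * delay), so the sketch is stable only for 0 < delay < 2/gain; a moderately too-large delay produces a growing oscillation, and a very large one leaps past the wall in a single interval.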
The long-expected “emergence” of the wall-following behaviour occurred
in this example after a number of (more or less random) parameter (delay)
adjustments. Probably one can learn (to some extent) the art of behaviour-based programming, but the author would prefer a situation where correct
design and implementation are a matter of following strict rules guaranteeing
the correctness of the result. The Behavior Language, however, guarantees only that
if a (set of) behavior(s) can be specified using constructs of the language, then
it can be automatically compiled down to the level of AFSMs. No support is
provided for checking whether such a set of behaviors is contradictory or not
(either wrt the sensory input vs. expected motoric output, or wrt the internal
dependencies between interacting behaviors), or whether those behaviors can
actually help achieve the assumed goals, i.e., reveal the intended functionality.
The current methodology is, in the author’s opinion, only applicable to toy
domains (or, rather, toy robots in the real world), where there is plenty of room
for experimentation (i.e., playing with the robot), no time limitations (in the
author’s experiments the debugging time was much more than the magical
90% of the whole software development part of the work) and the possible
cost of failure (damage to the robot) is not too high. However, for
more complex devices, with a larger number of implemented behaviours, more
expensive, and not easily available for experimentation, this methodology is far
from useful. A solution would be to have some development environment
from being useful. A solution would be to have some development environment
for behaviour programs, where the initial ideas could be tested in a simulated
environment, and logical dependencies within a program could be analyzed and,
if necessary, proven to be contradictory or incomplete. Such an environment
could resemble the tool developed for statecharts, namely STATEMATE.
4 Conclusion
In this paper we have briefly presented the Behavior Language (BL) used for programming subsumption systems. We have shown that any BL program is equivalent to a statechart, i.e., to a finite-state machine with explicit timing
introduced by delay elements. So the conclusion of the first part of the paper
is that BL does not offer any theoretical or conceptual advantages over existing, established formalisms (except the conciseness of representation). Then we
have presented a BL implementation of an emergent behaviour (for simplicity we
have chosen wall-following) and have argued that the result is actually obtained
through appropriate tuning of the delays introduced by the timers defined in
the controlling BL program.
Therefore our opinion is that emergent properties can only arise due to
the complex dynamics of interactions among the behaviours (which perfectly
coincides with the statements of Brooks [6, 7]), but also that this emergence is
to a large extent accidental when using BL, as opposed to a systematic design of a
control system where the intended behaviour of the robot is used as a guideline
from the very beginning and, moreover, can be inferred back from the control
program. The established connection between BL and statecharts gives us hope
that tools such as STATEMATE [12] will prove suitable for a more formal
analysis of the behaviour emergence phenomena.
Acknowledgments
The author is grateful to Simin Nadjm-Tehrani, Per Österling, Erik Sandewall
and the anonymous referees for comments that helped to improve the paper.
The author would also like to thank Rodney Brooks and Maja Matarić for
explaining the intricacies of behavior language programming.
References
[1] Colin M. Angle. Genghis, a six legged autonomous walking robot. Master’s
thesis, MIT, 1989.
[2] Ronald C. Arkin. Integrating behavioral, perceptual, and world knowledge
in reactive navigation. Robotics and Autonomous Systems, 6:105–122, 1990.
[3] Rodney A. Brooks. A robust layered control system for a mobile robot.
IEEE Journal on Robotics and Automation, 2:14–23, 1986.
[4] Rodney A. Brooks. The behavior language; user’s guide. Memo 1227, MIT
AILab, April 1990.
[5] Rodney A. Brooks. Elephants don’t play chess. Robotics and Autonomous
Systems, 6:3–15, 1990.
[6] Rodney A. Brooks. Intelligence without reason. In Proceedings of the
Twelfth International Joint Conference on Artificial Intelligence, Sydney.
Morgan Kaufmann, 1991.
[7] Rodney A. Brooks. Intelligence without representation. Artificial Intelligence, 47(1–3):139–159, 1991.
[8] Jonathan H. Connell. A colony architecture for an artificial creature. PhD
thesis, MIT, 1989. AI Lab Tech Report 1151.
[9] Doron Drusinsky and David Harel. Using Statecharts for Hardware Description and Synthesis. IEEE Transactions on Computer-Aided Design,
8(7):798–807, July 1989.
[10] David Harel. Statecharts: A visual formalism for complex systems. Science
of Computer Programming, 8:231–274, 1987.
[11] David Harel. On visual formalisms. Communications of the ACM,
31(5):514–530, 1988.
[12] David Harel et al. STATEMATE: A working environment for the development of complex reactive systems. IEEE Transactions on Software Engineering, 16(4):403–413, April 1990.
[13] Leslie Pack Kaelbling and Stanley J. Rosenschein. Action and planning in
embedded agents. Robotics and Autonomous Systems, 6:35–48, 1990.
[14] Maja J. Mataric. Basic tips for programming in the behavior language.
Unpublished memo.
[15] Maja J. Mataric. A distributed model for mobile robot environment-learning and navigation. Technical Report 1228, MIT AI Lab, May 1990.
[16] David W. Payton. Internalized plans: A representation for action resources.
Robotics and Autonomous Systems, 6:89–103, 1990.