Noname manuscript No.
(will be inserted by the editor)
An agent-oriented approach to change propagation in
software maintenance
Hoa Khanh Dam · Michael Winikoff
Received: date / Accepted: date
Abstract Software maintenance and evolution is a lengthy and expensive phase in
the life cycle of a software system. In this paper we focus on the change propagation
problem: given a primary change that is made in order to meet a new or changed requirement, what additional, secondary, changes are needed? We propose a novel, agent-oriented, approach that works by repairing violations of desired consistency rules in a
design model. Such consistency constraints are specified using the Object Constraint
Language (OCL) and the Unified Modelling Language (UML) metamodel, which form
the key inputs to our change propagation framework. The underlying change propagation mechanism of our framework is based on the well-known Belief-Desire-Intention
(BDI) agent architecture. Our approach represents change options for repairing inconsistencies using event-triggered plans, as is done in BDI agent platforms. This naturally
reflects the cascading nature of change propagation, where each change (primary or
secondary) can require further changes to be made. We also propose a new method
for generating repair plans from OCL consistency constraints. Furthermore, a given
inconsistency will typically have a number of repair plans that could be used to restore
consistency, and we propose a mechanism for semi-automatically selecting between alternative repair plans. This mechanism, which is based on a notion of cost, takes into
account cascades (where fixing the violation of a constraint breaks another constraint),
and synergies between constraints (where fixing the violation of a constraint also fixes
another violated constraint). Finally, we report on an evaluation of the approach, covering effectiveness, efficiency and scalability.
Keywords Agent-oriented software engineering · Software maintenance and
evolution · Change propagation
Hoa Khanh Dam
University of Wollongong, Wollongong, Australia
Tel.: +61 2 4221 4875
Fax: +61 2 4221 4170
E-mail: [email protected]
Michael Winikoff
RMIT University, Melbourne, Australia and University of Otago, Dunedin, New Zealand
Tel.: +64 3 479 5362
Fax: +64 3 479 8362
E-mail: [email protected]
1 Introduction
A large percentage — as much as two-thirds — of the cost of any software can be
attributed to its maintenance: modifications to the software due to a range of causes¹,
after the software has been written [1]. Although a significant amount of work has
been done, software evolution and maintenance is still a challenging problem for both
research and practice [2, 3]. This is due to the lack of theoretical foundations and empirical studies of software evolution and maintenance, as well as the inherent complexity
of software.
Agent systems, like conventional software systems, undergo changes during their
evolution to meet ever-changing user requirements and environment changes. If we
are to be successful in the development of agent-oriented systems that remain useful
after delivery, the research community must provide solutions and insights that will
improve the practice of maintaining and evolving agent systems. A substantial amount
of work in agent-oriented software engineering (AOSE) [4] has provided methodologies
for analysing, designing and implementing software systems in terms of agents [5, 6].
However, there has been very little work on maintenance and evolution of agent-based
systems [7, 8].
The main purpose of this work is to fill that gap. More specifically, our work tackles
the issue of change propagation, which is arguably a central aspect of software maintenance and evolution [3, 9]. A software system consists of various entities (e.g. agents,
plans, events, etc. in an agent-based system) and their dependencies (e.g. an incoming
message depending on plans to handle it). Different software engineering paradigms
or software systems may consist of different entities and dependencies. Suppose that
entity A depends on entity B . Then we say that a dependency is consistent when
what is being provided by B matches what entity A needs. When a software system is
changed as part of software maintenance, these changes affect the dependencies, and
may introduce inconsistencies (more precisely, inconsistent dependencies). Therefore,
once certain, initial, changes have been made (termed the “primary changes”), there
may be a need for further, “secondary”, changes to be made, in order to restore consistency of the dependencies in the system [10]. The change propagation process aims
to ensure consistency by identifying inconsistent dependencies, and making necessary
secondary changes [3]. Note that secondary changes also can affect dependencies, and
therefore may introduce inconsistencies that then need to be resolved through additional secondary changes.
Change propagation is very important in the process of maintaining and evolving
a software system [3, 9]. The software maintainer has to ensure that the change is
correctly propagated, and that the software does not contain any inconsistencies, since
inconsistencies can result in incorrect behaviour. However, change propagation is a
complicated and costly process [3, 11, 12] since there are many types of dependencies
between the entities in a software system. As a result, there is an emerging need to
have supporting tools and techniques which improve both the efficiency and the quality
of the change propagation process.
Much of the work that has been done in change propagation has focussed on the
code level (e.g. [10, 13–15]). The recent emergence of model-driven development (e.g.
1 These are usually classified as being corrective maintenance, fixing bugs; perfective maintenance, adding new functionality; or adaptive maintenance, changing the system so it continues
to work in a changed environment.
MDA [16]) has better recognised the importance of models in the software development
process [17, 18]. Current modelling environments, however, do not adequately address
software evolution problems, which leads to an emerging need to deal with software
changes at the model/design level [19]. However, there has been very limited work to
deal with the change propagation issue at the design level [20]. In addition, design
models provide a more regularized and constrained description of the system which
makes it somewhat easier to deal with change propagation and consistency maintenance at the design level. These are the key motivations that drive the main focus of
our work on dealing with propagating changes through design models. We are however
also considering extending our approach, as part of our future work, to provide effective and automated support for propagating changes to maintain consistency across
requirement, design and implementation artefacts as a system evolves.
Our approach to change propagation is based on the conjecture that given a suitable
set of consistency relationships², change propagation can be done by finding and fixing
inconsistent relationships in a design model, which are caused by primary changes
made to the model. As a result, our work revolves around two main problems: the first
is establishing consistency relationships between design entities and models, and the
second is representing and implementing a mechanism for fixing inconsistencies caused
by changes made to design models.
A key novelty in our approach is the use of a BDI (Belief-Desire-Intention) [21]
representation to model ways of repairing inconsistencies in design models. Repair options are represented as BDI plans and fixing a constraint is considered as a goal/event.
The use of BDI-style, event-triggered plans, matches well with the cascading nature of
change propagation, where a change can cause other changes to be made. Further, there
are usually many ways of fixing a given inconsistency, and this is naturally captured
using multiple plans that respond to a given event. Such abstract repair plans are a
way to practically capture the large number of concrete ways of fixing inconsistencies
[22]. In addition, new (or alternative) ways to resolve an inconsistency can be added
via additional plans without changing the existing structure. Although we do not use
the full capabilities of a BDI agent, these two properties of change propagation make
the use of BDI plans natural and, we believe, well-motivated.
We have developed an agent-based framework to propagate changes by fixing inconsistencies in design models. In our framework, consistency relationships are conditions that are required in order for a design model to be valid, which can include
syntactic correctness, but also coherence between diagrams, and possibly even a formalisation of established and accepted design practices [23]. Such consistency constraints
are specified on Unified Modelling Language (UML) [24] metamodels using the Object
Constraint Language (OCL) [25].
In developing our framework and implementing it we have had to address two key
issues:
1. Where do the BDI repair options come from?
2. How do we select amongst the relevant repair options?
The first issue we addressed by developing a repair plan generator that is able to
automatically produce repair plans for a given constraint. By avoiding having to develop
repair plans by hand we reduce the amount of effort, and are able to ensure that the
2 Consistency relationships refer to some relationship that should hold between model elements.
collection of repair plans is complete and correct. The second issue we addressed by
defining a suitable notion of repair plan cost that incorporates both conflict between
plans, and synergies between plans. We then developed an algorithm, based on this
notion of cost, that finds cheapest options and proposes them to the user.
Our framework has been implemented as the Change Propagation Assistant (CPA),
a tool that demonstrates the applicability of our approach. The tool is integrated
with the Prometheus Design Tool (PDT)³ [26], a modelling tool that supports the
Prometheus methodology for building agent-based systems.
Although the CPA tool, a prototypical implementation of our change propagation
framework, is specific to the Prometheus methodology, the framework is generic in that
it can be applied to a range of design types. As will be seen, the choice of methodologies
or design types is not crucial: we describe a change propagation framework which is
generic, and which could be used with other methodologies, and with other design
artefacts. In fact, we have performed a case study to show how our approach can
effectively support change propagation in object-oriented UML design models [27]. In
addition, recent work [28] has extended the framework to support change propagation
in enterprise architectures. This paper however focuses on an agent-oriented setting,
as this is most relevant for this journal and audience.
It is important to note that unlike the choice of methodology, the use of BDI
plans to represent repair options is essential since our framework makes use of various
properties of the BDI representation, as discussed earlier, to gain advantages over more
traditional approaches to change propagation.
The remainder of this paper is structured as follows. In Section 2 we provide a
brief overview of the weather alerting case study, including brief coverage of the necessary
parts of the Prometheus methodology. Our change propagation framework is then
presented in Section 3. We then present in detail one of the key and novel components
of our change propagation framework: the repair plan generator (Section 4). Section
5 addresses the question of how to select amongst the repair options to fix a given
constraint. Section 6 describes the results of an empirical evaluation that was performed
in order to assess the efficiency, effectiveness and scalability of our change propagation
framework generally and the CPA tool in particular. We discuss related work in Section
7 before concluding and outlining our future work in Section 8.
2 Case Study: The Weather Alerting System
The change propagation framework presented in this paper has been realised in the
context of the Prometheus agent-oriented methodology. The Prometheus methodology
is a prominent agent-oriented software engineering methodology which has been used
and developed over a number of years. The methodology is complete, described in
considerable detail, and has tool support. This section presents the essential relevant
parts of the methodology in the context of a case study of a weather alerting system [29].
In order to keep this paper focussed we keep coverage of Prometheus to a minimum.
For further details we refer the reader to [30].
The case study that we use is a simplified version of an actual application developed using Prometheus and reported in [29]. The simplified version we use is taken from
the work in [31] and from teaching materials developed by the RMIT Agent Group⁴,
3 http://www.cs.rmit.edu.au/agents/pdt
4 http://www.cs.rmit.edu.au/agents
with some modifications by ourselves. The weather alerting system aims to support
forecasters by alerting them to a range of situations, such as when a discrepancy develops between a previously-issued forecast, and the subsequently observed weather
conditions.
The remainder of this section describes the design of the weather alerting application, as captured using (some of) Prometheus’ models.
As part of the Prometheus system specification phase the roles in the system are
identified by grouping related goals, and this information is captured in a role diagram.
This diagram is used to capture the roles, and their percepts, actions and goals.
Fig. 1 A role diagram for a weather alerting system
An example role diagram for the weather alerting system is depicted in Figure
1. There are six different roles in the system: the “Manage Subscription” role is for
storing active subscriptions and delivering alerts to the relevant subscribers; the “User
Interaction” role is for receiving change subscription requests from the user, displaying messages such as alerts, and confirming subscription updates; the “Manage TAF⁵
Data” and “Manage AWS⁶ Data” roles are for managing each of these data types
available in the system; the “Check Discrepancies” role is responsible for detecting discrepancies between AWS and TAF data; and the “Filter Alerts” role is for combining
alerts for a given GUI which occur almost at the same time, determining whether to
deliver an alert and controlling the frequency of alerts sent to the user.
Agent types are derived as groups of one or more roles. The choice of grouping
is guided by considerations of coupling and cohesion. Figure 2 shows how roles are
grouped into agents in the weather alerting system. In this case, each role is assigned
to an agent except that the “Manage Subscription” and “Filter Alerts” roles are played
by the “Alerter” agent.
The way in which the agents communicate is captured in a system overview diagram, which shows how the system as a whole is structured. This type of diagram
5 TAFs stand for terminal aerodrome forecasts, highly abbreviated forecasts of weather
around airports intended for pilots [29].
6 AWSs stand for automated weather stations.
Fig. 2 An agent role grouping diagram for a weather alerting system
Fig. 3 System overview diagram for a weather alerting system
shows the agent types, the communication links between them, and data. It also shows
the system’s boundary and how it interacts with its environment in terms of actions,
percepts, and external data. It is one of the key outcomes of Prometheus’ architectural
design phase, since it provides a high level map of the system’s structure.
For example, Figure 3⁷ shows the system overview diagram for the weather alerting system. It shows how the five agent types in the system interact with each other.
“TAFManager” and “AWSManager” agents process “TAF” and “AWS” percepts respectively, and store the relevant data which can be accessed by the “Discrepancy”
agent. This agent sends “AlertDiscrepancy” messages to the “Alerter” agent, which
uses the “SubscriptionStore” data to work out which GUI agents should receive discrepancy alerts. A “GUI” agent can also register for alerts by sending a “SubscribeChange”
message to the “Alerter” agent.
The detailed design phase of Prometheus is concerned with defining the internal
structure of each agent and how it will accomplish its goals within the overall system.
The resulting detailed design is captured using agent overview diagrams and capability
overview diagrams. These capture the structure of the capabilities, sub-capabilities,
7 An arrow from an agent to a data means that the data is written by the agent whilst an
arrow from a data to an agent means that the data is read by the agent. An arrow from an
agent to a message implies that the agent sends the message whilst an arrow from a message
to an agent implies that the agent receives the message.
plans, events and data within the agent. For example, Figure 4⁸ shows the internals of
the “Discrepancy” agent, which contains several plans to achieve its goal of detecting
discrepancies. The “HandleAwsDataPlan” plan is triggered by an “AWSData” message, sent by the “AWSManager” to inform it that a new “AWS” percept has just arrived. This
plan then retrieves TAF and AWS data from “TAFDataStore” and “AWSDataStore”
respectively and processes them. It may also request a new AWS by sending a “RequestNewAWS” message to the “AWSManager” agent. Discrepancy detection for each data
type and sending alerts to the “Alerter” agent are done by “CheckTempDiscrepancy”
and “CheckPressDiscrepancy” plans.
Fig. 4 Agent overview diagram for the “Discrepancy” agent
Most of the Prometheus methodology does not assume any particular agent architecture. However, the lowest layer of plans and events does assume that the target agent
architecture is plan-based (which is the case for BDI agent platforms). This could easily
be exchanged for an alternative architecture if desired, but a detailed design which is
close to code must make some assumptions about the implementation architecture.
3 Change Propagation Framework
This section briefly describes our agent-based change propagation framework and its
components, including plan representation. The framework provides a “change propagation assistant” that helps a designer by suggesting additional (secondary) changes
once primary changes have been made. This framework is generic in that it can be
applied to various software engineering methodologies, and we have applied it to both
UML [32] and Prometheus [30].
The four key data items⁹ we deal with are a meta-model, a collection of well-formedness constraints, an application design model, and a collection of repair plans.
The overall process is as follows (see Figure 5; a code sketch of the runtime loop follows the figure):
8 A dashed arrow from a message to a plan implies the message triggers the plan.
9 A fifth data item is the basic costs of actions, which is discussed in Section 5.1.
1. At design time the repair plans are automatically generated from the constraints
and meta-model (see Section 4).
2. At runtime (shaded box in Figure 5) we check whether the constraints hold in the
design model.
3. We use the repair plans to generate plan instances (i.e. repair options) for the
violated constraints.
4. We calculate the cost of the different repair plan instances (see Section 5).
5. We select a repair plan instance by picking the single cheapest, if it exists, or by
asking the user.
6. The selected repair plan instance is executed, and it updates the application design
model.
Fig. 5 Change propagation framework
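To make the runtime portion of this process concrete, the following sketch renders steps 2–6 as a simple control loop. It is illustrative only: the helper names (holds, triggered_by, instances, cost, execute, ask_user) are hypothetical stand-ins for the components described in Sections 4 and 5, not the actual CPA implementation.

```python
# Illustrative sketch of the runtime loop (steps 2-6 of Figure 5).
# All helpers are hypothetical placeholders for framework components.
def propagate_changes(model, constraints, plan_library, ask_user):
    while True:
        violated = [c for c in constraints if not c.holds(model)]   # step 2
        if not violated:
            return  # all constraints hold: change propagation is complete
        constraint = violated[0]
        # Step 3: instantiate every repair plan type triggered by the event
        # "make this constraint true"; one instance per solution of the
        # plan type's context condition.
        instances = [inst
                     for plan_type in plan_library.triggered_by(constraint)
                     for inst in plan_type.instances(model)]
        if not instances:
            raise RuntimeError(f"no repair option for {constraint}")
        # Step 4: cost each instance (the cost notion of Section 5 accounts
        # for cascades and synergies between constraints).
        instances.sort(key=lambda inst: inst.cost(model))
        # Step 5: pick the single cheapest instance, or ask the user.
        cheapest = [i for i in instances
                    if i.cost(model) == instances[0].cost(model)]
        chosen = cheapest[0] if len(cheapest) == 1 else ask_user(cheapest)
        # Step 6: execute the chosen plan instance, updating the model.
        chosen.execute(model)
```

Note that the loop re-checks all constraints after each executed repair, which is how secondary changes can themselves trigger further repairs.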
We now briefly describe each of the four key data items. A model describes a system
using a well-defined (modelling) language, which has clear syntax and semantics suitable
for both human and computer interpretation [16]. For instance, a UML model is written
in UML, which contains concepts such as classes, methods, objects, etc. On the other
hand, a Prometheus model is written in a language that contains concepts specific to
agents such as goals, plans, events, etc. Metamodelling is a widely used mechanism to
define such a language [33]. A metamodel is also a model which describes the abstract
syntax of a modelling language, i.e. a definition of all the concepts and the relationships
existing between concepts that can be used in that language. A model which conforms
to its metamodel is considered a well-formed model.
A diagram provides a certain view of the model elements in a model. For instance,
a UML class diagram depicts a structural view of a model whereas a sequence diagram
represents a dynamic view of the same model. Different views of a model are all written
in the same language. The multi-view approach is widely used in software engineering
for several reasons [24, 34]. A complex system usually has many aspects such as how
its components are structured, how they interact with each other or how data flows
and is processed. In addition, different stakeholders may have different viewpoints of
the system, e.g. the customer is more interested in the system’s functionalities whereas
the focus of a designer might be the internal structure of the system. As a result, no
single diagram is sufficient to represent the whole system’s model and consequently a
range of different views is needed to provide multiple perspectives of the system under
development.
It is very important that each view of a model be both syntactically and semantically consistent [35]. Syntactic consistency ensures that a model’s view conforms
to the model’s abstract syntax, i.e. a metamodel. On the other hand, semantic consistency requires different views of a model to be semantically compatible (i.e. coherence).
For instance, the message calling direction in a UML sequence diagram must match the
class association direction in a class diagram. Or, for a Prometheus design such as the
Weather Alerting System described in Section 2, when the system overview diagram
(Figure 3) indicates that a given agent (e.g. “AWSManager”) receives a certain percept
(e.g. “AWS”), then the agent should have plans for dealing with the percept. Similarly,
when a role (e.g. “Manage TAF Data”) handles a certain percept, then the agent that
performs that role (in this case “TAFManager”) should also handle that percept.
Such consistency requirements upon a model are often expressed using its metamodel and a set of constraints that specify conditions that a well-formed and consistent
model should satisfy. Constraints may describe syntactic and semantic relationships
between model elements. They may also be used to prescribe coherence relationships
between different views of a model, i.e. intra-model or horizontal consistency as defined
in [35]. Our approach is also applicable to constraints that express consistency between
different models (referred to as inter-model or vertical consistency by Spanoudakis &
Zisman [35]) provided that those models are defined based on the same metamodel.
In addition, constraints can be used to impose good practices or industry standards
on designers, e.g. a constraint requiring that all constructors of a class are declared
private and that the class provides static creation operations (factory pattern) [36]. Finally, constraints may be used to describe specific requirements related to a particular
domain, e.g. there could be a constraint that all agents in the system need to subscribe
to a particular agent.
We use the Object Constraint Language (OCL) [25] to specify constraints. OCL is
part of the UML standard and is used to specify invariants, pre-conditions, post-conditions and other kinds of constraints imposed on elements in UML models. In
the OCL notation “self ” denotes the context node (in the case of constraint 1, below,
a Percept) to which the constraints have been attached. An access pattern such as
“self.agent” indicates the result of following the association between a percept and an
agent (in the metamodel), which is, in this case, a collection of agents which handle
the given percept. OCL also has operations on collections such as “SE→includes(x)”
stating that a collection SE must contain an entity x , or “SE→exists(c)” specifying
that a certain condition c must hold for at least one element of SE , or “SE→forAll(c)”
specifying that c must hold for all elements of SE . More detailed information on OCL
can be found in [25]. One of the strengths of OCL is that it is carefully designed to be
both formal and simple, i.e. its syntax is very straightforward. Therefore, expressing
design constraints in OCL is a relatively simple task. In addition, note that many,
possibly even most, constraints can be reused between projects, which means that
defining constraints is done once for a given methodology, not for each project. For
instance the UML standard defines a range of constraints that well-formed UML models
must satisfy. The constraints for Prometheus are almost entirely generic (i.e. non-domain-specific) and can be re-used between projects.
For example, the following constraint (with regard to the metamodel in Figure 6¹⁰),
states that: considering the set of agents that handle a given percept (self.agent), for
each of the agents (a) if we consider the plans of that agent (a.plan), then at least one
of these plans (pl ) must include the current percept (self ) in the list of percepts that
it handles (pl.percept).
Fig. 6 Prometheus Metamodel (Excerpt)
Constraint 1 Any agent that handles a percept must contain at least one plan that is
able to handle the percept.
Context Percept inv:
self.agent→forAll(a : Agent | a.plan→exists(pl : Plan | pl.percept→includes(self)))
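To make the semantics of this constraint concrete, here is a minimal sketch of how it could be checked over a toy model. The dictionaries handles, has_plan and plan_percepts are our own hypothetical encodings of the metamodel associations navigated by self.agent, a.plan and pl.percept; they are not part of the framework.

```python
# Hypothetical check of Constraint 1 over a toy model.
def constraint1_holds(percept, handles, has_plan, plan_percepts):
    """self.agent->forAll(a | a.plan->exists(pl | pl.percept->includes(self)))"""
    return all(
        any(percept in plan_percepts.get(pl, set())
            for pl in has_plan.get(agent, set()))
        for agent in handles.get(percept, set())
    )

# "GUI" handles "ChangeSubscription" but no plan handles it: violated.
handles = {"ChangeSubscription": {"GUI"}}
has_plan = {"GUI": {"DisplayAlertsPlan"}}
plan_percepts = {"DisplayAlertsPlan": set()}
print(constraint1_holds("ChangeSubscription", handles, has_plan, plan_percepts))
# prints False
```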
We have briefly described models, meta-models, and constraints. We now consider
the fourth key data item: repair plans. In our framework a goal to repair a violated
constraint (an inconsistency) is represented as an event and the ways of fixing the
violated constraint are represented as repair plans. These repair plans are represented
using a BDI representation, namely event-triggered plans, where a given event may have
a number of plans that it could trigger, and where each plan also has a context condition
that specifies whether it is applicable in a given situation. Our concrete representation
for plans (described in Section 4.1) is an extension of AgentSpeak(L) [37].
10 This figure shows that a percept is received by a number of agents; that an agent has a
number of plans; and that a plan can handle a number of percepts.
For example, consider the constraint “constr(A, P)”: an agent A should have at least
one plan that handles a given percept P (this constraint is a sub-constraint of constraint
1). As can be seen in Figure 7, there are several different plans that could be used to
repair the constraint, all of which are triggered by the same “Repairing constr(A, P)” event.
The first repair plan, which is only applicable if agent A has an existing plan Pl , makes
Pl handle percept P . The second repair plan associates agent A with an existing plan
Pl and makes Pl handle percept P . The association is made under the premise that
Pl is not already associated with A. The third repair plan is similar to the second
repair plan except that it creates a new plan instead of using an existing one. It is
important to note that those repair plans are plan types. At run time, each plan type
can generate multiple plan instances depending on its context condition. For instance,
the first repair plan in Figure 7 can generate various repair plan instances, one for each
of the existing plans of agent A. Note that these three repair plans are not meant to
be exhaustive: there are methods for repairing the constraint other than these three.
Section 4.4 shows the complete set of plans for repairing this constraint.
Plan #1: Use existing plans of the agent
Triggering event: Repairing constr (A, P )
Context condition: Pl is a plan of agent A
Plan body:
1. Connect plan Pl and percept P .
Plan #2: Use existing plans not belonging to the agent
Triggering event: Repairing constr (A, P )
Context condition: Pl is a plan not associated with agent A
Plan body:
1. Connect agent A and Pl.
2. Connect plan Pl and percept P .
Plan #3: Create a new plan
Triggering event: Repairing constr (A, P )
Context condition: None
Plan body:
1. Create plan Pl.
2. Connect agent A and Pl.
3. Connect plan Pl and percept P .
Fig. 7 Example of repair plans for fixing constr (A, P )
Suppose that in the Weather Alerting System the “GUI” agent is meant to handle the “ChangeSubscription” percept, but does not have any plans that handle this
percept. In this case the constraint described above is violated, and the example plans
in Figure 7 offer the following options to repair this violation: Plan #1: connect an
existing plan of the “GUI” agent to the “ChangeSubscription” percept; Plan #2: select
an existing plan not already in the “GUI” agent, add it to the “GUI” agent, and make
it handle the percept; Plan #3: create a new plan, add it to the “GUI” agent, and
make it handle the “ChangeSubscription” percept.
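Continuing this example, the sketch below shows how a plan type yields one plan instance per solution of its context condition, using the same hypothetical model encoding as before; the tuples stand in for instantiated plan bodies.

```python
# Sketch: instances of Plan #1 for the violated constr("GUI", "ChangeSubscription").
# One plan instance is produced per solution of "Pl is a plan of agent A".
def instances_of_plan1(agent, percept, has_plan):
    for pl in sorted(has_plan.get(agent, set())):
        yield ("connect", pl, percept)   # plan body: connect Pl and P

has_plan = {"GUI": {"DisplayAlertsPlan", "ConfirmUpdatePlan"}}
for option in instances_of_plan1("GUI", "ChangeSubscription", has_plan):
    print(option)
# ('connect', 'ConfirmUpdatePlan', 'ChangeSubscription')
# ('connect', 'DisplayAlertsPlan', 'ChangeSubscription')
```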
3.1 Propagating Changes Involving Timing
Before we discuss how repair plans are generated, we consider change propagation
in relation to timing issues. Agent systems are distributed and concurrent, and it is
entirely possible for changes that are made to an agent system to result in timing
problems, either in the form of certain deadlines not being met, or in the form of a
desired sequence of events no longer being followed.
The reasoning that can be done at the design level depends, obviously, on the
level of information that is captured. If the design includes information on deadlines,
then it may be possible to perform temporal reasoning to detect such cases. However,
this is likely to be beyond the scope of design-level change propagation for two reasons.
Firstly, AOSE design notations may not include information on deadlines, and secondly,
in addition to knowing about deadlines, the temporal reasoner would also need detailed
information about the timing of the internal behaviour of agents, which may not be
available at the design level. Since the change propagation process doesn’t run the
system (it cannot: it operates with designs), it could be argued that checking for
timing and sequencing is in fact better done by observing an implemented system
and detecting differences between the design (e.g. interaction protocols) and the actual
behaviour [38, 39].
However, there are some forms of timing that are typically captured at the design
level, such as the ordering of messages between the agents (expressed as interaction
protocols). Reasoning about changes with respect to interaction protocols can be done
at the design level. We now briefly give an example to illustrate the sort of timing-related reasoning that can be done within the change propagation framework described
earlier in this section.
Suppose that the weather alerting agent system design described in Section 2 included the protocol of Figure 8. This protocol shows that an initial percept (“AWS”)
received by the “AWSManager” agent is followed by an “AWSdata” message (from
the “AWSManager” agent to the “Discrepancy” agent). This may be followed by some
number of requests for more AWS data (“RequestNewAWS”), each with an associated
response (“AWSdata”). Finally, there may be an alert, which involves an “AlertDiscrepancy” message from the “Discrepancy” agent to the “Alerter” agent, and a subsequent
“NewAlerts” message from the “Alerter” agent to the “GUI” agent¹¹.
Fig. 8 Interaction Protocol in Weather Alerting Application
11 This protocol does not allow for alert filtering or merging.
It is possible within the change propagation framework that we have described to
define constraints that involve message sequences in interaction protocols. We outline
two example constraints below (but do not give formalisation in OCL).
One possible constraint states that whenever an interaction protocol includes a
message M sent from agent S to agent R, then: agent S must be able to send the
message M , agent R must be able to receive message M , agent S must have a plan
that sends message M , and agent R must have a plan that receives message M .
A more complex constraint might state that if the protocol requires that message
M1 (received by agent R) is always followed by message M2 (sent by the same agent
R), then: agent R must have at least one plan P that receives M1 , such that one of the
consequences of P is to send M2. The notion of consequences can be defined recursively¹²: sending a message M is a consequence of a plan P if either P sends M, or if P
sends a message M′ which is handled by a plan P′ where sending M is a consequence
of P′.
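This recursive definition translates directly into code. The following sketch assumes two hypothetical lookup tables, sends (plan to the messages it sends) and handled_by (message to the plans that handle it), and adds a visited-set guard for cyclic message chains, which the textual definition leaves implicit.

```python
# Sketch of the recursive "consequence" relation.
def is_consequence(plan, message, sends, handled_by, seen=None):
    seen = set() if seen is None else seen
    if plan in seen:                       # guard against cyclic chains
        return False
    seen.add(plan)
    if message in sends.get(plan, set()):  # P sends M directly
        return True
    return any(                            # or P sends M' handled by P' ...
        is_consequence(p2, message, sends, handled_by, seen)
        for m2 in sends.get(plan, set())
        for p2 in handled_by.get(m2, set())
    )

# e.g. HandleAwsDataPlan posts NewAWSDataNeeded, which ObtainNewAWSData
# handles by sending RequestNewAWS:
sends = {"HandleAwsDataPlan": {"NewAWSDataNeeded"},
         "ObtainNewAWSData": {"RequestNewAWS"}}
handled_by = {"NewAWSDataNeeded": {"ObtainNewAWSData"}}
print(is_consequence("HandleAwsDataPlan", "RequestNewAWS", sends, handled_by))
# prints True
```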
Given these constraints, the change propagation process is able to assist with a
range of inconsistencies that relate to the order of events in the interaction protocol.
Suppose the Loop in the protocol has just been added, and we have not yet added
the “ObtainNewAWSData” plan (or the associated event “NewAWSDataNeeded” and
message “RequestNewAWS”) to the “Discrepancy” agent (Figure 4). Then the first
constraint outlined above would be violated, in other words, the interaction protocol
indicates that the “Discrepancy” agent should be able to send the “RequestNewAWS”
message, but there are no plans which do so. The repair plans for this violated constraint
would include an option that added a plan to the “Discrepancy” agent that sends the
“RequestNewAWS” message.
At this point the first constraint is satisfied, but the second constraint is violated:
although the “Discrepancy” agent is able to send the “RequestNewAWS” message, it
does not do so in response to receiving an “AWSdata” message, as required by the
interaction protocol. Thus the second constraint is violated. The options for repairing
this constraint include an option to create an (internal) event that links the “HandleAwsDataPlan” and “ObtainNewAWSData” plans.
4 Repair Plan Generation
There can be a substantial number of consistency constraints in the context of design
models. For instance, we have identified nearly 50 consistency constraints that can
be imposed on Prometheus design models. In UML 1.4.2 [24], the number of well-formedness constraints is over 100. A consequence of large numbers of constraints
is an even larger number of repair plans. In these cases, hand-crafting repair plans
for all constraints becomes a labour intensive task. In addition, it is difficult for the
repair administrator to know if the set of repair plans which they create is of high
quality and not faulty, i.e. whether, in practice, the plans are complete and correct.
Therefore, we have developed a translation schema that takes constraints, expressed
as OCL invariants, as input and generates repair plans that can be used to correct
constraint violations. In this section we discuss the repair plan generation in detail. We
first describe the abstract syntax of repair plans, and then present the plan generation
12 This definition does not cater for processes that involve actions and percepts: performing
an action that results in a subsequent percept is a form of consequence that isn’t captured in
Prometheus designs.
P ::= E [: C] ← B
E ::= Ct | Cf | +(x, SE) | −(x, SE)
C ::= C ∨ C | C ∧ C | ¬C | ∀x • C | ∃x • C | Prop | Ask(userVal, guidance)
B ::= RepairAction | !E | B1 ; B2 | if C then B | for each x in SE B | true
RepairAction ::= Create e : t | Delete e | Connect e1 and e2 (w.r.t. r) | Disconnect e1 and e2 (w.r.t. r) | Change attr of e to val

Fig. 9 Repair plan abstract syntax
rules. Finally, we explain and prove the correctness and completeness of the translation
schema.
4.1 Repair plan syntax
The syntax for repair plans (see Figure 9¹³) is formally defined based on AgentSpeak(L)
[37]. Each repair plan, P , is of the form E : C ← B where E is the triggering event;
C is an optional “context condition” (Boolean formula) that specifies when the plan
should be applicable¹⁴; and B is the plan body, which can contain sequences (B1 ; B2 )
and events which will trigger further plans (written as !E ). We extend AgentSpeak(L)
by also allowing the plan body to contain primitive repair actions, and to contain
conditionals and loops. We also cover the case where the plan does nothing, in which
case its plan body consists of the single construct true. In addition, the context condition
can contain an action Ask (userVal , guidance) indicating that the user should be asked
to provide (following the hints provided in the guidance) a value for userVal . For
example, the user may be asked to provide a value for a new attribute. Furthermore,
we define an event which can be: making a constraint true (Ct ), making a constraint
false (Cf ), adding an entity to a derived set (+(x , SE )), or removing it from a derived
set (−(x , SE )). Addition and deletion for derived sets (e.g. union sets) are represented
as events because they do not usually involve a primitive action. Rather, they require
a number of different actions, for example two deletions might be needed in the case
of removing an entity from the union of two sets.
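One way to make this grammar concrete is to encode it as data types. The dataclass sketch below is an illustrative encoding of ours, not the framework's internal representation; conditions are left as opaque strings for brevity.

```python
# Illustrative encoding of the repair plan abstract syntax of Figure 9.
from dataclasses import dataclass
from typing import Optional, Sequence, Union

@dataclass
class Event:            # Ct, Cf, +(x, SE) or -(x, SE)
    kind: str           # "make_true" | "make_false" | "add" | "remove"
    payload: tuple

@dataclass
class RepairAction:     # Create / Delete / Connect / Disconnect / Change
    kind: str
    args: tuple

# A body step is a primitive action, a posted sub-event (!E),
# or a nested construct (loop or conditional).
Step = Union[RepairAction, Event, "ForEach", "If"]

@dataclass
class ForEach:          # for each x in SE B
    var: str
    set_expr: str
    body: Sequence[Step]

@dataclass
class If:               # if C then B
    condition: str
    then: Sequence[Step]

@dataclass
class RepairPlan:       # E [: C] <- B
    trigger: Event
    context: Optional[str]    # context condition; None if absent
    body: Sequence[Step]      # an empty body stands for "true"
```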
A plan’s body contains repair actions that have effects on a model. We abstractly
view a (design) model as consisting of a set of model entities and a set of relationships
between entities. Model entities also have attributes. We consider attributes that have
primitive types instead of referencing types, which are regarded as relationships. This is
because we are working at the design level (not at the level of programming languages
such as Java). The OCL supports four basic primitive types (Boolean, integer, real and
string), and these are taken as the attribute types. Other types (e.g. List) are modeled
as relationships in the design. Each model entity is an instance of a metamodel element,
which is also referred to as the entity’s type. For instance, valid types in a Prometheus
design model can be agent, plan, event, etc. A model entity is defined by a unique
13 “Prop” denotes a primitive condition such as checking whether x > y or whether x ∈ SE.
14 In fact when there are multiple solutions to the context condition, each solution generates a new plan instance. For example, if the context condition is x ∈ {1, 2} then there will be two plan instances.
identifier (entity-id) and a type (entity-type). The set of entity-id and entity-type pairs
in a design model is denoted as entities. For example, a model containing an agent and
a plan has entities = {(e12, Agent), (e14, Plan)} where e12 and e14 are the unique
identifiers of the agent and the plan respectively. An entity can only have a single type.
A design model also contains a set of relationships between entities. It is noted that
there can be more than one relationship between two entities. For example, between an
agent and a message, in Prometheus, there can be three different types of relationships:
an agent sends a message, an agent receives a message, and an agent owns a message.
In addition, a relationship may involve more than two entities, i.e. n-ary relationships.
However, n-ary relationships are much less common than binary relationships, and
we can break an n-ary relationship into multiple binary relationships. Therefore, our
focus is on binary relationships. We formally define a relationship as a triple: entity ID
(source), relation ID, and entity ID (destination) and relationships is the set of those
triples. For example, a model with an agent which owns a plan has relationships =
{(e12, r1, e14)} where r1 is the unique identifier for the relationship between an agent
and its plan. We also formally represent the value of an entity’s attribute as a value
function attrValues from entity ID and attribute ID to a single value (e.g. integer,
string), e.g. the name attribute of the entity agent1 has the value “Monitor Agent”.
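Rendered directly as data, with names of our own choosing that mirror the definitions above, this example model state looks as follows.

```python
# The formal model state as plain Python data (illustrative only).
entities = {("e12", "Agent"), ("e14", "Plan")}       # (entity-id, entity-type)
relationships = {("e12", "r1", "e14")}               # (source, relation-id, dest)
attrValues = {("e12", "name"): "Monitor Agent"}      # (entity-id, attr-id) -> value
```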
There are five types of primitive actions that are used to update the model: creation and deletion of entities, adding and removing relationships between entities (corresponding to connection and disconnection respectively), and modifying the values of attributes of entities. Each of them is formally defined below, and a code sketch follows the list:
– A creation has two inputs: a unique identifier of a newly created entity (e) and its
type (t). This action has a precondition that the new e should be unique. The postcondition is that the set of entities is updated with the new entity.
– Deletion also requires two inputs: the entity to be deleted and its type. The precondition states that this entity to be deleted should exist. The postcondition is
that the set of entities is updated excluding the entity to be removed. All attributes
belonging to the entity to be removed are also deleted. In addition, the set of relationships is updated to exclude any relationships involving the entity to be deleted.
– A connection has three inputs corresponding to two entities to be connected and
a relationship between them. Its precondition requires that the two entities should
exist. Its postcondition requires the set of relationships to be updated with the new
relationship.
– A disconnection also has three inputs, corresponding to the two entities to be disconnected and a relationship between them. A disconnection is equivalent to removing a relationship from the set of current relationships in a design model. Its precondition
requires that the relationship should exist.
– Modification involves an entity (e), its attribute (attr ) and a new value (val ). It has
a precondition requiring that the entity should exist and a postcondition requiring
the entity’s value for attribute attr be changed to val .
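A compact sketch of these five actions, with their pre- and postconditions expressed as assertions over the sets just introduced, is given below. This is one possible reading of the definitions (the model is passed as a triple of entities, relationships and attribute values); a real implementation would report violations rather than assert.

```python
# Sketch of the five primitive repair actions. model = (entities, rels, attrs)
# where entities is a set of (id, type) pairs, rels a set of
# (source, relation, dest) triples, and attrs a dict (id, attr) -> value.

def create(model, e, t):
    entities, rels, attrs = model
    assert not any(eid == e for eid, _ in entities)    # precondition: e is fresh
    entities.add((e, t))                               # postcondition: e added

def delete(model, e, t):
    entities, rels, attrs = model
    assert (e, t) in entities                          # precondition: e exists
    entities.discard((e, t))
    for key in [k for k in attrs if k[0] == e]:        # drop e's attributes
        del attrs[key]
    for triple in [r for r in rels if e in (r[0], r[2])]:
        rels.discard(triple)                           # drop e's relationships

def connect(model, e1, r, e2):
    entities, rels, attrs = model
    assert any(eid == e1 for eid, _ in entities)       # both entities must exist
    assert any(eid == e2 for eid, _ in entities)
    rels.add((e1, r, e2))

def disconnect(model, e1, r, e2):
    entities, rels, attrs = model
    assert (e1, r, e2) in rels                         # relationship must exist
    rels.discard((e1, r, e2))

def change(model, e, attr, val):
    entities, rels, attrs = model
    assert any(eid == e for eid, _ in entities)        # entity must exist
    attrs[(e, attr)] = val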
4.2 Automatic repair plan generation: issues and solutions
The core part of the repair plan generator is a translation that takes an OCL constraint
(and metamodel) as input and generates repair plans that can be used to correct
constraint violations. Such a translation can be developed by considering all the possible
ways in which a constraint can be false, and hence all the possible ways in which it
can be made true. However, there may be a large number of concrete ways of fixing a
constraint violation. In some cases this number can be infinite. For instance, consider
a constraint requiring a particular set SE to be non-empty. Assume that SE is empty,
then the various ways of fixing this constraint are adding 1 element, adding 2 elements,
adding 3 elements, and so on. Another example is a constraint requiring that the age
(attribute) of a person (model entity) has to be greater than 18. There is also an infinite
number of ways to fix this constraint, each of which corresponds to changing the age
to a number greater than 18. Such issues are due to the inherent characteristics of the first-order logic that OCL is based on.
We do two things to deal with the possibility of an infinite number of options: (1)
we do not handle the full OCL language; and (2) we use user input. There are certain
OCL constructs that we do not include, and some of these are excluded precisely
because they result in an infinite, or finite but very large, number of repair options.
For instance, we do not include in our repair plan translation schema the sum operation
that adds all elements in a set. Propositions derived from this operation (e.g. the sum
of a set being equal to a given number) can have an infinite number of equivalent
ways to repair (e.g. changing the value of a certain set members). In such a case,
user intervention would be very complicated and therefore the repair plans cannot be
automatically generated. Secondly, in certain cases we are able to reduce the number
of repair options dramatically by consulting the user, for example, a repair plan to
ensure that x ≠ y asks the user to provide a value to be used for y that is different to
the current value of x . This can be seen as a form of abstraction.
In summary, we have addressed these issues when developing the repair plan translation in the following ways:
– Generated repair plans abstractly represent certain classes of concrete ways of fixing
a constraint. For example, the plan ct : x ∈ SE ← !c1t(x) represents all the repair plans
that make constraint c true, each of which corresponds to picking an element x in
the set SE and making c1(x ) true.
– Generated repair plans are minimal in that all of their actions contribute towards
fixing a certain constraint, i.e. taking out any of the actions results in failing to fix
the constraint. For example, the generated repair plan for making an empty set non-empty adds only a single element to the set.
– We exclude certain problematic OCL constructs (e.g. sum).
– There is a type of repair plan that involves user interaction in which the user is
asked to provide an input. For example, in the above example the user is asked to
input an age value that is greater than 18.
Note that user input (e.g. providing a name for a newly created entity) is only
required when the repair plan is actually applied. That is, although there may be
many repair options, the user’s input is only required for the one repair option that is
selected and run.
4.3 Plan generation rules
In this section, we present the rules which can be used to generate repair plans for a
given consistency constraint expressed in OCL. Our rules cover a large part of OCL,
including all of the constructs that are used in the Prometheus consistency constraints.
With respect to the UML, most of the consistency constraints are (or can be, refer to
[24]) defined using those forms.
Specifically, our plan generation rules address the following OCL expressions: navigation, constraints on attributes (e.g. <, >, <>, and =), constraints on Boolean-valued
set expressions (e.g. forAll , exists), constraints on non-Boolean-valued set expressions,
Boolean connectives (e.g. and, or, xor, not etc.), addition or deletion involving derived
sets (e.g. adding an element to SE1 ∪ SE2 ), and the four basic data types. Constructs
that are not supported include the iterate and sum operations. While the former is
relatively complicated and requires future work, propositions derived from the latter (i.e. the sum operation) can have an infinite number of equivalent ways to repair
and consequently user intervention would be very complicated. Hence, they cannot
be automatically generated. Although we do not have rules for the bag and sequence
collection types, similar techniques can be used to derive repair plans for them. Additionally, since we use OCL to specify consistency constraints, i.e. invariants, we have
not covered constraints on operations or pre- and post-conditions. Fully supporting all
forms of OCL expressions is part of our future work.
It is noted that consistency constraints describe properties that must be true about
the application model. Such a constraint, however, can contain many individual clauses
(i.e. sub-constraints) that can be true or false, e.g. logical connectives such as not, xor ,
etc. Hence, we need to consider repair plans for making each possible clause true or false,
which can then be combined to form repair plans for making the top-level constraint
true. For example, repairing constraint c1 ≝ not c2 requires us to look for assignments
that make sub-constraint c2 false. Therefore, our plan generation has rules that make
a constraint true ct (under the assumption that the constraint has been violated) and
rules that make a constraint false cf (under the assumption that the constraint will
evaluate to true). We now explain the plan generation rules, for both ct and cf , for
each of the supported basic OCL propositions. The translation for this group of rules
is defined as a function P that takes an OCL constraint as input and returns a set
of specific plans that are based on the constraint form. Due to space limitations, we
present here an excerpt of the generation rules, and focus on the principles behind the
rules. For the complete rules see [40].
The basic idea behind the rules is that each constraint is analysed, and all possible
ways of repairing it are generated. This basic idea was previously proposed by Nentwich
et al. [22, 41] who derived repair actions from constraints. However, although the basic
principle of deriving repair options from constraints is the same, there are significant
differences in the details. Firstly, and most importantly, our work deals with the repair
of multiple constraints which can interact (e.g. repairing one constraint can also, as
a side-effect, repair another constraint). Secondly, we use a richer representation for
repair plans. Finally, our constraints are in a richer language, which is an industry
standard (OCL). For instance, xlinkit does not support set expressions that result in
new sets. On the contrary, OCL provides a wide range of such operations on a set such
as returning a subset of a set that contains elements for which a given constraint holds
or does not hold (select and reject), or returning a set containing all the elements that
are either in one set or the other but not in both (symmetricDifference), or returning
a union or intersection of two sets (i.e. union and intersection). OCL also provides an
if–then–else expression which is not supported in xlinkit. Furthermore, unlike xlinkit,
OCL supports not only sets but also other types such as bags and sequences.
As mentioned, the principle in generating repair plans is that for a given constraint
form we generate all methods of repairing the constraint. For example, consider a
constraint of the form ∃ x ∈ e.c(x ) which is false in a given model. In order for the
constraint to be false there cannot exist an x in e for which c(x ) is true, and to make
the constraint true we must change the model such that there exists an x in e for which
the sub-constraint c(x ) holds. This change is both necessary and sufficient. We then
consider how to generate plans by looking at all the possible ways of ensuring this. For
instance, we could either select an x that is already in e and make c(x ) true for the
selected x , or we could add an additional element x into e and then make c(x ) true for
that new element. When adding an additional element x into e we could either select
an x that is in the model, or create a new element x .
Similarly, a constraint of the form C ≝ ∀ x ∈ e.c(x) is false exactly when there
exists one or more x in e for which c(x ) does not hold. In order to make the constraint
true we need to check each x in e, and for any x for which c(x ) fails to hold we need
to fix the constraint C . This can be done for a given element x by either removing x
from e, or by making c(x ) true.
This general principle is applied to each constraint form:
1. consider what must happen for the constraint to be false;
2. then consider what is both necessary and sufficient to make the constraint true;
3. and finally consider all the possible ways in which the necessary and sufficient
change can be brought about.
In the final step, observe that the generation of plans for a sub-constraint is naturally
handled by a sub-goal: when generating plans to repair a constraint of the form ∃ x ∈
e.c(x ) the plans generated need to repair the sub-constraint c(x ) for a given x , and this
is done by a sub-goal to repair c(x ) which is achieved using plans that are generated
to repair c(x ).
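For the exists case, the three families of options can be enumerated mechanically, as in the sketch below; ('fix_c', x) is a hypothetical stand-in for the posted sub-goal of making c(x) true, and universe plays the role of the set of model elements of the appropriate type.

```python
# Sketch: repair options for a violated "exists x in e . c(x)".
def repair_options_exists(e, universe):
    options = []
    for x in sorted(e):              # (1) make c(x) true for an existing member
        options.append([("fix_c", x)])
    for x in sorted(universe - e):   # (2) add an existing element, then fix c
        options.append([("add", x), ("fix_c", x)])
    # (3) create a fresh element, add it, then fix c for it
    options.append([("create", "x_new"), ("add", "x_new"), ("fix_c", "x_new")])
    return options

print(repair_options_exists({"p1"}, {"p1", "p2"}))
# [[('fix_c', 'p1')], [('add', 'p2'), ('fix_c', 'p2')],
#  [('create', 'x_new'), ('add', 'x_new'), ('fix_c', 'x_new')]]
```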
Figures 10, 11 and 12 define the plan generation function Pt¹⁵, which takes an OCL constraint and generates a set of plans to repair the constraint. Figure 10 gives the definition of Pt for the three propositions involving a single navigation over an association, i.e. the rules for making the input constraint true. Assume that E and x are two given model entities, and E.aend navigates the association by using the opposite association end¹⁶. The first case specifies that to make the constraint c ≝ E.aend = x true there are two approaches: the first plan, which applies if E is not associated with any entity with respect to aend, is to simply connect E and x; the second plan, which applies if E is already associated with an entity y, is to disconnect E and y, and then connect E and x. The disconnection is necessary since aend is a navigation to a single entity.
Figure 11 gives generation rules for constraints on Boolean-valued set expressions. The terms SE, SE′, SE1, and SE2 denote set expressions, which can be a single set (e.g. the set of agents in the system) or a derived set which is built from another set using a navigation (e.g. S ≝ self.agent) or a set operation (e.g. S ≝ S1→select(c)). The second case (includesAll) specifies that to make the constraint c ≝ SE→includesAll(SE′) true we consider each element x in SE′ − SE, and for each such element we either
15 Analogous definitions are needed for Pf; these can be found in [40].
16 In OCL, a navigation expression may start with an object and then navigate to a connected object by referencing the latter by the role name attached to its association end.
Pt(c ≝ E.aend = x) =
  {ct : E.aend = null ← Connect E and x (w.r.t. aend),
   ct : E.aend = y ← Disconnect E and y (w.r.t. aend); Connect E and x (w.r.t. aend)}

Pt(c ≝ E.aend1.aend2 = x) =
  {ct : E.aend1 = y ← !(y.aend2 = x)t,
   ct : z.aend2 = x ← !(E.aend1 = z)t,
   ct : z ∈ Type(E.aend1) ∧ z.aend2 ≠ x ← !(E.aend1 = z)t ; !(z.aend2 = x)t,
   ct ← Create z : Type(E.aend1); !(E.aend1 = z)t ; !(z.aend2 = x)t}

Pt(c ≝ E1.aend1 = E2.aend2) =
  {ct : E2.aend2 = x ← !(E1.aend1 = x)t,
   ct : E1.aend1 = x ← !(E2.aend2 = x)t,
   ct : x ∈ Type(E1.aend1) ∧ x ≠ E1.aend1 ∧ x ≠ E2.aend2 ← !(E1.aend1 = x)t ; !(E2.aend2 = x)t,
   ct ← Create x : Type(E1.aend1); !(E1.aend1 = x)t ; !(E2.aend2 = x)t}

Fig. 10 Plan generation rules (ct) for basic propositions involving navigations to a single entity
remove it from SE′ or add it to SE. Note that addition or deletion is an event, since, in general, SE and SE′ may themselves be derived sets.
Pt(c =def SE→includes(x)) =
  { ct ← !+(x, SE) }

Pt(c =def SE→includesAll(SE′)) =
  { ct ← for each x in (SE′ − SE) !ct′(x),
    ct′(x) ← !−(x, SE′),
    ct′(x) ← !+(x, SE) }

Pt(c =def SE→excludes(x)) =
  { ct ← !−(x, SE) }

Pt(c =def SE→excludesAll(SE′)) =
  { ct ← for each x in (SE ∩ SE′) !ct′(x),
    ct′(x) ← !−(x, SE′),
    ct′(x) ← !−(x, SE) }

Pt(c =def SE→isEmpty()) =
  { ct ← for each x in SE !−(x, SE) }

Pt(c =def SE→notEmpty()) =
  { ct : x ∈ Type(SE) ← !+(x, SE),
    ct ← Create x : Type(SE) ; !+(x, SE) }

Pt(c =def SE→forAll(c1)) =
  { ct ← for each x in SE if ¬c1(x) then !ct′(x),
    ct′(x) ← !−(x, SE),
    ct′(x) ← !c1t(x) }

Pt(c =def SE→exists(c1)) =
  { ct : x ∈ SE ← !c1t(x),
    ct : x ∈ Type(SE) ∧ x ∉ SE ← !+(x, SE) ; !c1t(x),
    ct ← Create x : Type(SE) ; !+(x, SE) ; !c1t(x),
    c1t(x) : c1(x) ∨ c ← true }

Fig. 11 Plan generation rules (ct) for basic propositions involving set operations returning Boolean values
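To make the shape of these generated plan types concrete, the following Python sketch (ours, not the framework's implementation; the PlanType encoding and the string-valued conditions are illustrative assumptions) renders the includesAll rule of Figure 11 as data, where each rule yields a set of plan types consisting of a triggering event, a context condition and a body of actions and posted events.

from dataclasses import dataclass, field

@dataclass
class PlanType:
    trigger: str            # the event this plan type handles, e.g. "ct"
    context: str = "true"   # context condition (kept as an informal string)
    body: list = field(default_factory=list)  # actions and posted events

def pt_includes_all(se, se2):
    # Pt(c =def SE->includesAll(SE')): for each x in SE' - SE, either
    # remove x from SE' or add x to SE; both are events, since SE and
    # SE' may be derived sets whose updates are handled by Q+/Q- plans.
    return [
        PlanType("ct", body=[f"for each x in ({se2} - {se}): post ct'(x)"]),
        PlanType("ct'(x)", body=[f"post -(x, {se2})"]),
        PlanType("ct'(x)", body=[f"post +(x, {se})"]),
    ]

for p in pt_includes_all("a.plan", "required.plans"):  # hypothetical sets
    print(p)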
We have defined rules for basic propositions. We now consider repair plans for propositions that are related to sets and contain actions which add or remove set elements. Adding an element to, or removing it from, a basic set is a primitive model operation. However, sets can also be derived from other sets by means of navigation or of set operations supported in OCL, such as select() and reject(), and adding elements to or deleting elements from such derived sets is not a primitive action. Instead, we model addition and deletion on derived sets as events, which are handled by additional plans. For example, in order to add the element x to SE =def S→select(c) (previously denoted as the event +(x, SE)), we need to ensure both that x is in S, and that c(x) is true. The translation for this group of rules is defined by the functions Q+ and Q−, each of which takes a derived set and returns a set of repair plans for adding an element to the set or removing an element from it, respectively. Some of the generation rules for addition and deletion involving derived sets are shown in Figure 12; for the full set of rules see [40]. (In the full rule set, the function isSingle() tests whether a given navigation results in a single entity; E.aend1 = E1 indicates that E.aend1 results in a single entity E1, while E.aend1 = SE1 indicates that it leads to a set of entities SE1.)
Q+(SE =def E.aend) =
  { +(x, SE) ← Connect E and x (w.r.t. aend) }

Q−(SE =def E.aend) =
  { −(x, SE) ← Disconnect E and x (w.r.t. aend) }

Q+(SE =def S→select(c)) =
  { +(x, SE) : x ∈ S ← !ct(x),
    +(x, SE) : x ∉ S ∧ c(x) ← !+(x, S),
    +(x, SE) : x ∉ S ∧ ¬c(x) ← !+(x, S) ; !ct(x) }

Q−(SE =def S→select(c)) =
  { −(x, SE) ← !cf(x),
    −(x, SE) ← !−(x, S) }

Q+(SE =def S1→union(S2)) =
  { +(x, SE) : x ∉ S1 ← !+(x, S1),
    +(x, SE) : x ∉ S2 ← !+(x, S2) }

Q−(SE =def S1→union(S2)) =
  { −(x, SE) : x ∈ S1 ∧ x ∈ S2 ← !−(x, S1) ; !−(x, S2),
    −(x, SE) : x ∈ S1 ∧ x ∉ S2 ← !−(x, S1),
    −(x, SE) : x ∉ S1 ∧ x ∈ S2 ← !−(x, S2) }

Q+(SE =def Set(type)) =
  { +(x, SE) ← Create x : type }

Q−(SE =def Set(type)) =
  { −(x, SE) ← Delete x : type }

Fig. 12 Plan generation rules for basic propositions involving set addition and deletion
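In the same illustrative style (reusing the hypothetical PlanType class from the sketch above), the select case of Q+ can be read as three guarded plan types: adding x to S→select(c) decomposes into ensuring x ∈ S and ensuring c(x), with the context conditions selecting which requirements still need to be established.

def q_plus_select(s, c):
    # Q+(SE =def S->select(c)): to add x to SE we must end up with x in S
    # and c(x) true; the context condition picks out what is still missing.
    return [
        PlanType("+(x, SE)", context=f"x in {s}",
                 body=[f"post {c}t(x)"]),                     # only c(x) missing
        PlanType("+(x, SE)", context=f"x not in {s} and {c}(x)",
                 body=[f"post +(x, {s})"]),                   # only membership missing
        PlanType("+(x, SE)", context=f"x not in {s} and not {c}(x)",
                 body=[f"post +(x, {s})", f"post {c}t(x)"]),  # both missing
    ]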
Having defined the plan generation function P for basic propositions (and other cases, such as logical connectives, which are not shown in this paper), and the Q function for dealing with addition to (and deletion from) derived sets, we now define the R function, which combines them in order to generate a complete set of plans for repairing a constraint, including the plans needed to repair sub-constraints. The R function takes an event (e.g. making a constraint true) and returns a repair plan set (unlike the functions Pt and Pf, which take a constraint, not an event). More specifically, let R(ct) and R(cf) be the complete sets of repair plans for the events of making constraint c true and false respectively (note that ct and cf are events).

We now discuss what should be included in R(ct) and R(cf). Firstly, R(ct) should contain Pt(c) (the plans specific to the constraint's form), and similarly R(cf) contains Pf(c).
Secondly, our repair plans are structured in a hierarchical manner, i.e. a plan may post events which are handled by further plans. For example, some of the repair plans for the constraint c =def SE→exists(c1) contain an event posted to repair the sub-constraint c1. As a result, when we consider a complete set of repair plans for a specific constraint, we need to take into account the plans that handle such events. Events in our generated repair plans either fix a sub-constraint (ct and cf) or add/delete an element to/from a derived set (+(x, SE) and −(x, SE)). We use event(Pt(c)) to denote the set of events that are posted within the plans belonging to Pt(c), which may include events of the form +(x, SE) and −(x, SE).
Furthermore, if a constraint has more than one sub-constraint, then fixing one sub-constraint might conflict with the plan that repairs another sub-constraint. Therefore, in order to guarantee that the generated repair plans always correctly fix the constraint, we include some top-level plans. More specifically, R(ct) includes {fixCt : c ← true, fixCt : ¬c ← !ct ; !fixCt}. Similarly, R(cf) includes {fixCf : ¬c ← true, fixCf : c ← !cf ; !fixCf}. (In addition to including these plans, we also need to modify the generation rules so that instead of using the event !ct to trigger the repair of constraint c, we use !fixCt; for clarity we have not made this change in the description in this paper.) The intuition is that in order to repair constraint c by making it true (i.e. fixCt), we first check whether the constraint is in fact violated. If it is not violated then we do not need to fix it. If it is violated, then we fix it using the generated repair plans (!ct), and furthermore, after the repair plans have completed, we repost the event fixCt in order to check that the repair plans have in fact fixed the constraint. The aim of introducing these top-level plans is to ensure that, should sub-constraints interfere with each other, the overall plan will go on to repair the constraint. This mechanism is not aimed at dealing with interference between different top-level constraints, only with interference between sub-constraints. It is worth noting that the cost calculation mechanism (presented in Section 5) is used both to deal with interference between top-level constraints, and also to avoid choices of repair options that lead to loops (e.g. given a constraint c with sub-constraints c1 and c2, selecting a means for repairing c1 that breaks c2, and then repairing c2 using a choice of means that breaks c1).
Thus the complete set of repair plans R(ct) for making constraint c true is defined as:

R(ct) = Pt(c) ∪ {fixCt : c ← true, fixCt : ¬c ← !ct ; !fixCt} ∪ ⋃_{ev ∈ event(Pt(c))} R(ev)

and the complete set of repair plans R(cf) for making constraint c false is:

R(cf) = Pf(c) ∪ {fixCf : ¬c ← true, fixCf : c ← !cf ; !fixCf} ∪ ⋃_{ev ∈ event(Pf(c))} R(ev)
The above definition is applied to the events representing the need to make a constraint true or false. With respect to the event representing the addition of an entity e to a derived set SE, denoted +(e, SE), we have the following definition:

R(+(e, SE)) = Q+(SE) ∪ ⋃_{ev ∈ event(Q+(SE))} R(ev)

Similarly, for deletion from a derived set we have:

R(−(e, SE)) = Q−(SE) ∪ ⋃_{ev ∈ event(Q−(SE))} R(ev)
Finally, we define the function T, which takes a constraint and returns the complete repair plan set for making the constraint true or false:

T(c) = R(ct) ∪ R(cf)
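Operationally, computing R amounts to a transitive closure over posted events. A minimal sketch of this closure (ours; generate_plans stands in for applying the appropriate Pt/Pf/Q+/Q− rules to an event, and events_of for the event(·) function):

def complete_plan_set(root_event, generate_plans, events_of):
    # R(root_event): collect the plans generated for the event itself, then
    # the plans handling every event posted within plans already collected,
    # until no unseen events remain. (For constraint events, the top-level
    # fixCt/fixCf plans described above would be added to the result too.)
    plans, seen, pending = set(), set(), [root_event]
    while pending:
        ev = pending.pop()
        if ev in seen:
            continue
        seen.add(ev)
        for plan in generate_plans(ev):   # Pt/Pf for ct/cf; Q+/Q- for +/-
            plans.add(plan)
            pending.extend(events_of(plan))
    return plans

# T(c) = R(ct) ∪ R(cf):
# T = complete_plan_set(ct, gen, evs) | complete_plan_set(cf, gen, evs)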
In the next section, we illustrate how the repair plan generation rules are used to
generate repair plans for a simple example.
4.4 Example
Now let us consider a simple example of how repair plans are generated for a consistency
constraint in Prometheus (refer to the Prometheus metamodel presented in Figure 6).
The following constraint specifies that any agent that handles a percept should contain at least one plan that is triggered by the percept. For instance, in the Weather Alerting System, suppose that the “GUI” agent is meant to handle the “ChangeSubscription” percept but has not been given plans to deal with that percept; then this constraint will be violated.
Context Percept inv:
self.agent→forAll(a : Agent | a.plan→exists(pl : Plan | pl.percept→includes(self )))
We denote the above constraint as c(self), i.e. c(self) =def self.agent→forAll(a : Agent | a.plan→exists(pl : Plan | pl.percept→includes(self))), and ct(self) is the event representing the need to make c(self) true. We also define the following abbreviations:

c1(self, a) =def a.plan→exists(pl : Plan | pl.percept→includes(self))
c2(self, pl) =def pl.percept→includes(self)
Our repair plan generator produces the following repair plans for making constraint c true, since it has the form SE→forAll(c1).

ct(self) ← for each a in self.agent if ¬c1(self, a) then !ct′(self, a)   (P1)
ct′(self, a) ← !−(a, self.agent)   (P2)
ct′(self, a) ← !c1t(self, a)   (P3)

(Plan P2 posts an event to remove a from the set self.agent; plan P3 posts an event to make c1(self, a) true.)
The first plan checks whether the sub-constraint c1 holds for each agent that handles the percept (self). Note that this plan is unnecessarily complex for the Weather Alerting System, since there each percept is handled by exactly one agent (Figure 3). However, in general, a percept may be handled by more than one agent type.
Plan P2 specifies a simple fix: if an agent that handles a percept doesn’t have a
plan to deal with the percept, then one could remedy this by not having the agent
handle the percept. However, in the case of the Weather Alerting System, this fix is
not effective. Since each percept should be handled by an agent, if we, for instance,
change the “GUI” agent to no longer handle the “ChangeSubscription” percept, then
we need to find another agent to handle the percept (and then ensure that that agent
has plans to deal with the percept).
Finally, Plan P3 fixes the sub-constraint c1, i.e. it finds a way to ensure that the agent in question (e.g. “GUI”) has a plan to deal with the percept in question (e.g. “ChangeSubscription”). To fix constraint c1 we generate the following plans, since the constraint is of the form SE→exists(c2). In the rules of Figure 11, “Type(SE)” denotes the type of SE's elements; in this case SE (which is a.plan) contains plans, and therefore in P5 the context condition requires that pl be an element of the set of all plans, denoted Set(Plan).

c1t(self, a) : pl ∈ a.plan ← !c2t(self, pl)   (P4)
c1t(self, a) : pl ∈ Set(Plan) ∧ pl ∉ a.plan ← !+(pl, a.plan) ; !c2t(self, pl)   (P5)
c1t(self, a) ← Create pl : Plan ; !+(pl, a.plan) ; !c2t(self, pl)   (P6)
Similarly, for constraint c2 we generate the following plans.

c2t(self, pl) : c2(self, pl) ← true   (P7)
c2t(self, pl) : ¬c2(self, pl) ← !+(self, pl.percept)   (P8)
Finally, by applying the functions Q+ and Q− we obtain the following plans for addition to and deletion from derived sets.

−(a, self.agent) ← Disconnect self and a (w.r.t. agent)   (P9)
+(pl, a.plan) ← Connect a and pl (w.r.t. plan)   (P10)
+(self, pl.percept) ← Connect pl and self (w.r.t. percept)   (P11)
Figure 13 shows the event-plan tree for fixing constraint c using the plans produced by our repair plan generator (the tree is drawn as a directed acyclic graph in order to avoid showing a common sub-tree repeatedly). At the root of the tree is the event of making the top constraint c true; the leaves of the tree are primitive repair actions. As can be seen, the top event ct(self) can trigger only one plan, P1. From its definition, this plan can post multiple events ct′(self, a), as denoted by the * mark next to its node. Note that the events and plans in this tree are event types and plan types, not instances. A single plan type can generate multiple plan instances at runtime. For instance, plan P4, triggered by the event c1t(self, a), has a free variable pl in its context condition. At runtime, this variable is bound to different values, in this case to any entity contained in the set a.plan, and for each value it is bound to, a corresponding plan instance is generated.
By examining the event-plan tree, one is able to identify options for repairing the top constraint. For instance, the event-plan tree in Figure 13 gives the following repair options for fixing constraint c(self), informally expressed as follows:
1. For each agent handling the given percept self , if the agent does not contain at
least one plan that is triggered by the percept, we do one of the following:
(a) Remove this agent from the percept, i.e. this agent no longer handles the percept
(plans P2 and P9). As noted, this option is only really practical if the percept is
handled by more than one agent, which isn’t the case in the Weather Alerting
System.
(b) Pick out one of the plans belonging to the agent, and make it handle the percept
(P3, P4, P8 and P11). For example, if the “TAFManager” — which handles
the “TAF” percept — was lacking a plan for dealing with the “TAF” percept,
then we could select one of its existing plans and make it handle the “TAF”
percept.
(c) Pick one existing plan, add it to the agent (P3, P5, P10), and if the plan
already handles the percept then do nothing (P7), else make the plan handle
the percept (P8 and P11). For example, if the “TAFManager” was lacking a
plan for dealing with the “TAF” percept, but there was an existing suitable
plan belonging to another agent (or perhaps the plan had been defined but not
assigned to any agent), then the plan could be assigned to the “TAFManager”
agent.
(d) Create a new plan, add it to the agent, and make it handle the percept (P3,
P6, P10, P8, and P11).
Fig. 13 An event-plan tree for ct(self), presented as a directed acyclic graph: the root event ct(self) triggers plan P1, which posts ct′ events handled by P2 (leading via the event −(a, self.agent) to plan P9's Disconnect action) or P3 (leading via c1t to plans P4, P5 and P6, and in turn via c2t and the set-addition events to plans P7, P8, P10 and P11 and their Create and Connect actions).
Recall that in Figure 7 (in Section 3) we gave three repair plans for repairing this same constraint, and noted that those plans were not complete. The plans derived in this section (for the same constraint) are complete with respect to the constraint c(self) =def self.agent→forAll(a : Agent | a.plan→exists(pl : Plan | pl.percept→includes(self))). Compared with the repair options generated by the repair plan types in Figure 7, the plans in this section also include a repair plan which un-assigns the agent in question from a percept that it does not have a plan for (P2).
An observant reader might note that the generated repair plans appear to omit certain options for repairing the constraint, such as deleting the agent or the percept. Deleting the agent is not considered because it is not required: the generated repair plans include the option of removing the link between the percept and the agent (P2), and this is sufficient; there is no need to also consider deleting the agent. Deleting the percept is not considered because the constraint c(self) takes the existence of the percept as given. In other words, the plan generation process is complete with respect to the constraint c(self), but the choice of constraint assumes that the percept (self) is not subject to deletion. Specifically, we deal with a constraint of the form (e.g.) “Context Percept inv: c(self)” by repairing the constraint c(self), i.e. we do not consider the set of all percepts as being subject to change. We argue that when a design entity violates an invariant constraint on it, deleting the entity is not likely to be a desirable repair option. An alternative is to translate a constraint of the form (e.g.) “Context Percept inv: c(self)” into Set(Percept)→forAll(self : Percept | c(self)), which does consider the set of (in this case) percepts as being subject to change.
4.5 Correctness and completeness
As mentioned earlier, the most important criteria of our repair plan generator are
correctness and completeness. In this section we argue that the translation schema
proposed above possesses these two properties.
A repair plan type contains a mixture of actions and subgoals (alternatively called events). Such subgoals are achieved by executing further plans. At run time, a repair plan type is expanded into zero or more plan instances (e.g. plans P1, P2, ..., Pn) by solving the plan's context condition. When a plan instance (e.g. plan P1) has all its subgoals resolved, it is expanded into some number of action sequences (e.g. P1a, ..., P1m) by expanding each subgoal into a particular choice of plan (and doing this recursively for sub-subgoals). We now use the notion of the action sequence resulting from a plan instance to define correctness.
Definition 1 (Correct sequence of repair actions) A sequence of actions S repairs constraint C in model M iff (a) C is violated in M; (b) performing S on M yields a new model M′; and (c) C holds in M′.
The above definition generalises to a set of constraints in the obvious way.
Based on the definition of a correct sequence of repair actions, we now define a correct repair plan.
Definition 2 (Correct repair plan) A repair plan type P correctly fixes a violated
constraint C if and only if when P is instantiated and then resolved into possible action
sequences, it results only in correct sequences of actions for repairing C .
As mentioned in Section 3 (our change propagation framework) and discussed in detail in Section 5, the selection of applicable repair plans is based on a notion of cost. As a result, when repair plans are generated at compile time, we focus only on plans that have no unnecessary steps in fixing a particular constraint. Such plans are considered minimal plans, defined below. Similar to the way we define correct repair plans, we define minimal repair plans through the concept of a minimal sequence of actions.
Definition 3 (Minimal sequence of repair actions) A correct sequence of actions
S is minimal if removing actions from it always results in a sequence that no longer
repairs C in M .
Definition 4 (Minimal plan) A repair plan type P for fixing constraint C is said to be a minimal plan if and only if, when P is instantiated and resolved into possible action sequences, it results only in minimal sequences of actions for repairing C.

Now, based on the definitions of repair actions in Section 4.1, an important observation is that the preconditions of these primitive actions are quite weak. This allows us to arbitrarily reorder a sequence of actions subject to the following conditions:
1. Creation of entities must come before the addition of relationships (i.e. connecting) between those entities;
2. If the sequence of actions has redundant pairs – an action that undoes the effects of an earlier action – then the pair cannot be swapped, but it can be simplified by deleting both actions. For example, adding a relationship followed by deleting it can be replaced by simply doing nothing. Condition (2) is not needed if we assume that the sequence of actions being reordered is non-redundant, i.e. does not contain any redundant pairs.
Lemma 1 (Action sequence reordering) A non-redundant sequence of actions
S can be arbitrarily reordered, so long as creation of entities precedes relating these
entities, without affecting the overall effect of S .
The permutation of actions within a particular action sequence may yield a number of different action sequences which, according to Lemma 1, have the same effect (as long as they satisfy condition 1 above). For such action sequences, we are interested in a representative, defined as follows.
Definition 5 (Representative permutation of action sequences) A sequence of
actions S is said to be a representative permutation of all action sequences that are
derived by arbitrarily reordering it, subject to the constraint that creation of entities
precedes operations on these entities.
Given these definitions, a complete set of repair plans is defined below.
Definition 6 (Complete set of repair plans) A set of repair plan types R(Ct ) for
a constraint C is said to be complete if and only if, for any minimal (and correct)
sequence of actions S for repairing C , a representative permutation of S can be obtained
by instantiating and then resolving a plan in R(Ct ).
Based on the above definitions, the correctness and completeness of our translation
scheme for making a constraint true (i.e. R(Ct )) are expressed by the following theorem.
Note that a similar theorem can also be derived for the translation scheme for making
a constraint false (i.e. R(Cf )).
Theorem 1 For any given OCL constraint C which is satisfiable, the set of repair plans R(Ct) produced by the repair plan generator is correct and complete. That is, it generates a representative permutation for each correct and minimal action sequence, and does not generate any incorrect action sequences (in this context, plan generation means instantiation of plans followed by resolving sub-goal choices). Furthermore, all generated action sequences are minimal.

Proof: See Appendix A of [40].

A corollary is that, as long as a constraint C is satisfiable, the generated plans will result in some way of repairing the constraint, i.e. the (sub-)goal to repair C cannot fail.
5 Plan Selection
There can be multiple applicable repair plans for resolving a given inconsistency. For
instance, an inconsistency concerning a naming mismatch between a message in a
sequence diagram and the operations in the message’s receiver class can be resolved
in different ways: either changing the message’s name or changing the name of one
of the operations. Choosing between those different possible repair plans can depend
on various factors. For example, assume that the inconsistency is caused because the
designer has renamed the message so that its name matches with a corresponding
state transition. In this case, renaming an operation is possibly more preferable from
the designer’s perspective.
The above example shows that the cause of an inconsistency can play a role in determining which repair plans should be chosen. Other factors that can influence repair plan selection include user-dependent factors such as the designer's style and preferences. Formulating all of these factors is not generally feasible [42], thus a completely automated mechanism for selecting repair options is not appropriate. On the other hand, it is not desirable for our change propagation mechanism to simply give the designer all of the possible options. We therefore develop a semi-automated mechanism for repair option selection: options that are considered infeasible or costly (explained below) are automatically filtered out, and the user is presented with a set of the cheapest repair options to select from. We assume that the quality of a repair plan correlates with its cost, i.e. that cheaper plans are of higher quality. We argue that this is a reasonable assumption: given two repair plans that both repair the relevant violated constraints, it seems sensible to prefer the one that makes fewer changes to the model.
We view inconsistency resolution in the context of change propagation. This means that we have to consider the side-effects of a repair plan on other constraints. These can be negative effects, i.e. breaking a constraint, or positive effects, i.e. resolving a violated constraint. As a result, the selection of repair plans also depends on their side-effects. More specifically, side-effects can be measured in terms of costs and included in the cost of a repair plan: negative side-effects increase the cost of a repair plan whilst positive side-effects reduce it. By modelling side-effects as costs, we can also effectively eliminate infeasible cyclic repair plans, because such plans would result in an infinite cost.
Our approach to the above issues relating to plan selection is to define a suitable
notion of repair plan cost. Generally, the cost of a repair plan indicates the number of
repair actions that are needed to fix a given constraint violation. In the next section,
we discuss the cost definition of repair plans in more detail.
5.1 Cost definition
In this section, we give equations that define the cost of fixing a given constraint using repair plans (throughout this section, repair plan instances are referred to simply as repair plans, and constraint instances as constraints). In computing the cost of a given repair plan we count the number of changes (i.e. creation, deletion, connection, disconnection, or modification of model elements) that are made to the model. This notion is analogous to the edit distance on
strings, which measures the “distance” between two strings by counting the number
of edit operations required to transform one string into the other [43]. By defining the
allowable edit operations different metrics can be derived. For example, the Hamming
distance is an edit distance with the only allowable edit operation being substitution;
whereas the Levenshtein distance allows substitution, deletion and addition. In our
work we are considering edit operations on models, rather than on strings, but the
basic concept is the same. For example, if repair plan P1 involves 5 connections and
repair plan P2 involves 3 connections then we view P2 as being cheaper. But what if
repair plan P1 involves 2 connections and 2 modifications? Is its cost less than that of
plan P2 (which involves 3 connections)? One approach is to define the cost as a tuple
which counts different operations separately, and does not compare different operations.
A consequence of this approach is that the cost is no longer a single number, but a tuple
of numbers. More importantly, not all costs are comparable, so given a set of costs,
there is no longer a unique maximal cost. An alternative approach is to simply define an
“exchange rate” between different edit operation types. For instance, we could say that
for comparison purposes a creation and a connection should be treated as having the
same cost. This approach is the one that we adopt in our work. Thus we assume that
each primitive action type is assigned an exchange rate (its “basic cost”), for instance
creation may have an assigned cost of 5 and connection a cost of 3. These numbers do
not correspond to any real cost, and are simply used to compare different action types.
It is worth noting that the selection of the specific “exchange rate” values is somewhat
arbitrary, and thus we allow the user to specify them. It is also worth noting that in
traditional edit distance metrics on strings, all edit operations are given the same cost,
i.e. all are implicitly assumed to have an exchange rate of 1. We return to the question
of basic costs and how they should be assigned in Section 6.6.
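As a concrete illustration, such an exchange-rate table might be configured as follows (a sketch only: apart from the creation and connection values mentioned above, the numbers are placeholders for user-chosen settings, not values prescribed by our framework):

# Hypothetical user-configurable "exchange rates" (basic costs) per
# primitive action type; the values only serve to compare action types.
BASIC_COST = {
    "creation": 5,        # as in the example above
    "connection": 3,      # as in the example above
    "deletion": 6,        # e.g. a user who wants to penalise deletions
    "disconnection": 3,
    "modification": 2,
}

def action_cost(action_type):
    return BASIC_COST[action_type]

# A traditional string edit distance corresponds to every entry being 1.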
Let us now define some preliminary concepts and terminology.
Definition 7 (Action cost) The cost of an action is defined by a function cost that
maps an action to a natural number. cost(A) is the user-defined basic cost associated
with the repair action type A (i.e. creation, deletion, connection, disconnection, and
modification).
This provides a simple mechanism for the user to adjust the change propagation
process. For example, if the user wishes to bias the change propagation process towards
adding more information then he/she may assign lower costs to actions that create new
entities or add entities, and higher costs to actions that delete entities. Another example
is the case where the user wants a particular type of artefact (e.g. class diagrams) to
take precedence over others (e.g. statechart diagrams). He/she can assign higher costs
to actions that change class diagrams and lower costs to actions that modify statechart
diagrams (shown in Figure 5 as “basic cost values”).
A constraint that does not hold with regard to a model is said to be violated, and
can be fixed by executing a repair plan. Based on the abstract syntax of repair plans,
we can view a repair plan instance as comprising primitive repair actions and subgoals
(also referred to as events).
Definition 8 (Primitive actions of a repair plan) The primitive actions of a repair plan are defined by a function A that maps a repair plan to a set of primitive
repair actions (i.e. creation, deletion, connection, disconnection, and modification). In
other words, A(P ) returns the set of the repair actions in plan P .
The cumulative costs of the primitive repair actions in a repair plan form the plan's basic cost. Calculating the basic cost of a repair plan is straightforward: it is done by summing the costs of all actions in the plan.
Definition 9 (Basic cost) The basic cost of a plan is defined by a function basicCost
that maps a plan to a natural number. The basic cost of a plan is the sum of the costs
of all actions of the plan.
Aside from primitive actions, a repair plan also contains subgoals/events which
in most cases correspond to resolving violated sub-constraints or adding/removing
elements to/from derived sets.
Definition 10 (Subgoals of a repair plan) The subgoals of a repair plan are defined by a function G that maps a repair plan to a set of subgoals. In other words, G(P )
returns the set of subgoals of plan P .
Similar to primitive actions, the costs of subgoals in a repair plan form the subgoal
cost of the plan. Calculating a plan’s subgoal cost is more complicated than its basic
cost because one needs to work out the cost of each subgoal in the plan. We will define
the cost of a goal later.
Definition 11 (Subgoal cost) The subgoal cost of a plan is defined by a function
subGoalCost that maps a plan to a natural number. The subgoal cost of a plan is the
sum of the costs of all subgoals in the plan.
We now define the main cost of a plan in terms of the costs of its basic actions
(basicCost) and the cost of its subgoals (subGoalCost). It is noted that the main cost
is different from the cost of a repair plan which also takes into account the costs of
fixing other violated constraints – which we will discuss next.
Definition 12 (Main cost) The main cost of a plan is defined by a function mainCost that maps a plan to a natural number. The main cost of a plan is the sum of the plan's basic cost and its subgoal cost.
As previously discussed, repairing a constraint often has side-effects. Such side-effects include both the introduction of new constraint violations and the repair of existing violations. Therefore, it is very important in the context of change propagation to consider a group of constraints together when fixing a single constraint. We call this group of constraints the repair scope. When a repair plan completes fixing a constraint, the other constraints in the repair scope are checked and any violated constraints are then repaired. A global repair scope involves all constraints, whilst a local one contains constraints related to certain entities in the model.
Definition 13 (Repair scope of a repair plan) The repair scope of a repair plan is defined by a function S that maps a repair plan to a set of constraints. In other words, S(P) returns a set of constraints, called the repair scope of plan P.

Our cost calculation takes the side-effects of a plan with respect to its repair scope into account through a concept of (repair) scope cost for each repair plan. If a repair plan contributes positively towards its repair scope, e.g. does not break any constraint and/or fixes some other violated constraints, then its scope cost will be low. On the other hand, if a repair plan has a negative impact on its repair scope, e.g. breaks some other constraints, then its scope cost will be high. Hence, the scope cost of a repair plan is measured through the number of constraints that are violated after the plan's execution and the costs of fixing them. In order to calculate the scope cost of a repair plan, one needs to simulate the execution of the plan and work out which constraints in the plan's repair scope become violated and which are no longer violated.
Definition 14 ((Repair) scope cost) The (repair) scope cost of a plan is defined by a function scopeCost that maps a plan to a natural number. The scope cost is the cost of repairing all (violated) constraints in the plan's repair scope after the execution of the plan. (Since we define cost(C) = 0 if the constraint C is not violated, we simply sum the costs of all constraints in S(P).)
The (repair) scope cost is how our cost calculations account for interactions (either positive, i.e. synergistic, or negative). When we consider the cost of repairing a given constraint, the (repair) scope cost captures the effects of interactions with other constraints. We now briefly describe an example which illustrates how this works. Suppose that we have two constraints, C1 and C2, and suppose that C1 can be repaired in three different ways (plans PA, PB and PC) and that constraint C2 can be repaired in two different ways (PD and PE). Figure 14 shows the constraints, their repair options, and for each repair option its main cost (given in brackets after the plan name), i.e. the direct cost of executing the plan (including its subgoals). Note that the costs are specific to the model in which the constraint is being repaired; if we consider a different model then repairing a constraint may have a different set of costs. The costs given in the figure correspond to C1 in a starting model (not shown), i.e. we repair C1 first. For the purposes of this example we assume that the models resulting from executing PA and PC yield the same main costs for repairing constraint C2, and these are the costs shown in the figure. Finally, we assume that PB repairs C1 in such a way that the constraint C2 is also repaired at the same time, and that PE fixes C2 in such a way that the constraint C1 is broken at the same time.
Let us now consider how the cost calculation proceeds, and how interactions between the constraints are captured by the repair scope cost. We consider constraint C1 first (see Figure 15). In order to compute the overall cost of PA, PB and PC we need to sum their main costs (shown in Figure 14) and their repair scope costs.

For both PA and PC the repair scope cost is the cost of repairing C2 in the resulting model. However, since PB repairs C2, its repair scope cost is 0: once PB has finished, both constraints are repaired, and no more actions are required (hence cost = 0). Thus, for PB the total cost is 3 (main cost of 3 plus repair scope cost of 0). For PA and PC we need to know the cost of repairing constraint C2.

The cost of repairing C2 is based on the costs of PD and PE. For PD the cost is 3 (main cost of 3 plus repair scope cost of 0). For PE the main cost is 2, but since the model resulting from executing PE is such that C1 is violated, we have a non-zero repair scope cost, because we need to repair C1 (again). Note that the costs (and indeed the options available) to repair C1 in this situation may differ from those shown in Figure 14. However, if we assume that the cost to repair C1 in this situation is at least 2, then the overall cost of PE is at least 4, and thus PD will be preferred. In other words, the cost calculation has taken into account the additional cost incurred by PE breaking C1.

Considering the overall cost of repairing C1 and C2, we have that PB is in this case the preferred method, because its overall cost is the lowest (3, compared with 5 for PC). In other words, the cost calculation has taken into account the benefit of also repairing C2 as a side-effect.
Fig. 14 Example of repair scope cost: constraint C1 has repair plans PA (4), PB (3) and PC (2); constraint C2 has repair plans PD (3) and PE (2). Main costs are given in brackets after each plan name.

Fig. 15 Example of repair scope cost dealing with interactions: repairing C1 via PA leads to repairing C2 (via PD, total 4+3=7, or PE, which re-breaks C1); repairing C1 via PC likewise leads to C2 (via PD, total 2+3=5, or PE); PB (3) repairs both constraints at once.
We now define the cost of a plan in terms of its main cost and repair scope cost. Note that a repair scope is only meaningful for top-level plans, i.e. those that fix top-level constraints. As a result, when calculating the cost of a plan we need to know whether or not it is a top-level plan.
Definition 15 (Plan cost) The cost of a plan is defined by a function that maps a
plan to a natural number. If a plan is to fix a top level constraint, the plan’s cost is
equal to the sum of its main cost and its repair scope cost. If a plan is to achieve a
subgoal, the plan’s cost is equal to its main cost.
We now provide equations that define the cost of constraints and goals/events. A
goal/event can be either fixing a top-level constraint, a sub-constraint or adding/removing
elements to/from derived sets. There are usually several applicable plan instances to
repair a constraint violation or to achieve a goal.
Definition 16 (Applicable plans) The applicable repair plans of a constraint are defined by the function CP, which maps a constraint to a set of plan instances that can be used to fix the constraint. The applicable plans for achieving a goal are defined by the function GP, which maps a goal to a set of plan instances that can be used to accomplish the goal.
The best plan, which is selected for execution, is the one with minimum cost. Hence
the cost of repairing a constraint is the cost of the cheapest repair plan instance.
Definition 17 (Constraint cost) The cost of a constraint is defined by a function cost that maps a constraint to a natural number (the cost of a constraint instance is relative to a particular state of the model). The cost of fixing constraint C is equal to the cost of the best applicable repair plan instance with regard to C. If there are no applicable repair plans, the cost of C is undetermined. The cost of fixing an unviolated constraint is 0.
basicCost(P) = Σ_{A ∈ A(P)} cost(A)

subGoalCost(P) = Σ_{G ∈ G(P)} cost(G)

mainCost(P) = basicCost(P) + subGoalCost(P)

scopeCost(P) = Σ_{C ∈ S(P)} cost(C)   if P fixes a top-level constraint
             = 0                       otherwise

cost(P) = mainCost(P) + scopeCost(P)

cost(C) = 0                            if C is not violated
        = min_{P ∈ CP(C)} cost(P)      otherwise

cost(G) = min_{P ∈ GP(G)} cost(P)

Fig. 16 Cost Definitions
Similarly, we define the cost of a plan achieving a goal. It is noted that the cost of
a goal of fixing a constraint is equal to the cost of the constraint.
Definition 18 (Goal cost) The cost of a goal is defined by a function cost that maps
a goal to a natural number. The cost of achieving a goal is the cost of the cheapest
available repair plan.
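The equations of Figure 16 translate almost directly into mutually recursive functions. The sketch below (ours) ignores termination and pruning, which Section 5.4 addresses; the BASIC_COST table above, the attribute names, and the applicable_plans helper (standing in for CP and GP) are assumptions for illustration.

import math

def basic_cost(plan):                        # Definition 9
    return sum(action_cost(a) for a in plan.actions)

def subgoal_cost(plan):                      # Definition 11
    return sum(goal_cost(g) for g in plan.subgoals)

def main_cost(plan):                         # Definition 12
    return basic_cost(plan) + subgoal_cost(plan)

def scope_cost(plan):                        # Definitions 14 and 15
    if not plan.is_top_level:                # only top-level plans have a scope
        return 0
    return sum(constraint_cost(c) for c in plan.repair_scope)

def plan_cost(plan):                         # Definition 15
    return main_cost(plan) + scope_cost(plan)

def constraint_cost(c):                      # Definition 17
    if not c.is_violated():
        return 0
    plans = applicable_plans(c)              # CP(C)
    return min(map(plan_cost, plans)) if plans else math.inf  # "undetermined"

def goal_cost(g):                            # Definition 18
    return min(plan_cost(p) for p in applicable_plans(g))     # GP(G)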
We have provided equations to define the cost of plans, constraints, and goals (see
Figure 16). In order to illustrate how those definitions are applied in reality, we now
calculate the costs for the following example.
5.2 Example
We now consider a simple example. Consider the following constraint, which is a sub-constraint of the constraint considered in Section 4.4:

– For any percept p and agent a that handles the percept, at least one of the plans of agent a is able to deal with percept p. Formally:

c1(p, a) =def a.plan→exists(pl : Plan | pl.percept→includes(p))

We define the constraint c2(p, pl) =def pl.percept→includes(p), which is a sub-constraint of c1, i.e. constraint c1 can be written as c1(p, a) =def a.plan→exists(pl : Plan | c2(p, pl)).
The plans generated to repair this constraint are plans P4-P11 (given in Section 4.4).
Let us assume a design model (Figure 17) in which there is only a single agent type, “AWSManager” (abbreviated “AMan”), which handles the “AWS” percept, and which has a single plan, “HandleAWS” (abbreviated “HandA”). We also assume that the “HandA” plan is the only plan in the model, and that it is not assigned to handle the “AWS” percept. This simple model can be seen as an initial version of the Weather Alerting System, and is used to make it easy to demonstrate the pruning that is discussed later.

Fig. 17 Example simple design model: a system overview showing the AMan (AWSManager) agent with its HandA (HandleAWS) plan and the AWS percept.
Given this model, the constraint c1 is violated for percept “AWS” and agent “AMan”. Plans P4-P11 are used to generate the following repair plan instances for fixing this constraint. Note that in this model AMan.plan = {HandA} and Set(Plan) = {HandA}. As a result, the second plan type for handling the event (P5) does not yield any instances, because its context condition has no solutions. We thus have the following plan instances (PIn numbers the plan instance, and Pn indicates the plan type from which the instance was created). We use “NP” to abbreviate “NewPlan”, the new plan created by PI2 (instantiated from P6).

c1t(AWS, AMan) ← !c2t(AWS, HandA)   (PI1, P4)
c1t(AWS, AMan) ← Create NP : Plan ; !+(NP, AMan.plan) ; !c2t(AWS, NP)   (PI2, P6)
+(NP, AMan.plan) ← Connect AMan to NP (w.r.t. plan)   (PI3, P10)
c2t(AWS, HandA) ← !+(AWS, HandA.percept)   (PI4, P8)
c2t(AWS, NP) ← !+(AWS, NP.percept)   (PI5, P8)
+(AWS, HandA.percept) ← Connect HandA to AWS (w.r.t. percept)   (PI6, P11)
+(AWS, NP.percept) ← Connect NP to AWS (w.r.t. percept)   (PI7, P11)
Figure 18 shows the goal-plan tree of repair plans for fixing constraint c1(p, a), with p = AWS and a = AMan, and how the costs are calculated; the number placed next to each node in the figure is its corresponding cost. We assume a basic cost of 1 for all action types used in this example.

There are two applicable repair plan instances for making constraint c1(p, a) true: PI1 and PI2. Therefore, the cost of fixing this constraint is the minimum of cost(PI1) and cost(PI2). The cost of plan PI2 is calculated as:

cost(PI2) = mainCost(PI2) + scopeCost(PI2)

where

mainCost(PI2) = basicCost(PI2) + subGoalCost(PI2)
              = cost(Creation) + cost(SG21) + cost(SG22)
              = cost(Creation) + cost(PI3) + cost(PI5)
              = cost(Creation) + cost(Connection) + cost(SG51)
              = cost(Creation) + cost(Connection) + cost(PI7)
              = cost(Creation) + cost(Connection) + cost(Connection)
              = 3

Therefore, the cost of plan PI2 is cost(PI2) = 3 + scopeCost(PI2).
Fig. 18 An example of cost calculation for constraint c1(AWS, AMan): the goal-plan tree rooted at the event c1t(AWS, AMan), whose cost is min{cost(PI1), cost(PI2)}; PI1 costs 1 + scopeCost(PI1), PI2 costs 3 + scopeCost(PI2), and each leaf action (Create NP; Connect AMan to NP; Connect HandA to AWS; Connect NP to AWS) has cost 1.
Similarly, the cost of plan PI1 is:

cost(PI1) = mainCost(PI1) + scopeCost(PI1) = 1 + scopeCost(PI1)
Although the main cost of PI1 is smaller than the main cost of PI2, PI1 may not be the plan chosen to fix c1. This is because its total cost, which includes the scope cost, is not guaranteed to be less than the cost of plan PI2. In other words, we need to know the scope cost of each plan before deciding which plan to choose for fixing constraint c1. For instance, if the repair scope contains only constraint c1, then the scope costs of plans PI1 and PI2 are 0. Therefore, the cost of PI1 is 1, which is smaller than the cost of PI2 (i.e. 3), and consequently PI1 is chosen to repair c1.

However, consider the case where the repair scope contains another constraint c3 requiring that every agent has at least two plans. In this case, plan PI2 has a scope cost of 0, since after its execution none of the constraints in the repair scope are violated (c3 is not violated since agent AMan now has two plans, HandA and NP). On the other hand, the scope cost of plan PI1 is not 0 but is equal to the cost of constraint c3 (which is violated because agent AMan has only one plan, HandA). By performing a similar calculation, one can derive that the cost of constraint c3 is 2 (1 for creating a new plan and 1 for connecting it with agent AMan). Therefore, the cost of plan PI1 would be 3, which is equal to the cost of PI2. As a result, with regard to this repair scope, either plan could be chosen to fix c1.
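The main-cost part of this calculation can be checked mechanically. The following self-contained sketch (ours; scope costs are omitted, so only main costs are computed) rebuilds the goal-plan tree of this example with all action costs set to 1 and reproduces mainCost(PI2) = 3 and cost(PI1) = 1:

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    plans: list = field(default_factory=list)    # alternative plan instances

@dataclass
class Plan:
    name: str
    basic_cost: int                              # sum of its action costs
    subgoals: list = field(default_factory=list)

def cost_of_plan(p):    # mainCost only: basicCost + subgoal costs
    return p.basic_cost + sum(cost_of_goal(g) for g in p.subgoals)

def cost_of_goal(g):    # cost of the cheapest applicable plan
    return min(cost_of_plan(p) for p in g.plans)

# The goal-plan tree of this section, with every action cost equal to 1:
sg51 = Goal("+(AWS, NP.percept)", [Plan("PI7", 1)])
sg22 = Goal("c2t(AWS, NP)", [Plan("PI5", 0, [sg51])])
sg21 = Goal("+(NP, AMan.plan)", [Plan("PI3", 1)])
pi2  = Plan("PI2", 1, [sg21, sg22])              # Create NP, then two subgoals
sg41 = Goal("+(AWS, HandA.percept)", [Plan("PI6", 1)])
sg11 = Goal("c2t(AWS, HandA)", [Plan("PI4", 0, [sg41])])
pi1  = Plan("PI1", 0, [sg11])
root = Goal("c1t(AWS, AMan)", [pi1, pi2])

assert cost_of_plan(pi2) == 3 and cost_of_plan(pi1) == 1
print(cost_of_goal(root))   # -> 1, so PI1 is selected when scope costs are 0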
5.3 Properties of the cost definitions
Having defined the costs of repairing violated constraints, we now consider a number
of properties of these definitions.
Firstly, a repair plan type contains a mixture of actions and subgoals. Such subgoals are achieved by performing further actions. Therefore, a repair plan instance which has all its subgoals resolved is ultimately a sequence of actions that need to be performed in order to fix a particular violated constraint. By viewing an expanded repair plan instance in terms of its corresponding action sequence, we simplify the proofs of the lemma and theorem presented below.
Definition 1 defines a sequence of actions that is able to repair a given constraint. We now define the cost of such an action sequence.

Definition 19 (Repair action sequence cost) The cost of a sequence of repair actions S is defined by a function cost that maps an action sequence to a natural number. The cost of an action sequence is the sum of the costs of all actions in the sequence.
We have previously defined a minimal sequence of repair actions (Definition 3). Note that minimality implies non-redundancy, i.e. all minimal plans are non-redundant. In addition, the definition of minimality generalises to a set of constraints in the obvious way. These definitions give us the following important lemma and theorem (refer to [40] for proofs).
Lemma 2 Let M0 be a model in which the constraints Ci are violated. Let S be a minimal sequence of actions for repairing all the constraints Ci in M0. Then for a given constraint, say (without loss of generality) C1, there exists at least one sequence of actions S′, obtained by removing some number (possibly zero) of actions from S, such that S′ repairs C1 in M0 and is minimal.
Theorem 2 Let M0 be a model where some number of constraints Ci are violated, and let S be a minimal (and hence non-redundant) sequence of actions that repairs the Ci in M0, yielding model MF. Then (by Lemma 2), for any of the given constraints, say (without loss of generality) C1, there exists a minimal action sequence S1 that repairs C1 in M0, yielding M1. Furthermore, there then exists a minimal (and hence non-redundant) action sequence S′ that takes us from M1 to MF, where cost(S) = cost(S1) + cost(S′). (Diagrammatically: M0 →(S) MF directly, and M0 →(S1) M1 →(S′) MF.)
By applying this theorem repeatedly, on C1, then C2, etc., we can show that in order to repair a set of violated constraints we can consider a single constraint at a time, in an arbitrary order, with no loss of generality. Furthermore, since the repair plans are complete (Theorem 1), the action sequence S1 can be generated by instantiating the repair plan set. This strong result is only possible because the actions we consider have limited preconditions (refer to the formal definitions of repair actions in Section 4.1), allowing them to be reordered fairly freely. The result holds for our case studies involving Prometheus and UML design models because we use the same types of repair actions in both cases.
5.4 Cost calculation algorithms
In the previous sections we defined how a repair plan's cost is calculated. We now give algorithms that compute this cost. The algorithms operate on plan-goal trees, where a goal has as children the plans that can be used to achieve it (denoted P(G)), and a plan has as children its sub-goals (denoted G(P)). Each plan node stores the plan's basic cost (basicCost, initially the basic cost of the plan), other costs (dynamicCost, initially 0), a Boolean value indicating whether the node is a leaf (isLeaf, initially false), and a queue of its sub-goals (subGoalQueue, initially containing the plan's sub-goals). Each goal node stores a list of best (i.e. least-cost) plan(s) (bestPlans, initially empty) that achieve the goal.
Fig. 19 Tree transformation: the subgoal G′ of plan P is pushed down to become the last subgoal of the alternative plans P1 and P2 for goal G, and then, recursively, of the plans P3 and P4 beneath them.
Before we present the algorithms, we discuss a tree transformation that they use. When considering the alternative ways of dealing with a given (sub)goal, the algorithm considers the available plans and selects the cheapest. In doing so, it needs to consider the future: what will happen after the goal is handled. We do this by transforming the tree so that the "future" is pushed down into the tree beneath the current goal. Specifically, when we consider a goal that has a future (i.e. a parent plan with non-empty sub-goals), we copy the sub-goals of the parent plan to the sub-goals of the children plans. As seen in Figure 19, when considering two different plans P1 and P2 that handle goal G, we need to take into account the effects of each plan on achieving goal G′. As a result, the tree is transformed so that G′ is pushed down to become the last subgoal of plans P1 and P2. Similarly, and for the same reasons, goal G′ continues being pushed down to become a subgoal of plans P3, P4, and so on. Note that this technique is not generally applicable, since a plan (e.g. P1) may not have access to information available to its parent plan (e.g. P). In our case, however, the design model is the only information needed, and it is globally accessible to all plans.
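A small sketch of this transformation (ours; the attribute names are illustrative) makes the push-down explicit: the parent plan's remaining subgoals are appended to each alternative plan for the goal, so that each alternative is costed together with its "future".

def push_down_future(goal, parent_plan):
    # Figure 19: append the parent plan's remaining subgoals (the "future")
    # to each alternative plan for the goal, so that choosing among the
    # alternatives takes into account what must still happen afterwards.
    future = list(parent_plan.subgoals) if parent_plan else []
    for p in goal.plans:               # alternative plans P1, P2, ...
        p.subgoals = list(p.subgoals) + future
    if parent_plan:
        parent_plan.subgoals = []      # the future now lives below the goal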
5.4.1 Initial algorithms
The algorithm presented in Figure 20 computes the cost of a plan according to the equations in the previous section. Since we assume that basicCost is already computed (by simply summing the costs of the primitive actions in a plan), the algorithm only needs to work out the plan's subgoal costs and repair scope costs (see Definition 15). These costs are stored in dynamicCost, which is initially 0 and is progressively incremented with the costs of sub-goals and of violated constraints in the repair scope.
function cost(P)
 1   P.isLeaf ← true
 2   while P.subGoalQueue is NOT empty do
 3     dequeue subGoal from P.subGoalQueue
 4     if the constraint associated with subGoal is violated then
 5       P.isLeaf ← false
 6       P.dynamicCost ← P.dynamicCost + cost(subGoal, P)
 7     end if
 8   end while
 9   if P.isLeaf = true then
10     local violatedSubGoals ← get-scope-violated-constraints()
11     if violatedSubGoals is NOT empty then
12       get a random violatedSubGoal from violatedSubGoals
13       enqueue violatedSubGoal into P.subGoalQueue
14       return cost(P)
15     end if
16   end if
17   return P.dynamicCost + P.basicCost

Fig. 20 Calculating Plan Node Cost (No Pruning)
The algorithm selects each sub-goal in turn (lines 2 and 3) and adds the cost of any violated constraints onto the dynamic cost (line 6); when we encounter a violated constraint, we also note that the plan node is not a leaf (line 5). If the plan node has children (i.e. violated constraints) then we are done, since the scope cost will be calculated in those children. On the other hand, if the plan node has no children (isLeaf = true, line 9) then we check for violated constraints in the repair scope (lines 10 and 11), and if there are any, we select one of the violated constraints (line 12), add it to the queue (line 13), and recursively call cost(P) to compute its cost (line 14).

The algorithm in Figure 21 calculates the cost of a goal node (see Definition 18) by considering the possible plans and looking for the cheapest one. We first retrieve a list of applicable plans for the goal (line 2). We then iterate through the list of plans (line 4) and calculate the cost of each (line 9). When a plan that is cheaper than the previous best is found, the previous best plan(s) are replaced with the new plan (lines 10-13). When a plan is found that is as good as the current best, it is added to the list of best plan(s) (lines 14-15).
function cost(G, ParentPlan)
 1   local bestCost ← +∞
 2   local planList ← get-repair-plans(G)
 3   G.bestPlans ← empty
 4   for each plan P in planList do
 5     execute plan P (in simulation)
 6     if ParentPlan is not null then
 7       copy all ParentPlan.subgoals to the end of P.subgoals
 8     end if
 9     local c ← cost(P)
10     if c < bestCost then
11       bestCost ← c
12       clear G.bestPlans
13       add P to G.bestPlans
14     else if c = bestCost then
15       add P to G.bestPlans
16     end if
17     unexecute plan P (in simulation)
18   end for
19   if ParentPlan is not null then
20     ParentPlan.subgoals ← empty
21   end if
22   return bestCost

Fig. 21 Calculating Goal Node Cost (No Pruning)
The algorithm uses look-ahead and simulates the application of the plans. Line 5
executes the plan currently being considered by (a) updating the model with the effects
of the plan’s actions, and (b) adding the plan’s sub-goals to the tree. In order to be
able to consider alternative plans we need to undo the effects of the plan’s execution
on the model, and this is done by line 17. This is implemented by logging changes to
the model, allowing these changes to be rolled back.
Lines 6-8 and 19-21 implement the tree transformation discussed earlier: the subgoals of the parent plan (excluding the current sub-goal) are added to the end of the sub-goals of each plan P (lines 6-8). Once this has been done for all plans, we remove the sub-goals from the parent (lines 19-21).
5.4.2 Advanced algorithms with pruning capabilities
The algorithms given in Figures 20 and 21 implement the definitions given in Section 5.1, but they search the whole goal-plan tree. This is inefficient, and may lead to non-termination, since the tree may be infinite. We therefore modify the algorithms by adding loop checking and a form of pruning. We add to each goal/plan node two values: β (initially +∞), the least cost of fixing all constraints in the repair scope, and σ (initially 0), the accumulated cost of everything above the current node. (We use "β" since we perform the β part of classical α-β pruning; we do not perform the α part because we have a min-sum tree, rather than a min-max tree.) In Figures 22 and 23, lines that are new relative to Figures 20 and 21 are marked with "*".
function cost(P)
  1   P.isLeaf ← true
  2   while P.subGoalQueue is NOT empty do
  3     dequeue subGoal from P.subGoalQueue
 *4     if P.β = +∞ and subGoal is in history then
 *5       clear history
 *6       return +∞
 *7     end if
  8     if the constraint associated with subGoal is violated then
  9       P.isLeaf ← false
 *10      local threshold = P.σ + lowerBoundCost(subGoal) + P.basicCost + P.dynamicCost
 *11      if threshold > P.β then
 *12        return threshold
 *13      end if
 *14      subGoal.σ ← P.σ + P.dynamicCost + P.basicCost
 *15      subGoal.β ← P.β
  16      P.dynamicCost ← P.dynamicCost + cost(subGoal, P)
  17    end if
  18  end while
  19  if P.isLeaf = true then
  20    violatedSubGoals ← get-scope-violated-constraints()
  21    if violatedSubGoals is NOT empty then
  22      get a random violatedSubGoal from violatedSubGoals
  23      enqueue violatedSubGoal into P.subGoalQueue
  24      return cost(P)
  25    end if
  26  end if
 *27  P.β ← min(P.β, P.σ + P.dynamicCost + P.basicCost)
  28  return P.dynamicCost + P.basicCost

function lowerBoundCost(G)
 *1   local planList ← get-repair-plans(G)
 *2   local lowerBound ← +∞
 *3   for each plan P in planList do
 *4     if P.basicCost < lowerBound then
 *5       lowerBound ← P.basicCost
 *6     end if
 *7   end for
 *8   return lowerBound

Fig. 22 Calculating Plan Node Cost (Pruning)
Computing the cost of a plan is done by the algorithm in Figure 22. We use a pruning mechanism, where we establish a threshold in order to avoid exploring alternatives that are more expensive than known solutions. The threshold is calculated (line 10 of Figure 22) based on the current accumulated cost σ, the plan cost (P.basicCost and P.dynamicCost) and the lower-bound cost, which is an estimate of the minimum cost of achieving a (sub-)goal (lines 1-8 at the bottom of Figure 22).

The algorithm in Figure 22 also includes loop detection (lines 4-7). It keeps track of goals seen along a branch in a list named history (lines 6-8 in Figure 23). If the same goal is seen again, corresponding to the fact that a constraint has become violated and is being fixed again, then we have a loop and we terminate with infinite cost. This check only needs to be done while β is at its initial value (+∞): if β has a finite value, then an infinite branch will be pruned because its cost will eventually exceed β (since all plans eventually do something, and hence have non-zero cost).
function cost(G, ParentPlan)
  1   local bestCost ← +∞
  2   local planList ← get-repair-plans(G)
  3   G.bestPlans ← empty
 *4   remove plans in planList that have basic cost greater than G.β
 *5   sort plans in planList based on their basic action costs
 *6   if G.β = +∞ then
 *7     add G into history
 *8   end if
  9   for each plan P in planList do
 *10    P.β ← G.β
 *11    P.σ ← G.σ
  12    execute plan P (in simulation)
  13    if ParentPlan is not null then
  14      copy all ParentPlan.subgoals to the end of P.subgoals
  15    end if
  16    c ← cost(P)
  17    if c < bestCost then
  18      bestCost ← c
  19      clear G.bestPlans
  20      add P to G.bestPlans
 *21      G.β ← P.β
  22    else if c = bestCost then
  23      add P to G.bestPlans
  24    end if
  25    unexecute plan P (in simulation)
  26  end for
  27  if ParentPlan is not null then
  28    ParentPlan.subgoals ← empty
  29  end if
  30  return bestCost

Fig. 23 Calculating Goal Node Cost (Pruning)
The two values β and σ are passed from each parent goal/plan node down to its
child plan/goal nodes (lines 14-15 in Figure 22 and lines 10-11 in Figure 23). Line 14
in Figure 22 shows that σ is in fact an accumulative cost: we accumulate the cost of
the current node in σ. When a plan cost is resolved, the total cost so far (i.e. the cost
of the plan as well as σ, the cost of the path from the root of the tree to the current
node) is compared against the current β to see if it needs to be updated (line 27 in
Figure 22). If at any point the total cost for a plan (threshold) exceeds β then we prune
(lines 11-13 of Figure 22). We also prune in the (admittedly unlikely) case that a plan's
basic cost by itself exceeds β (line 4 of Figure 23). Once a best plan for a goal is found,
the goal's β is also updated with the plan's β (line 21 of Figure 23). Line 5 of Figure 23
implements a heuristic that considers plans with cheaper basic cost first.
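The following self-contained Python sketch (a toy re-implementation under our own simplifications, not the authors' code) shows how the two mutually recursive functions cooperate: goal nodes minimise over alternative plans, plan nodes sum the basic cost and the subgoal costs, and β and σ flow down the tree to enable pruning. The dictionary fields (“plans”, “basic”, “subgoals”) are hypothetical.

import math

def goal_cost(goal, beta, sigma):
    # Goal node (cf. Figure 23): try alternative repair plans, cheapest basic
    # cost first; plans whose basic cost alone already exceeds beta cannot
    # beat a known solution (lines 4-5 of Figure 23).
    best = math.inf
    for plan in sorted(goal["plans"], key=lambda p: p["basic"]):
        if plan["basic"] > beta:
            break
        c = plan_cost(plan, beta, sigma)
        if c < best:
            best = c
            beta = min(beta, sigma + c)   # tighten the bound (line 27, Fig. 22)
    return best

def plan_cost(plan, beta, sigma):
    # Plan node (cf. Figure 22): basic cost plus the cost of all subgoals;
    # prune as soon as an optimistic estimate of the total exceeds beta.
    dynamic = 0.0
    for subgoal in plan["subgoals"]:
        lower = min((p["basic"] for p in subgoal["plans"]), default=math.inf)
        threshold = sigma + plan["basic"] + dynamic + lower
        if threshold > beta:
            return threshold              # pruned (lines 11-13, Fig. 22)
        dynamic += goal_cost(subgoal, beta, sigma + plan["basic"] + dynamic)
    return plan["basic"] + dynamic

# A toy tree shaped like Figure 24: one branch of total cost 1, and a rival
# branch whose optimistic estimate (2) already exceeds the best known cost.
leaf = {"plans": [{"basic": 1, "subgoals": []}]}
root = {"plans": [{"basic": 0, "subgoals": [leaf]},
                  {"basic": 1, "subgoals": [leaf]}]}
print(goal_cost(root, beta=math.inf, sigma=0))   # prints 1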
5.4.3 Example
We now give an example to illustrate how the algorithms work. We use the same
example as in Section 5.2 with the assumption that the repair scope contains only one
constraint, i.e. c1. Figure 24 shows a plan-goal tree with the values associated with
each node. Firstly, it can be seen that we apply the tree transformation to the subgoal
c2t(AWS, NP) by moving it and all of its children to be a subgoal of plan PI3.
Fig. 24 An example of cost calculation for constraint c1(AWS, AMan). The figure shows the goal-plan tree for the example, with each node annotated with its β, σ, basicCost, dynamicCost, bestCost and bestPlans values. The root goal c1t(AWS, AMan) ends with β = 1, σ = 0, bestCost = 1 and bestPlans = {PI1}; the branch through plan PI2 is marked as pruned, with threshold = 2 and lowerBoundCost(+(NP, AMan.plan)) = 1.
On the leftmost branch of the tree, values are propagated from the leaf to the root.
For example, the basicCost of plan PI6 is 1 (because it has only one connection action)
and its dynamicCost is 0 (because it has no subgoals), resulting in its total cost being
1. Since subgoal +(AWS, HandA.percept) has only one plan PI6, its bestCost is clearly
equal to the cost of the plan, i.e. 1. Similarly, after the branch starting with plan PI1
is fully explored, the values at the root node c1t(AWS, AMan) are: β = 1, σ = 0,
bestCost = 1, and bestPlans = {PI1}.
After that, the algorithm begins calculating the cost of nodes on the branch starting
with plan PI2. The values of β (1) and σ (0) are propagated from the root node to
the plan node PI2 (corresponding to lines 10 and 11 in Figure 23). Plan PI2 then
executes (line 12) and its cost is calculated (line 16). This leads to line 10 of Figure 22
where the threshold variable is calculated. Note that the lowerBoundCost of subgoal
+(NP, AMan.plan) is 1 because the subgoal has only one plan, i.e. PI3, which has a
basic cost of 1 (since it has only one connection action). As a result, the threshold has
the value of 2, which is greater than the value of β. Therefore, the tree is pruned here
(lines 11 and 12 of Figure 22) and the cost of plan PI2 is returned with the value of
2. This cost is greater than the current best cost of the top goal c1t(AWS, AMan),
therefore the best cost is 1 and the best plan is PI1.
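Instantiating the threshold formula from line 10 of Figure 22 at plan PI2 (with the values shown in Figure 24) makes the pruning decision explicit:

\[ \mathit{threshold} = \sigma + \mathit{lowerBoundCost} + \mathit{basicCost} + \mathit{dynamicCost} = 0 + 1 + 1 + 0 = 2 > \beta = 1. \]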
5.5 Complexity analysis
The computational complexity of the algorithm depends on the cost of checking all
constraints in the repair scope, as well as the number of child nodes each (non leaf)
node has (N ), the depth of the plan-goal tree (D), and the size of the application design
model, i.e. the number of model elements29 (M ). Based on empirical evidence [44], we
assume the cost of checking a single constraint to be constant. As a result, checking all
constraints in the repair scope is basically proportional to the size of the repair scope
(E ). Furthermore, although we do not include the tree transformation in the analysis
(lines 6-8 of Figure 20), we note that the transformation may create duplicate nodes,
which consequently creates duplicated work.
For each plan node, we need to execute the plan, i.e. perform update actions on the
application design model (line 5 of Figure 21), and unexecute the plan, i.e. undo what
was previously done to the model (line 17 of Figure 21). As executing/unexecuting a
plan involves actions that affect only a small part of the application model, we assume
its cost to be constant, i.e. not depending on the size of the application model. Moreover,
for each plan node we also need to calculate the cost of all of the plan's subgoals (lines
2-8 of Figure 20). As the maximum number of subgoals is N, the work to be done for
each plan node is O(N). Finally, we may also need to perform constraint evaluation
to identify violated constraints in the repair scope (line 10 of Figure 20). As a result,
the work to be done for each plan node may also include evaluating constraints in
the repair scope, i.e. O(E). Overall, the worst-case complexity of each plan node is,
therefore, O(N + E).
For each goal node, we need to get a list of applicable repair plan instances (line 2
of Figure 21), calculate the cost of each of these plans (line 9 of Figure 21), work out
the cheapest plans, and store them with the goal node (lines 10-16 of Figure 21). As can
be seen in Section 4, the repair plans that generate many instances are the ones which
have a context condition of the form x ∈ SE, where SE is the set of existing entities in
the model. As a result, the time taken to generate applicable repair instances from repair
plan types tends to be proportional to the number of model elements, i.e. M. The work
to calculate the cost of each plan is already considered above. Finally, the time taken in
sorting and storing the cheapest repair plans is proportional to N log N. As a result, the
work to be done for each goal node is roughly O(N log N + M).
Since the number of nodes is roughly O(N^D) this gives an overall computational
complexity of O(N^D × (N log N + M + N + E)). Because N < N log N, the overall
computational complexity is O(N^D × (N log N + M + E)). In other words, the algorithms
are exponential in the number of violated constraints in a repair scope, due to the
extensive look-ahead planning.
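In summary, the counting argument above combines as follows (our restatement of the same formula):

\[
\underbrace{O(N^{D})}_{\text{number of nodes}} \times \Big(\underbrace{O(N\log N + M)}_{\text{per goal node}} + \underbrace{O(N+E)}_{\text{per plan node}}\Big) \;=\; O\big(N^{D}\,(N\log N + M + E)\big).
\]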
However, in the next section, we will discuss an evaluation to assess the algorithm’s
performance empirically. Our results show that in fact the algorithm is viable for the
weather alerting application, and that furthermore, there are good reasons to believe
that in fact the depth of the tree, D, does not grow as the model grows, and that
therefore the algorithm is polynomial in the size of the model M. In addition, although
pruning was not taken into account in the analysis, it is assessed empirically in the next
section, and, as will be seen, it can make a significant difference to efficiency in some
cases.
29 Clearly D and N are influenced by M; see Section 6.3.2.
6 Evaluation
Our evaluation needs to address the effectiveness, efficiency and scalability of our approach. This involves answering several key questions. Firstly, how well does this approach
work in practice and, specifically, how useful is it likely to be to a practising software
designer who is maintaining and evolving a system? In addition, how well does it scale
to larger problems? Unfortunately, an evaluation to answer these questions raises a
number of challenges and questions such as: which methodology should be used? which
application(s) should be used? what changes to the system should be done? how do
we select primary changes to perform? and, what basic costs should we use? In this
section we discuss the nature of these issues, and present and justify our solutions to
each of them.
We have chosen to use the Prometheus [30] methodology for the design of agent
systems. Prometheus is a prominent agent-oriented methodology, and in addition to
local expertise, we had easy access to the source code of the Prometheus Design Tool
(PDT), allowing the Change Propagation Assistant (CPA) to be integrated with PDT.
We extended the Prometheus metamodel described in [45], and defined consistency
constraints by examining the well-formedness conditions of Prometheus models, the
coherence requirements between Prometheus diagrams, and good design practices proposed by [30].
Our choice of application was the Bureau of Meteorology’s multi-agent system
for weather alerting [29]. This application was chosen because the prototype system
developed by the Australian Bureau of Meteorology had been extended in a number
of ways, and these extensions gave us well-motivated and realistic change scenarios to
evaluate.
In order to ensure that we sufficiently cover different types of change, we developed a taxonomy of changes by adopting existing classifications. We then checked
how well a proposed collection of changes covered the different types of change using
the taxonomy. The focus of our change taxonomy resides in the application’s design
(i.e. the model) rather than other aspects of software maintenance and evolution (e.g.
management). We also want to cover changes that reflect different purposes of maintenance. As a result, we adopt as the primary classification of change types Swanson's
classical taxonomy, which classifies changes as being perfective (adapting to changing
user needs), adaptive (adapting to changing application environment), and corrective
(removing bugs); we also use the later-proposed category of preventative (improving
software maintainability). In addition, based on the classification proposed by Chapin
et al. [46] we introduce a second taxonomy where changes are categorised based on the
types of modification made to the behaviour of a software system. More specifically,
they can be: adding a new functionality, removing an existing functionality and modifying an existing functionality. For instance, the inclusion of volcanic ash warnings in
a weather alerting system can be classified as perfective maintenance and functionality
addition. By contrast, the removal of pressure alerts from the same system due to pressure data from the forecasters no longer being available can be regarded as adaptive
maintenance and functionality removal.
As a check of coverage, we classify changes according to this taxonomy. It is, however, noted that in practice there is not always a clear-cut distinction between different
types of maintenance. For instance, when adapting the software to a new environment
(adaptive maintenance), functionality may be added to take advantage of new facilities supported by the environment (perfective maintenance) or removing a functionality
may require modifying other existing functionalities that use the functionality to be
deleted. In addition, we do not look at other types of changes such as different forms of
refactoring [47, 48] in which a software system is changed in such a way that improves
its internal structure but does not alter its external behaviour.
Ideally, the evaluation would be done by giving the Change Propagation Assistant
(CPA) tool to a group of selected users, who would be asked to work with the tool to
implement requirement changes. We would then collect and analyse data such as their
feedback, the change outcomes, etc. to assess how much assistance the CPA tool provides to them. However, due to time and resource limits, we were not able to conduct
such a comprehensive user study evaluation. In order to overcome this obstacle, we approached the evaluation by defining an abstract user behaviour in maintaining/evolving
an existing design. More specifically, we defined a process of change propagation based
on the user’s behaviour. The key item of this process is a user’s change plan. We then
simulated a real user by performing steps in this plan and assessed the responses from
the CPA tool. Figure 25 shows our model of the change propagation process. In this
model, the designer is guided by a change request which can be fulfilled by making a
given change to the system’s design. We view such a change plan (denoted by D) as
being a sequence of steps, with each step being a primitive change to the model such as
adding or removing a link between entities, which transforms the existing design model
so that it meets the requirements of the change request. Although the designer does
not have complete knowledge of the change plan D, he/she may know some parts of
it, especially an initial segment of D (denoted as P ) where he/she determines initial
entities and makes changes to them (i.e. performs a primary change). In addition, we
also assume that the designer has a general idea of the change propagation direction
and thus given a number of change options they are able to identify those which are
compatible with the change plan. We assume for evaluation that a specific change plan
D is given.
Fig. 25 Our model of the Change Propagation Process. Prompted by a new requirement, enhancement, or bug fix, the designer determines initial entities to change, changes those entities, and invokes the CPA, which returns change options; the designer selects among these options, and the cycle repeats until no more changes are needed.
Figure 26 formally describes our change propagation process in more detail. Given
a change plan D, the designer selects an initial segment P of D and performs the
actions in P (step 2), updates D by removing the performed actions (step 3) and then
invokes the tool, which returns a set O of repair options (step 4). Each repair option
Ci is a sequence of actions. At this point (step 5) the user may select one of the Ci
(step 6) and apply it to the model (step 7), or they may decide that none of the Ci
are suitable.
45
1. given a change plan D (sequence of actions)
2. select P ⊑ D and do the actions in P
3. update D (D := D − P)
4. invoke the tool yielding O = {C1, . . . , Cn}
5. if ∃ Ci ∈ O where Ci is compatible with D then
6.     select a compatible Ci ∈ O (if more than one)
7.     do actions in Ci and update D (D := D − Ci)
8. end if
9. goto step 2 if D is not empty.

Fig. 26 A change propagation process
Deciding whether a Ci is suitable is done by comparing it with the designer's plan:
Ci is compatible with D if all of the steps in Ci are in D (formally30, ∀ s ∈ Ci • s ∈ D). If
all the changes in D have been performed the process ends, otherwise the user continues
to perform more primary changes (step 9). We use D := D − P to denote removing
the actions in P from the sequence D, and we use P ⊑ D to denote that P is an initial
segment of D (formally, ∃ X : P + X = D, where + is sequence concatenation).
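As an illustration, the loop of Figure 26 can be sketched in a few lines of Python (our simplification; invoke_cpa and pick_prefix are hypothetical stand-ins for the CPA tool and the simulated user's choice of prefix):

from typing import Callable, List

Action = str  # a primitive change to the model, e.g. "link percept to agent"

def propagate(D: List[Action],
              pick_prefix: Callable[[List[Action]], List[Action]],
              invoke_cpa: Callable[[], List[List[Action]]]) -> None:
    # Assumes pick_prefix always returns a non-empty initial segment of D,
    # so the loop makes progress.
    while D:                                        # step 9: loop while D is non-empty
        for a in pick_prefix(D):                    # step 2: do an initial segment P
            D.remove(a)                             # step 3: D := D - P
        options = invoke_cpa()                      # step 4: O = {C1, ..., Cn}
        compatible = [C for C in options
                      if all(s in D for s in C)]    # step 5: the subset test
        if compatible:
            for a in compatible[0]:                 # steps 6-7: apply a compatible Ci
                D.remove(a)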
We use the above interaction between the designer and the CPA to define our evaluation process and metrics, which are described in detail in the next section. Although
this process helps simulate the behaviour of a real user, it has several limitations.
The process is relatively abstract and does not cover other (potentially useful) details
such as reasons for a design decision. In addition, although the process is developed
based on existing change propagation models [10, 15, 49], it is not clear to what extent
these models have been validated by studying the practice of software maintenance
and evolution by developers. On the other hand, the process we use is more systematic
in assessing what the tool does for each possible primary change prefix P .
In terms of setting basic costs, a connection (between two entities) is assigned a
cost of 1, and so is a modification. Most of the changes are perfective and/or add new
functionality, which tends to result in new entities in the design model. As a result, we
set the cost of creation to be 0 to encourage new entities to be created. Disconnection and
deletion are assigned a slightly higher cost (of 3) for most of the changes. The exception
is change 5, which involves disconnecting entities, and for which we set the cost of
disconnection to be equal to the cost of connection (i.e. 1). We return to the issue of
selecting basic costs in Section 6.6.
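Expressed as a configuration (a plain Python dictionary here; the CPA's internal representation may differ), the basic costs used are:

BASIC_COSTS = {
    "create": 0,        # free, to encourage creating new entities
    "connect": 1,
    "modify": 1,
    "disconnect": 3,    # discouraged for most changes
    "delete": 3,
}
# Change 5 deliberately disconnects entities, so there disconnection is cheap:
BASIC_COSTS_CHANGE_5 = {**BASIC_COSTS, "disconnect": 1}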
6.1 Experiment process and metrics
As pointed out earlier, efficiency and effectiveness are the two important criteria that
we used to assess our approach. We dealt with the efficiency assessment by applying our approach to both real and artificial applications. In particular, we examined
the scalability of our approach by simulating designs with variable numbers of model
elements and constraints. With respect to the effectiveness evaluation, we used a “simulated user” who follows the change propagation process defined in Figure 26. More
specifically, our effectiveness evaluation process consists of the following steps:
30 Alternatively, if viewed as sets, Ci ⊆ D.
1. For each new requirement we develop a change plan D. Because the tool's effectiveness is sensitive to the choice of change (i.e. D), it is very important to justify the
change plan D with strong evidence showing that it is a reasonable design change.
It is also noted that for a requirement there may be several alternative valid change
plans that meet the requirement. In these cases, we should consider all of them.
Furthermore, within the change plan D we mark “obvious” and “side-effect” actions separately. The difference between these two types of action is explained
below.
2. We then apply the process described earlier (see Figure 26), considering a partial
change P that expands by one step at a time. By doing this, we consider all possible
choices of primary change.
When the process terminates, we measure what proportion of the actions in the
change was done by the user, and what proportion was done by the CPA. More specifically, we count how many actions ended up being in the Ci s along the way, compared with
the total number of actions in D, so we calculate31 the metric M = |C| / |D| (where
C is the union of the Ci s). It is noted that the value of M depends on our choices for
P. Clearly, if P is the whole of D then there is nothing left for the tool to do, and M
will be 0. In some of the cases below we will see that there is a “tipping point”: until
enough of D is done the tool cannot help, but once enough is done, the tool performs
the remaining steps in D.

31 Note that, if viewed as sets, D = C ∪ P.
In order to further assess the usefulness of our CPA tool, we also distinguish “obvious” change actions from “side-effect” actions. Obvious actions are changes that are
part of the main design flow and/or are in the scope that the designer is currently
working in. For instance, if the designer creates a new agent, then his/her next steps
would be developing the agent’s internals, e.g. creating plans. In contrast, it is more
difficult to identify “side-effect” actions, changes that are needed in other parts of the
system that are neither part of the main design flow nor in the scope of the designer. As
a result, the designer tends to miss these secondary side-effect changes. In the previous
example, a side-effect could be associating the newly created agent with a role. It is clear
that the tool is more useful if it can pick out such side-effect actions. As a result, we
consider not only the percentage of actions in the change done by the CPA, but also
how many of them are side-effect actions. Therefore, we classify the repair actions into
two categories: “obvious” and “side-effect” change actions and count them separately.
In addition, since a repair option Ci returned by the CPA can contain “side-effect”
actions, we need, for evaluation purposes, to require that the change plan D also contains all of the consequent side effects. In this sense, D is not really the designer's plan
per se, but rather the designer's plan plus consequent side effects.
In addition to measuring M , an important factor in the usefulness of the tool
concerns the repair options, O. Specifically, we are interested in how many of the Ci in
O are compatible with D, and in the size of O (since it is clearly better if the designer
is not being asked to select an option from a very large list). We thus, in addition to
M , also measure the number of options and how many of the options are compatible
with D.
Fig. 27 Weather Alerting System Design
6.2 Effectiveness Evaluation
The system being modified was the weather alerting application [29]. Recall that the
aim of this system is to alert forecasters to a range of situations, including when a
discrepancy develops between a previously-issued forecast and the subsequently observed weather conditions. The initial design of the system consists of five agent
types: “Alerter”, “TAFManager”, “AWSManager”, “Discrepancy” and “GUI”. Figure
27 shows the system overview diagram for the application, as well as agent overview
diagrams for the “Discrepancy” and “GUI” agent types. The “TAFManager” processes
the “TAF” percept that contains TAF data, which is stored in the “TAFDataStore”.
Similarly, the “AWSManager” is responsible for extracting the AWS data in the incoming “AWS” percept and storing this data in the “AWSDataStore”. The “AWSManager”
agent also notifies the “Discrepancy” agent by sending an “AWSData” message whenever new AWS data is obtained. In addition, when the “Discrepancy” agent asks for a
new AWS (by sending a “RequestNewAWS” message) the “AWSManager” performs a
“NewAWSRequest” action to request an external weather station for this information.
The “Discrepancy” agent detects discrepancies between TAF and AWS, and sends an
“AlertDiscrepancy” message to the “Alerter” agent if a discrepancy above the threshold level occurs. The “Alerter” agent then processes the alerts and sends them to
subscribed “GUI” agents.
We followed the evaluation process defined in Section 6.1 to assess the effectiveness
of our approach for six change scenarios that cover most of the change types, according to the classification of changes discussed earlier. The change scenarios are: logging
all alerts sent to the forecast personnel (preventative and functionality modification);
adding wind speed alerting (adaptive and functionality addition); implementing a variable threshold alerting (perfective and functionality modification); adding volcanic ash
alert (perfective and functionality addition); having multiple TAF manager agents
(adaptive and functionality modification); and supporting subscription (perfective and
functionality addition). Due to space limitations, we describe only one change in detail,
although we do present results for all of the changes (full details of all the changes can
be found in [40]).
Change: Implementing a variable threshold alerting
Initially, the alerting thresholds are fixed and hard-coded for each data type (e.g. 2
for pressure and 5 for temperature). For example, if a discrepancy between TAF and
AWS values for pressure is above 2, in any of the regions (e.g. Melbourne, Sydney and
Darwin), a warning is generated. However, the forecast personnel want to be able to
adjust the alerting threshold levels. More specifically, different regions will show alerts
based on different discrepancies, for example in the Melbourne region the pressure
threshold might be 3 whereas for Sydney it might be 6. Such values are provided and can
be changed by the user (e.g. airport weather manager) in each region. Hence, a new
requirement is that the forecast personnel should be able to set a threshold for alerting.
This requirement change involves modification of the existing alerting functionality to
respond to a business change. Therefore, we place it in the category of perfective and
modification in the change taxonomy.
This change request implies two major design changes: (1) alerting threshold levels
should be realised as user input, and (2) these data should be stored and used
when discrepancy detection takes place. User inputs are usually represented as percepts.
The current design has only three percepts and each of them has its own purpose:
“TAF” contains data input from the forecaster, “AWS” is input from the automatic
weather sensor and “Change Subscription” is requested from the airport official to
subscribe/unsubscribe to certain information. Therefore, a new percept is needed to
represent the user’s request for changing the thresholds. This percept also contains a
new threshold for either temperature or pressure.
Fig. 28 The system overview diagram after the change
Since the “GUI” agent is responsible for interacting with the system’s users, it
should be extended to handle this new percept and the agent thus needs a new plan
which responds to this percept (it can use existing plans but this would violate the
cohesion principle). This plan processes the alerting threshold input contained in the
percept and stores it in a data store. Because the existing data stores are not intended
for keeping threshold information, a new data store is needed. This data store is written by the
“GUI” agent and its new plan. Finally, since the “Discrepancy” agent is responsible
for detecting discrepancies, it should be changed to meet the second design change
requirement. More specifically, the “Discrepancy” agent’s plans for detecting discrepancies, i.e. “CheckTempDiscrepancy” and “CheckPressDiscrepancy”, will need to use
the new threshold data store to determine if a discrepancy warning should be raised.
The above changes are made to the detailed design artefacts, and result in several
side-effects (see Section 6.1 for a definition of side-effects) in architectural and system
specification artefacts. First, the new percept needs to be handled by a role played
by the “GUI” agent. Second, the new data should be written by a role played by the
“GUI” agent and read by a role played by “Discrepancy” agent.
The designer's complete plan D thus consists of the following sequence32. Figures
28 and 29 show the system overview diagram and the agent overview diagram for the
“GUI” agent after these changes are made. Note that changes are also made to other
diagrams (e.g. side-effect actions are shown in the system roles diagram and the data
coupling diagram, and action steps 12 and 13 are done in the agent overview diagram
for the “Discrepancy” agent (not shown)).
1. Create “SetNewThreshold” percept.
2. Create “ChangeThreshold” plan in “GUI” agent.
3. Link “SetNewThreshold” percept to “GUI” agent.
4. *Link “SetNewThreshold” percept to “User Interaction” role33.
5. Make “SetNewThreshold” percept to be a trigger of “ChangeThreshold” plan.
6. Create a new “AlertingLevels” data.
7. Link “GUI” agent to “AlertingLevels” data (write).
8. *Link “User Interaction” role to “AlertingLevels” data (write).
9. Link “ChangeThreshold” plan to “AlertingLevels” data (write).
10. Link “AlertingLevels” data to “Discrepancy” agent (read).
11. *Link “AlertingLevels” data to “Check Discrepancies” role (read).
12. Link “AlertingLevels” data to “CheckTempDiscrepancy” plan (read)34.
13. Link “AlertingLevels” data to “CheckPressDiscrepancy” plan (read)34.

32 Some variations in order are possible, but they do not affect the outcome of the evaluation.

Fig. 29 The agent overview diagram for “GUI” agent after the change
It is noted that changes should also be made to plans' internals (e.g. the “CheckTempDiscrepancy” and “CheckPressDiscrepancy” plans). However, such changes are
not made at the design level, but at the code level35.
Results
The following sequence of actions captures the interaction between the “simulated”
designer and the tool in terms of what actions were performed by the designer and
what actions were proposed and executed by the tool.
1. Create “SetNewThreshold” percept (done by the designer).
After this, the tool identifies three violated constraints since the new percept needs
to be handled by a role, an agent and a plan. The tool then returns 18 repair
options, one of which matches the designer’s plan, resulting in the next four steps
being done by the tool.
2. Create “ChangeThreshold” plan in “GUI” agent (done by the tool)36.
33 Side effect actions are marked with an asterisk (*) next to their numbers.
34 This action takes place in the agent overview diagram for agent “Discrepancy” (not shown).
35 In PDT, such changes can be made in the plan descriptor's body section in terms of specifying pseudo code.
36 The tool creates a new plan and asks the designer to provide a name for the new plan.
3. Link “SetNewThreshold” percept to “GUI” agent (done by the tool).
4. *Link “SetNewThreshold” percept to “User Interaction” role37 (done by the tool).
5. Make “SetNewThreshold” percept to be a trigger of “ChangeThreshold” plan (done
by the tool).
6. Create a new “AlertingLevels” data (done by the designer).
After this, the tool identifies violated constraints relating to the requirement that
new data must be accessed (either written or read) by an agent and a plan. The
tool then returns 26 options, three of which match the designer's plan.
The first option suggests steps 7-9, the second option recommends steps 10-12, and
the third option suggests steps 10, 11, and 13. The designer chooses one of these
options, for example the first option, resulting in the next 3 steps being done by
the tool38.
7. Link “GUI” agent to “AlertingLevels” data (write) (done by the tool).
8. *Link “User Interaction” role to “AlertingLevels” data (write) (done by the tool).
9. Link “ChangeThreshold” plan to “AlertingLevels” data (write) (done by the tool).
After this step, the design is consistent because the new data is now accessed by
an agent, a plan and a role. Therefore, the next step is done by the designer.
10. Link “AlertingLevels” data to “Discrepancy” agent (read) (done by the designer).
After this, the tool returns 3 options, two of which match the designer's plan.
The first option suggests steps 11 and 12, whilst the second option suggests steps
11 and 13. The designer chooses one of these options, and manually performs the
remaining step.
11. *Link “AlertingLevels” data to “Check Discrepancies” role (read) (done by the
tool).
12. Link “AlertingLevels” data to “CheckTempDiscrepancy” plan (read) (done by the
tool).
13. Link “AlertingLevels” data to “CheckPressDiscrepancy” plan (read) (done by the
designer).
Overall, the tool is able to identify and perform 9 out of the 13 actions (including
all 3 side-effect actions) in the change plan.
Table 1 shows the results of the evaluation for all of the change scenarios. Each change
scenario has a row, where the entries marked with numbers show the situation for
the nth step of the user's plan (D). An entry of the form n_m indicates that the user
performed this step39 and the tool returned n options (i.e. O = {C1 . . . Cn}), where
m of the Ci were compatible with D. There are several cases where the value of m
is 0, corresponding to the fact that none of the returned options matched with D.
Therefore, the user had to continue performing the next step.
An entry “T” (or “Ts”) indicates that an “obvious” (or “side-effect”) action is
done by the tool, that is, it is part of a selected repair plan from an earlier step. An
entry “U” (or “Us”) indicates that the user performs this “obvious” (or “side-effect”)
action; this occurs in several places where D is non-empty (i.e. there are more changes
to be done), but the design is consistent, and in this situation the tool cannot assist the
user. The final column gives the value of the metric M = |C| / |D|.
37 Side effect actions are marked with an asterisk (*) next to their numbers.
38 If the designer could select and execute more than one option at a time then, in this particular change case, the tool is even more effective: the designer could select all three matching options, and consequently have all the remaining steps (7-13) be done by the tool.
39 The form n_m^s indicates that the entry is a side-effect action step.
Table 1 Summary of evaluation results derived from the six change scenarios. Each row records, for steps 1-20 of the change plan, entries of the forms described in the text (n_m, T, Ts, U, Us); the final column gives the metric M:

Change                   M
1. Wind Speed Alert      60%
2. Variable Threshold    69%
3. Volcanic Ash          65%
4. Logging (alt 1)       75%
4. Logging (alt 2)       60%
5. Multiple TAFs         62%
6. Subscription          57%
Overall, for all of the changes the average value of M is approximately 64%; that is, the
tool performs on average nearly two-thirds of the actions. This result implies that,
compared with maintenance using our tool, a user working without it would have to
perform roughly three times as many change actions.
Furthermore, for the first five changes, the tool is able to identify and perform
all the side-effect actions in the change plan. For change 6, the tool identifies more
than half of the side-effect actions. This result implies that without using our tool the
user would have to carefully examine the design in order to find and carry out those
side-effect actions.
In terms of the number of options returned by the tool each time it is used, there
are several cases where this number is relatively large (e.g. 26 options after step 6 in
change 2). However, due to our mechanism of prioritizing the repair plans presented to
the user (repair plans that affect primary changed entities appear first), the matching
option(s) are listed first and are easily identified by the designer. Nonetheless, it is
desirable not to overwhelm the designer with such a large number of options, and
addressing this issue is part of our future work.
Overall, these results imply that the tool is effective in terms of helping the designer
identify and perform actions according to a change plan. In particular, the tool is able
to identify required side-effect actions, which tend to be missed by the designer. In the
next section, we will investigate the efficiency and scalability of our CPA tool.
6.3 Efficiency and Scalability Evaluation
In the previous section, we have presented the results of an effectiveness evaluation
which shows that our framework is able to produce good recommendations for a real
application and real change scenarios. However, another key issue that we need to
address is the efficiency of our approach.
Let us begin by considering the different aspects of the process of doing change
propagation. Generation of repair plans is performed at “compile” time, and is thus
not an efficiency issue. At runtime, the following steps are performed:
1. Check the design for consistency with respect to provided OCL constraints and a
meta-model.
2. Generate repair plan instances for violated constraints.
3. Compute the cost of different repair options.
4. Execute the selected repair plan, where selection is either choosing the cheapest
plan or asking the user (if there is more than one cheapest plan).
The fourth step is executing a selected repair plan which is simply a matter of
running the plan and performing the changes, and this is quite cheap, since costs have
already been computed for the plan and all sub-plans. Therefore, it is not necessary
to examine the fourth step in the efficiency analysis. As we discussed in Section 5,
cost calculation (step 3) also involves consistency checking (step 1) and plan instance
generation (step 2) because it simulates the change propagation process in doing look-ahead planning. As a result, the efficiency of the first three steps is considered together
in the third step. In addition, the third step is critical since it has the major impact
on the performance of our approach. Therefore, the focus of our efficiency analysis is
on the cost calculation algorithm. We need to assess how practical the algorithm is,
specifically, how well does it scale to larger problems?
Ideally, we would assess scalability directly by assessing our approach with large
designs. However, the efficiency of the approach depends not just on the size of the
model, but also on the primary changes made, so we would need large Prometheus designs along with well-motivated change scenarios and primary changes. Unfortunately,
we do not have any large Prometheus designs with associated well-motivated changes
that we could use to perform a direct assessment of scalability.
So, instead we adopt a more indirect approach, accepting that the conclusions that
we draw are necessarily somewhat weaker. The approach that we adopt to explore
our approach’s scalability is to firstly assess the efficiency of the approach on the
Weather Alerting System, with the well-motivated change scenarios that were used in
the effectiveness evaluation in the previous section. We then extend this analysis in a
number of ways in order to answer the question: what happens when the model grows?
6.3.1 Efficiency Analysis using the Weather Alerting Application
We begin by assessing the efficiency of the cost calculation algorithm using the Weather
Alerting System. Whilst this system is not large, it is a real system, and, as discussed
in the previous section, the change scenarios are also realistic. We therefore conducted
experiments on the design of the weather alerting system that we used to assess the
effectiveness of our approach40 . We report here on the six change scenarios that are
discussed in Section 6.2.
The existing design model contains 75 elements, and 64 consistency and multiplicity
constraints are considered. We report here the performance of the algorithm that has
pruning41 and the plan sorting heuristic. Table 2 gives information for each time the
tool is invoked (column 2) during every change scenario (column 1), including the
number of constraint instances in the initial model (column 3), how many of these
were initially violated (column 4), and some statistics about the goal-plan tree: the
number of goal nodes (column 5) and plan nodes (column 6) and the tree’s depth
(column 7). Table 3 gives efficiency measurements for the same invocations (column
2) and change scenarios (column 1). The times measured include the total execution
time in milliseconds (column 3), and the execution time spent evaluating constraints
(column 4, also in milliseconds). Finally, column 5 shows the percentage of the execution
time spent evaluating constraints (rounded to a whole percentage). It is noted that
“invocations” of the tool correspond to the entries in Table 1 that have numbers. For
instance, in the second change scenario (Variable Threshold) the tool is invoked three
times at steps 1, 6, and 10.
These results show that, despite a worst-case exponential complexity, the algorithm
is clearly practical for the Weather Alerting System, since execution time (with an unoptimised research prototype) was mostly a few seconds (mean = 5.7 seconds; median
= 3.9 seconds), with only one case taking more than 8 seconds. We might reasonably
also conclude that it would be likely to be practical for other similarly-sized designs.
40 All experiments were performed on a laptop running Windows XP and Java v1.5.0_06,
with an Intel Centrino 1.73Ghz CPU and 1GB RAM. Times (reported in milliseconds) are
an average of 5 runs (we ignored the first run, since it was inconsistent due to JVM startup).
For each run we collected the number of goal and plan nodes explored, the total time (broken
down into the constraint evaluation time, time taken to update models, and other time), and
the number of constraint instances.
41 The no-pruning case did not terminate as the algorithm fell into a cycle.
An interesting observation (right-most column in Table 3) is that the majority
of the execution time is spent checking constraints (minimum: 51%, maximum: 92%,
mean: 85%, median: 88%). The cases where less than 80% of the time was spent
checking constraints were Volcanic Ash - step 3 (78%), Multiple TAFs - step 1 (51%)
and Subscription - step 4 (65%).
In other words, checking whether constraints hold is a bottleneck in the process, and
any improvements to how constraints are checked could have a significant impact on
the efficiency of the overall process. We return to this issue in Section 6.3.5.
6.3.2 Effects of Increasing the Model Size
So, our results show that the algorithm is practical for the Weather Alerting System.
But what happens when the model becomes larger? How is the runtime affected? Suppose that the design model is made larger. We expect this to affect the cost calculation
algorithm in two key ways.
Firstly, a larger model means that the goal-plan tree is wider. This is because plans
which have a context condition of the form x ∈ Set(type) result in a plan instance for
each model element of the appropriate type. So making the model larger means that
Set(type) for various types becomes larger, and the related plans yield more options.
Secondly, an increase in the model size will result in more constraint instances, since
constraints are specified over certain types, and a constraint instance is generated for
each model element of the appropriate type. For example, a constraint of the form
“Context Percept inv:” will result in a constraint instance for each percept in the
model. The increase in number of constraints is significant, since typically 80-90% of
execution time is spent on checking constraints.
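A sketch of why the number of instances scales with the model (our illustration; the field names are hypothetical):

def instantiate_constraints(constraints, model_elements):
    # One constraint instance per model element whose type matches the
    # constraint's context type (e.g. "Context Percept inv:" yields one
    # instance per percept in the model).
    return [(c, e) for c in constraints
                   for e in model_elements if e["type"] == c["context"]]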
On the other hand, we do not expect that increasing the model size will result
in an increase to the depth of the goal-plan tree. This is because the depth of the
tree corresponds to the number of violated constraints, and there is evidence that
constraints are local in scope [44], in other words, a given constraint is only affected by
a certain number of model elements, and this number does not appear to grow as the
model grows. This means that given an initially consistent model which has had some
initial changes applied to it, the number of initially violated constraints is not expected
to depend on the size of the model.
In order to confirm these expectations we have run a simple experiment. We have
taken the first change scenario and rerun it with a smaller model. The first change
scenario — adding wind speed alerting — involves changes to the Discrepancy agent
only, and so we have created a smaller model which leaves out the Alerter and GUI
agents, which is done by having the Discrepancy agent directly send the user a message
(using the PrintMsg action) instead of sending an AlertDiscrepancy message. This
smaller model has 55 elements (compared with 75 in the original, larger, model).
Considering the first prediction, that the goal-plan tree is wider, we find that the
depth is unchanged when the model shrinks, but that the number of goal and plan
nodes does change with the model size. Since the tree depth is the same and the
number of nodes is higher, it follows that the width of the tree grows with the size
of the model, as expected. Specifically, we have that for the small model (55 model
elements) the cost calculation involves a goal-plan tree with 279 goal nodes and 539
plan nodes, whereas the larger model (75 model elements) involves a goal-plan tree
with 475 goal nodes and 1034 plan nodes.
Considering the second prediction, we observe that the number of constraint instances in the model has grown from 193 (small model) to 306 (large model). Furthermore, the overall number of constraints evaluated has gone from 10,984 (small model)
to 26,124 (large model). This growth is due to two factors: there being more constraints
to check at each invocation of the constraint checker, and there being more invocations
of the constraint checker (60 vs. 97) due to increases to the tree width. In other words,
as expected, the number of constraints grows as the model grows.
Finally, as noted earlier, the depth of the tree has, as expected, not been affected
by the change in model size.
We now consider in turn each of these consequences of increasing the model size in
order to gauge the effects of (e.g.) wider trees. In doing so we also measure the benefit
of pruning (which was ignored in the complexity analysis).
6.3.3 The Effect of Wider Trees
In order to assess the effect of having wider trees we run an artificial experiment (i.e.
not using the Weather Alerting Application).
Our simple artificial setting involves a design that has some number of roles, and
some number of agents. All of Prometheus’ consistency constraints and multiplicity
constraints in the Prometheus metamodel are used. However, the only constraint that
will be violated in this artificial setting is the one that states that all roles should be
associated with an agent: Context Role inv c: self.agent→size() ≥ 1. This constraint
is translated to the following repair plans44, where sa is short for self.agent:

P1  ct(self) ← for each i ∈ {1 . . . (1 − size(sa))}: !ct′(self)
P2  ct′(self) : x ∈ Type(sa) ∧ x ∉ sa ← Add x to sa
P3  ct′(self) ← Create x : Type(sa); Add x to sa
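For this artificial design the number of repair options at the goal node therefore grows linearly in N, as the following small sketch (ours) illustrates:

def repair_options(role_agents, all_agents):
    # One instance of P2 per existing agent not already linked to the role,
    # plus the single P3 instance that creates a fresh agent.
    options = [("P2: add", a) for a in all_agents if a not in role_agents]
    options.append(("P3: create and add", None))
    return options

# With one role linked to no agent and N = 160 agents in the model:
print(len(repair_options(set(), [f"agent{i}" for i in range(160)])))  # 161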
In order to explore how the algorithm performs as the number of repair plan instances is increased we have a design with a single role and N agents. This gives a
single violated constraint to fix, and by increasing N we increase the number of repair
options (since there is always a single instance of P3, but there are N instances of P2,
one for each agent). In other words, increasing N allows us to increase the width of
the goal-plan tree.
We have run our algorithm for a range of values of N , measuring the running time
— both total and constraint checking — as well as the impact of pruning, i.e. how
many nodes the algorithm avoided having to explore through pruning.
Figure 30 shows the runtime (in milliseconds). In this experiment pruning made
no significant difference, since there is nothing to distinguish between the agents (the
results in the graph are from the no-pruning run). As can be seen in Table 4, most of
the time was taken up with checking for violated constraints in the repair scope (line 20
of Figure 22). However, more importantly, even for a tree that's quite wide
(N = 160), the runtime remains reasonable, specifically, the execution time appears
to be polynomial in the width of the tree (as indicated by the complexity analysis in
Section 5.5).
44 The translation is not optimal because it also caters for constraints of the form size() ≥ n.
Change                 Invocation  Constraint  Initially Violated  Goal   Plan   Tree
                                   Instances   Constraints         Nodes  Nodes  Depth
1. Wind Speed Alert    1           306         4                   475    1034   19
2. Variable Threshold  1           307         3                   674    1093   27
                       2           320         2                   1143   1808   43
                       3           320         2                   117    185    15
3. Volcanic Ash        1           322         1                   61     91     12
                       2           331         3                   2848   4830   27
                       3           344         2                   2161   3609   43
                       4           346         2                   3539   5602   31
                       5           346         3                   4713   7755   43
4. Logging42           1           306         2                   2794   4355   43
5. Multiple TAFs       1           304         1                   21     34     11
                       2           304         1                   79     118    21
                       3           346         2                   3593   5698   31
                       4           346         3                   5665   9458   43
                       5           317         2                   111    179    17
6. Subscription        1           306         2                   2764   4382   31
                       2           311         3                   2884   4795   45
                       3           311         5                   3514   6512   43
                       4           335         3                   2723   4369   35

Table 2 Problem characteristics for the six change scenarios
Change                 Invocation  Total Time (ms)  Eval Time (ms)  %
1. Wind Speed Alert    1           860              783             91%
2. Variable Threshold  1           1297             1141            88%
                       2           1578             1391            88%
                       3           250              218             87%
3. Volcanic Ash        1           265              233             88%
                       2           4984             4467            90%
                       3           2437             1906            78%
                       4           5234             4739            91%
                       5           52203            46016           88%
4. Logging43           1           4922             4249            86%
5. Multiple TAFs       1           93               47              51%
                       2           359              312             87%
                       3           5359             4907            92%
                       4           7984             6537            82%
                       5           313              266             85%
6. Subscription        1           3859             3545            92%
                       2           4438             4001            90%
                       3           6203             5485            88%
                       4           5609             3630            65%

Table 3 Efficiency results from the six change scenarios
Agents (N)  Runtime (ms)  Constraint Checking (ms)  %
160         1964          1915                      98%
80          517           492                       95%
40          135           123                       91%
20          39            33                        85%
10          19            11                        58%

Table 4 Results for the cost algorithm for wider trees
Fig. 30 Performance of the cost algorithm for wider trees. The graph plots total time and constraint evaluation time (“total time” and “eval time”, in milliseconds) against the number of agents (1 to 160).
6.3.4 The Effect of Deeper Trees
We now briefly consider the algorithm’s performance as the number of constraints, and
consequently the depth of the tree, is increased. We use the same artificial model as in
the previous section, but instead of having one role and N agents, we have N roles
and one agent. Since each role has a single violated constraint, this gives N violated
constraints, and consequently a tree of depth N .
In this case pruning made a significant difference: for N = 8 without pruning the
algorithm considered 10,590 goal nodes and 31,736 plan nodes taking a total of 21,594
milliseconds, whereas with pruning the algorithm considered 1,673 goal nodes and 3,753
plan nodes taking 1,360 milliseconds. On the other hand, the heuristic of sorting plans
by their basic cost (line 5 of Figure 23) made no difference. As Figure 31
shows45, the algorithm (with pruning) performs well up to N = 8. In this experiment the
evaluation time was a smaller component of the total time. For instance, for N = 10
with pruning the total execution time was 11,453 ms, of which 5,597 ms was taken in
constraint evaluation, i.e. 48.9% of the execution time. For the cases of 8 and 4 roles with
pruning, these two times were 1,375 ms vs. 611 ms and 46 ms vs. 15 ms respectively.
Overall, the conclusion is that the algorithm does not scale well to a deeper tree.
However, it must be noted that, as argued in Section 6.3.2, we do not expect to see
deeper trees as the model size increases. In other words, the effects seen in this section
are not expected to be seen as models grow. However, this experiment has indicated
that pruning can provide significant benefits (e.g. 21.6 seconds vs. 1.4 seconds for
N = 8).
45 For N = 10 the no-pruning case ran out of memory.
Fig. 31 Performance of the cost algorithm in the second experiment. The graph plots total time with pruning, constraint evaluation time with pruning, and total time without pruning (in milliseconds) against the number of roles (1 to 10).
6.3.5 Constraint Checking
We now consider the role that constraint checking plays in the algorithm’s efficiency,
and its impact on scalability. We have observed earlier (Table 3) that typically 80-90% of the execution time is spent checking constraints. In order to better understand
exactly how this time is spent, we gathered additional data. Specifically, we measured, for each invocation of the tool, the number of times that the constraint checker
was invoked, and the number of individual constraint instances that were checked
(see the columns labelled “Constraint Checker Invocations” and “Constraint Instances
Checked” in Table 5). As can be seen from Table 5, across the change scenarios, each
time the constraint checker is invoked, it checks on average around 270-353 constraints
(see the column “Constraints per Invocation”).
However, there is room for significant improvement in the number of constraints to
be checked at each invocation (currently 270-353). Repairing an individual constraint46
does not tend to make wide-sweeping changes to the model, because design constraints
have a local scope which does not depend on the model’s size [44]. This means that there
is scope for improving efficiency: every time the constraint checker is invoked, it checks
all constraints. However, since only part of the design model will have changed since
the previous invocation, this is not necessary. In other words, most of the constraints
will not have been affected by the changes made in repairing an individual constraint,
and thus do not to be re-evaluated.
Indeed, there is a technique, termed “instant constraint checking” [44] which deals
with this problem. It tracks which entities are used to evaluate each constraint, and
then uses this information to work out which constraints might be affected by a change
to the design, and only re-evaluates these constraints. This technique has been shown
to scale to large models.
46 Recall that this does not include repairing other constraints.
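A minimal sketch in the spirit of this technique (our illustration, not Egyed's implementation): remember which model elements each constraint instance read during its last evaluation, and after a change re-evaluate only the instances whose recorded scope contains a changed element.

def recheck(constraints, scopes, changed_elements, evaluate):
    # scopes: maps each constraint instance to the set of model elements that
    # were read when it was last evaluated.
    affected = [c for c in constraints
                if scopes.get(c, set()) & changed_elements]
    # Only the affected instances are re-evaluated; all other instances keep
    # their previous verdicts.
    return {c: evaluate(c) for c in affected}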
Considering the efficiency of constraint checking in our algorithm, we would expect
that implementing Egyed’s “instant constraint checking” technique would significantly
improve efficiency because at each invocation of the constraint checker, instead of
checking 270-353 constraints, we would typically only need to check a much smaller
number. For example, considering the first change scenario, the first change to the
model, creating a new internal message, only has five constraints that may be affected
by the change (compared with the 306 constraints that apply to the initial model).
Returning to scalability, and the expected effects of increasing the model’s size,
recall that increasing the model size resulted in an increase to the number of constraints
being evaluated due to two factors: the tree being wider (i.e. there being more options
to check), and there being more constraints to check at each point. Based on Egyed’s
finding that the scope of a constraint does not depend on the model size, we therefore
expect that the second factor would be eliminated by the “instant constraint checking”
optimisation: only checking the constraints that are known to be affected by a given
change means that the number of constraints to be checked at each invocation is not
affected by the size of the model. Checking 5 constraints, as opposed to 300 or so,
can reasonably be expected to be considerably faster, and since the evaluation time is
dominated by constraint evaluation time (80-90%), reducing the constraint evaluation
time will have a significant impact on the overall runtime of the algorithm.
6.3.6 Scalability Conclusions
We have argued, and provided some experimental evidence, to show that an increase
in model size will result in a wider goal-plan tree, and in there being more constraint
instances, but that an increase in the size of the model will not result in a deeper
goal-plan tree.
This means that the evaluation time is polynomial in the size of the model. Furthermore, part of the growth in the execution time is due to there being more constraint
instances to be checked at each invocation of the constraint checker. We have argued
that this growth can be avoided by implementing Egyed’s “instant constraint checking”
technique, and that in fact the current constraint checking costs will be significantly
reduced by adopting this technique.
Our evaluation has also shown that pruning can have significant benefits.
Whilst our conclusions, as noted earlier, are necessarily somewhat tentative, we
believe that the arguments and evidence presented in this section give us reason to be
justifiably positive about the scalability of the cost calculation algorithm, and hence
the overall approach.
6.4 Threats to validity
Evaluating software engineering tools and techniques is challenging, and our evaluation
does have a number of limitations that need to be noted.
The comments below relate primarily to the effectiveness evaluation. The efficiency
evaluation is in a sense secondary, since there is no point in having an efficient solution
that is not effective. The main validity concern for the efficiency evaluation is that the
models used were artificial and tiny, which is why we also looked at the efficiency of the
algorithms when used with the weather alerting application, which whilst still small,
is at least a real system (i.e. not artificial).
Change                 Invocation  Constraint Checker  Constraint Instances  Constraints per
                                   Invocations         Checked               Invocation
1. Wind Speed Alert    1           97                  26,124                269.3
2. Variable Threshold  1           133                 41,184                309.7
                       2           157                 50,700                322.9
                       3           9                   2,925                 325.0
3. Volcanic Ash        1           6                   2,122                 353.7
                       2           155                 51,898                334.8
                       3           159                 55,300                347.8
                       4           518                 181,984               351.3
                       5           714                 250,976               351.5
4. Logging47           1           145                 44,784                308.9
5. Multiple TAFs       1           4                   1,236                 309.0
                       2           27                  8,343                 309.0
                       3           431                 133,730               310.3
                       4           776                 243,513               313.8
                       5           16                  4,830                 301.9
6. Subscription        1           431                 133,730               310.3
                       2           673                 208,992               310.5
                       3           741                 231,800               312.8
                       4           190                 64,320                338.5

Table 5 Additional details on the efficiency results
Internal validity
There are a number of issues relating to internal validity. Perhaps the biggest issue
is that we have not conducted an evaluation with human software engineers, but instead have used a “simulated user” that follows a well-defined maintenance process.
The maintenance process that we followed is well motivated by the research literature,
but as far as we are aware, it has not been established that the process is an accurate characterisation of the process followed by real software maintainers. Performing
an evaluation with human software maintainers is thus arguably the highest priority
direction for further work.
A second (potential) issue is that the changes being made to the system (e.g.
new requirements) may not represent reasonable realistic changes. However, we have
intentionally selected changes that arose in the actual maintenance and evolution of
the deployed weather alerting system, and so these changes clearly do represent real
changes in a real system.
Finally, we have used a metric that focuses on the number of changes done by the
tool. However, is the number of changes a good metric for effectiveness? We argue
that it is a reasonable measure, because making changes takes time and effort, and
is error-prone. Thus, saving the user from making changes is beneficial. However, we
do recognise that not all changes are equally costly to make, and in our analysis we
do distinguish between changes that are related to the designer’s “context” and that
are likely to be recognised and performed, and those “side effect” changes that need
to be done but are in a different context, and hence are more likely to be missed.
Overall, we believe that despite the number of changes not being a perfect metric, it
is nonetheless a reasonable (if somewhat rough) proxy, and that the results obtained
do have validity. Note that performing an evaluation with human software maintainers
would allow other, more direct, metrics to be used, such as time taken, and number of
errors made.
External validity
Another issue is related to the generalization of the results of our study to other
contexts. Our evaluation took place in a certain context: specifically, we considered a
certain set of changes to a weather alerting application which had been designed using
the Prometheus methodology. Each of these — choice of changes, choice of application,
and choice of methodology — raises a question about the external validity of our results.
In an attempt to ensure that our evaluation was not specific to the particular changes chosen,
we used a number of (real) changes, and, by mapping the changes against a taxonomy
(as described earlier in Section 6) we were able to ensure, and demonstrate, that a range
of change types was considered, and that therefore there is a case for the generality of
our results with respect to the selection of changes.
Regarding the choice of application, the weather alerting application is not a large
application and thus it is not clear that our results generalise to large systems. However,
its design is fairly typical for multi-agent systems, and we do not see any peculiarities in
its design that would prevent our results from generalising to other similar-sized agent
systems.
The Prometheus methodology is, in some ways, fairly similar to other agent-oriented
software engineering (AOSE) methodologies such as O-MaSE [50], and we would expect
that our results would generalise to other AOSE methodologies. However, there are
clear, and significant, differences between AOSE and object-oriented methodologies,
and so it is less clear that our approach is applicable with object-oriented
methodologies. To address this question we have also applied our approach
to a representative object-oriented approach, namely UML [27]. This demonstrated
that our approach is applicable to UML; that, at least for a single change scenario,
it was able to provide good support; and that the algorithm’s efficiency was sufficient
for a small to medium design. However, the UML evaluation was limited, and further
evaluation remains desirable.
Our approach assumes that the software maintainer is familiar with the system
when they perform changes in its design, since they need to make some primary changes
and select change propagation options. We believe that this is a reasonable assumption
in practice: in cases where designers are modifying a design that they are not
familiar with, they will typically perform exploratory activities to comprehend the
design before they begin making changes. Finally, how well our approach performs also
depends on the number of constraints and their quality. A large number of constraints
may make the tool slow to respond because more time is spent on checking constraints
and working out different plans for fixing violated constraints. On the other hand,
having more constraints tends to make the tool more helpful in terms of identifying
necessary secondary changes that are possibly missed by the designer. With regard to
the quality of constraints, our approach assumes that constraints are correctly specified
and that they come from a reasonable source (as discussed in Section 3). In particular,
it is worth noting that for UML, there are standard constraints that are defined in the
UML standard.
6.5 Evaluation Conclusions
The evaluation demonstrated that the approach is effective if a reasonable number of
primary changes is provided. However, in some cases there may be a large number of
repair options returned by the tool, which makes it hard for the user to select which
one to use. In practice this can be dealt with by ignoring the tool's list of options
and performing further changes (which often provides the tool with information that
enables it to return fewer options). A better approach, which needs to be investigated,
is to reduce the number of options by "staging" questions. Suppose we need to link a
percept with a plan and with an agent: instead of presenting a set of options where
each option specifies both a plan and an agent (which gives a cross product), we would
first ask for the choice of agent, and then, based on that choice, ask for a choice of
(relevant) plan, as the sketch below illustrates.
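To illustrate the idea (the class and method names below are hypothetical, not part of the CPA tool), the following sketch contrasts staging with presenting the full cross product:

import java.util.List;
import java.util.Scanner;
import java.util.function.Function;

// Hypothetical sketch of "staged" option selection: rather than asking the
// user to pick one (agent, plan) pair out of |agents| x |plans| combined
// options, we first ask for the agent, and then only offer the plans that
// are relevant to the chosen agent.
public class StagedSelection {

    static <T> T ask(String prompt, List<T> options, Scanner in) {
        System.out.println(prompt);
        for (int i = 0; i < options.size(); i++) {
            System.out.printf("  %d) %s%n", i + 1, options.get(i));
        }
        return options.get(in.nextInt() - 1);
    }

    // Staged: the user answers two small questions instead of choosing
    // among |agents| * |plans| options at once.
    static void linkPercept(List<String> agents,
                            Function<String, List<String>> plansOf,
                            Scanner in) {
        String agent = ask("Which agent should handle the percept?", agents, in);
        String plan = ask("Which plan of " + agent + " should it trigger?",
                          plansOf.apply(agent), in);
        System.out.println("Link percept -> " + agent + " / " + plan);
    }
}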
With regard to efficiency, we have shown that our algorithm is practical for small to
medium designs. We have examined scalability and argued that increasing the model
size will result in wider, but not deeper, trees, and that our approach is thus polynomial
in the model size. Furthermore, since a large proportion of the time (80–90%) is spent
evaluating constraints, there is scope for using “instant consistency checking” [44] to
significantly improve the efficiency of our algorithm.
Overall, our conclusion is positive, since the evaluation shows that the approach is
able to perform, on average, more than two-thirds of the actions in maintenance plans,
across a number of changes motivated by experience with a real application.
6.6 Basic cost analysis
As discussed in Section 6, the evaluation used a fixed set of basic action costs. We have
argued that the costs chosen are reasonable, but there are still some questions about
the selection of basic action costs. Specifically, we want to know how the selection
of different basic action costs might influence the recommended repair options. More
importantly, from a user’s perspective, we want to know how a user might decide which
basic action costs to use.
In this section we aim to shed some light on these questions. We have performed
further experiments to explore a few scenarios where the outcome is sensitive to the
basic costs, and, based on these experiments, we provide initial answers to these questions.
Our experiments consider two scenarios. In each scenario we investigate the effects
of varying the basic costs.
6.6.1 Scenario 1
The first scenario is one where the designer's plan (i.e. D) involves the creation of some new
elements. We have chosen the second change (i.e. implementing variable threshold
alerting) for the weather alerting system to explore this case, since the designer's plan
for this change involves the creation of new elements (i.e. steps 1, 2, and 6 in the
designer's plan) and the change is described in detail in Section 6.2. With the current
configuration (i.e. creation costs 0, connection and modification cost 1, and
disconnection and deletion cost 3; see the sketch below), steps 1 and 6 were done by
the designer while step 2 (i.e. creating the "ChangeThreshold" plan in the "GUI" agent)
was done by the tool.
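For concreteness, this configuration can be written down as a simple cost table. The following is an illustrative sketch only; the enum, map, and method names are ours, not the CPA tool's actual data structures.

import java.util.EnumMap;
import java.util.Map;

// Illustrative encoding of the basic action costs used in the evaluation:
// creation 0, connection and modification 1, disconnection and deletion 3.
public class BasicCosts {

    enum ActionType { CREATE, CONNECT, MODIFY, DISCONNECT, DELETE }

    static final Map<ActionType, Double> COST = new EnumMap<>(ActionType.class);
    static {
        COST.put(ActionType.CREATE, 0.0);
        COST.put(ActionType.CONNECT, 1.0);
        COST.put(ActionType.MODIFY, 1.0);
        COST.put(ActionType.DISCONNECT, 3.0);
        COST.put(ActionType.DELETE, 3.0);
    }

    // The cost of a repair option is the sum of the costs of its actions.
    static double costOf(Iterable<ActionType> actions) {
        double total = 0.0;
        for (ActionType a : actions) total += COST.get(a);
        return total;
    }
}

Under this table, connecting a percept to an existing plan costs 1, while creating a new plan and then connecting to it costs 0 + 1 = 1 as well; a zero creation cost therefore leaves the two kinds of option tied, and any non-zero creation cost tips the balance towards reusing existing elements, as the experiment below confirms.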
In our experiment we re-ran the change propagation with different creation costs:
0, 1, 2, 3, ..., 9, 10. We found that if the creation cost is non-zero, then the tool prefers
repair options that involve the connection of existing elements over repair options that
involve the creation (and subsequent connection) of new elements. This is because, for
non-zero creation cost, the cost of connecting to an existing element is less than the
cost of creating a new element and then connecting to it.
Specifically, for all of the cases where the creation cost was greater than zero we
had the same behaviour: after step 1 (create “SetNewThreshold” percept) was done by
the designer, the tool returned two options, both of which suggested making the “SetNewThreshold” percept be a trigger of an existing plan (e.g. “HandleNewRequestAWS”
or "SendAWSDataToSubscribed"). Therefore, neither of the two options returned by
the tool matches the designer's plan, since they do not include the creation of a new plan
(i.e. step 2) and setting the percept to be a trigger of this new plan.
6.6.2 Scenario 2
The second scenario is where the designer’s plan involves disconnection of elements. We
have chosen change 5 (i.e. having multiple “TAF Manager” agents) to explore this case
since the designer's plan has a few disconnections (steps 1–4). The current configuration
for this scenario has the disconnection cost equal to the cost of connection (i.e. 1).
With this configuration the tool did steps 3 and 4 while the designer needed to
do steps 1 and 2.
In our experiment we re-ran the change propagation with disconnection costs of 1,
1.1, 1.2, 1.5, 2.0, 2.5 and 3.0. We found that if the disconnection cost is greater than
1, then the tool prefers to propose repair options that do not include disconnections.
Specifically, after step 1 (done by the designer), we invoked the tool and it returned
one option, which does not match the designer's plan. The designer needed to
do step 2 (as in the initial experiment) and then re-invoke the tool. The tool
then returned one option, which has two connections and still does not match the
designer's plan. This is different from the outcome we obtained with the initial setting,
where, after step 2, the tool returned two options, one of which involves disconnections
and matches the designer's plan, resulting in the next two steps (3 and 4) being done by
the tool.
6.6.3 Conclusions Regarding Basic Costs
A key observation is that in our framework, the basic costs are simply used to differentiate between action types. Specifically, the recommended repair options are affected
not so much by the precise values of the basic action costs (e.g. 1.2 vs. 1.5), but by the
relative cost of different action types (e.g. is connection more or less expensive than
creation). In other words, in the experiments that we have run, we found a low level of
sensitivity to the precise costs. In the first scenario the change in behaviour occurred
when the cost of creation was increased from 0 to 1, increasing it further (up to 10)
made no difference. In the second scenario, we found that the change in behaviour occurred when the cost of disconnection was increased from 1 to 1.1, and that increasing
it further made no difference.
Therefore, in terms of advice to users, we suggest that one can think about the
costs not in terms of their numerical values, but in terms of encoding a preference
ordering between action types. For example, if the user decides that for a particular
maintenance scenario they would prefer options that recommend connecting with
existing entities rather than creating new entities, then this preference can be encoded
by setting the cost of creation to be non-zero; the precise value does not appear to
matter. Indeed, it may be desirable to specify the values indirectly in terms of
preferences, rather than directly in terms of numbers; but this is a topic for future work.
A sketch of how such a preference ordering could be turned into costs follows.
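As a sketch of this possible future mechanism (our illustration; the CPA tool does not currently offer it), a user-supplied preference ordering over action types could be mapped mechanically to numeric costs:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: derive numeric basic costs from a preference
// ordering over action types. Earlier in the list = preferred = cheaper.
// Since the experiments above suggest that only the relative order matters,
// the ranks 0, 1, 2, ... serve perfectly well as costs.
public class PreferenceToCosts {

    static Map<String, Double> costsFromPreference(List<String> preferredFirst) {
        Map<String, Double> costs = new HashMap<>();
        for (int rank = 0; rank < preferredFirst.size(); rank++) {
            costs.put(preferredFirst.get(rank), (double) rank);
        }
        return costs;
    }

    public static void main(String[] args) {
        // "Prefer connecting existing entities over creating new ones, and
        // avoid disconnection and deletion": one possible ordering.
        System.out.println(costsFromPreference(List.of(
            "connect", "modify", "create", "disconnect", "delete")));
    }
}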
Ultimately, a more detailed evaluation of the sort of changes that are done would
be needed to determine whether the costs proposed are, in general, reasonable; and in
particular, to establish whether there is a reasonable “one size fits most” set of basic
costs, or whether the user may need to experiment with different costs. In the latter
case, there may be scope for exploring mechanisms that learn basic costs from the
user’s actions (i.e. selection of repair options).
7 Related Work
As the process of change propagation aims to maintain consistency within software,
the work most closely related to ours is in the area of consistency maintenance, i.e.
detecting and resolving inconsistencies caused by changes to a wide range of software
artefacts, including design, specifications, documentation, source code, and test cases.
Various techniques and methods have been proposed in the literature to address the
different activities of the consistency management process, including: detecting overlaps
between software models, detecting inconsistencies, identifying the source, the cause
and the impact of inconsistencies, and resolving inconsistencies [35].
The influential Viewpoints framework [51] supports the use of multiple perspectives
in system development by allowing explicit "viewpoints" containing partial
specifications, which are expressed and developed using different representation styles and
development strategies. In their framework, inconsistencies arising between individual
viewpoints are detected by translating them into a uniform logical language. Such
inconsistencies are resolved by meta-level inconsistency-handling rules. As UML has
become the de facto notation for object-oriented software development, most research
work (e.g. [52, 53]) in consistency management has focused on problems relating to
consistency between UML diagrams and models. Such approaches have been further
advocated with the recent emergence of model-driven evolution [19]. Several approaches
(e.g. [54]) strive to define a fully formal semantics for UML by extending its current
metamodel and applying well-formedness constraints to the model. Other approaches
transform UML specifications into some mathematical formalism such as Petri nets [55]
or Description Logic [56, 57]. The consistency checking capabilities of such approaches
rely on the well-specified consistency checking mechanisms of the underlying mathematical
formalisms. However, it is not clear to what extent these approaches suffer from the
traceability problem: to what extent can a reported inconsistency be traced back to
the original model? Furthermore, the identification of transformations that preserve
and enforce consistency remains a critical issue at this stage [55].
Recently, Egyed [44] proposed a very efficient approach to checking inconsistencies
(in the form of consistency rules) in UML models. His approach scales up to large,
industrial UML models by tracking which entities are used to check each consistency
rule, then using this information to determine which rules might be affected by a change,
and re-evaluating only those rules. This work is complementary to ours: it provides
a rapid means of checking consistency (which supports our approach), but does not
tackle the issue of how to restore consistency.
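The scope-tracking idea can be conveyed with a small sketch. This is our own illustration of the principle described above, not Egyed's implementation, and all names in it are hypothetical.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of scope-based ("instant") consistency checking:
// each rule instance remembers which model entities its last evaluation
// touched; a change to an entity re-triggers only the rule instances
// whose recorded scope contains that entity.
public class InstantChecker {

    interface RuleInstance {
        // Evaluates the rule, reporting every entity it reads via 'touched'.
        boolean check(Set<String> touched);
    }

    private final Map<String, Set<RuleInstance>> scopeIndex = new HashMap<>();

    public void evaluate(RuleInstance rule) {
        Set<String> touched = new HashSet<>();
        rule.check(touched);
        for (String entity : touched) {
            scopeIndex.computeIfAbsent(entity, e -> new HashSet<>()).add(rule);
        }
    }

    // On a change, re-evaluate only the affected rule instances.
    public void onChange(String changedEntity) {
        Set<RuleInstance> affected =
            scopeIndex.getOrDefault(changedEntity, Set.of());
        for (RuleInstance rule : new HashSet<>(affected)) {
            evaluate(rule); // refreshes the rule's recorded scope as well
        }
    }
}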
There are approaches that go further than detecting inconsistencies. Several
approaches provide developers with a software development environment which allows
for recording, presenting, monitoring, and interacting with inconsistencies to help the
developers resolve those inconsistencies [58]. Other work also aims to automate
inconsistency resolution by having pre-defined resolution rules (e.g. [59]) or by identifying
specific change propagation rules for all types of changes (e.g. [60]). Such rules can
be formally defined following a logic-based approach (e.g. [59] used the Java rule
engine JESS, and [57] used Description Logic) or a graph-based approach (such as the
graph transformations used in [61]). However, these approaches suffer from correctness
and completeness issues, since the rules are developed manually by the user. As a result,
there is no guarantee that these rules are complete (i.e. that there are no inconsistency
resolutions other than those defined by the rules) or correct (i.e. that each resolution
actually fixes the corresponding inconsistency). In addition, a significant effort is
required to manually hardcode such rules when the number of consistency constraints
increases or changes.
Recently, Egyed [23] proposed an approach for fixing inconsistencies in UML models
which uses model profiling to locate choices of starting points for fixing an inconsistency
in a UML model. His work does not analyse consistency rules but rather observes
their behaviour during evaluation (i.e. model profiling) and consequently considers
consistency rules as black boxes. The main limitation of his work is that his approach
does not provide options for how to repair inconsistencies, but only suggests starting
locations (entities in the model) for fixing the inconsistency. Furthermore, the list of
locations identified by his approach may contain false positives, i.e. locations for which
there is no actual concrete fix that resolves a given inconsistency. These two issues
have been addressed in more recent work [62], which is limited in that it considers only
a single change at a time and does not consider the creation of model elements. By
contrast, our approach is provably complete, considers a number of changes at once,
and caters for the creation of model elements.
Similarly, the work of Briand et al. also looks at how to identify impacted entities
during change propagation using UML models [60]. It defines specific change propagation
rules (also expressed in OCL) for a taxonomy of changes. However, the major
difference between both of these works and ours is that their approaches do not
provide options to repair inconsistencies, but only suggest starting points (entities in the
model) for fixing the inconsistency.
There has been a range of work on the automatic repair of constraints in the area of
databases. In these approaches, production rules are often used to represent various
ways of fixing a given violated constraint: a rule's premise contains conditions, and if
the premise is satisfied, the actions specified in the consequent of the rule can be
executed (a generic sketch is given below). One key difference between their work and
ours is that we generate abstract, structured repair plans that are instantiated at
runtime. The work in [63] proposes an approach to generate repair actions for a given
constraint in the context of active integrity maintenance of an object-oriented database.
Similarly to our work, their approach also works automatically from a declarative
specification, but it can only handle simple Boolean constraints in Skolem normal form.
The approach described in [64] also supports the automatic generation of production
rules, and can take constraints in more complicated forms, i.e. existential and universal
quantifications. In addition, they also allow the user to modify the generated rules, as
well as the order in which constraints are to be checked and the order in which rules
are to be executed. In our framework, the user is also able to modify generated repair
plans, but they do not need to take care of the order of constraints or rules to be
executed; in fact, as we show in Section 5, with regard to the specific problem that we
are trying to solve, the order in which constraints are fixed does not affect the outcome.
Similarly to our approach, they also involve user intervention in selecting repair actions,
but they use a low-level text-based mechanism and no tool support is discussed.
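As a generic illustration of this production-rule style (our sketch, not the syntax of any of the systems cited above):

import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Generic sketch of a repair production rule: if the premise holds
// (i.e. the constraint is violated in a particular way), execute the
// repair actions in the consequent.
public class ProductionRule<Model> {

    private final Predicate<Model> premise;       // violation condition
    private final List<Consumer<Model>> actions;  // repair actions

    public ProductionRule(Predicate<Model> premise, List<Consumer<Model>> actions) {
        this.premise = premise;
        this.actions = actions;
    }

    // Fire the rule once; returns true if it applied.
    public boolean fire(Model model) {
        if (!premise.test(model)) return false;
        actions.forEach(a -> a.accept(model));
        return true;
    }
}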
The work described in [65] covers a similar class of constraints (as in [64]), but they
specifically raise the issue of user involvement in the process of resolving constraint
violations, i.e. allowing the user to choose between possible alternative repairs that
reflect the user's intention or the application requirements. They propose several repair
strategies for reducing the set of repair actions presented to the user for selection, such
as minimizing the number of changes necessary to the database or the transactions
triggering the repairs. These strategies are, however, analytical and generic (i.e. not
developed based on domain knowledge), which differs from our repair plans. Nonetheless,
we share the concern of one of their repair strategies in terms of only including
repair plans which make changes necessary to the application model. There has also
been work on deriving repair actions in deductive databases. For example, the work in
[66] proposes a system that generates actions from closed, range-restricted first order
logic formulae. Since their repair algorithm depends on the rules of the database and
the closed-world assumption, it can automatically find repairs for violated existential
formulae without user intervention. Such assumptions are not available in the case of
design models, and thus a mechanism for interactively querying the user, in some cases,
to select a specific repair action for violated constraints is needed.
Our work is also related to the approach for repairing inconsistent distributed
and structured documents proposed by Nentwich et al. [22]. Constraints between
distributed documents are expressed in xlinkit, a combination of first order logic with
XPath expressions. Their framework is powerful in terms of automatically deriving a
set of repair actions from the constraints. In addition, they use abstract repair actions
as a way to concisely represent the potentially large number of concrete ways of
resolving constraint violations. However, they consider only a single change, and do
not take into account dependencies among inconsistencies and potential interactions
between the repair actions for fixing them. As a result, their approach does not
explicitly address the cascading nature of change propagation. Our repair plan generator is,
however, built on top of the intuition and ideas proposed in their approach.
Our cost calculation algorithm can be seen as a form of reasoning about an agent's
plans, albeit in a special setting. There has been previous work investigating the
interaction between plans, either within a single agent or between different agents in a
multi-agent system (e.g. [67, 68]). There are some similarities between this work and
ours; for example, a plan's cost can be viewed as its resource consumption, and the fact
that fixing one constraint can partially or totally repair other constraints can be seen
as a positive interaction between plans. However, there are several major differences
between their work and ours. First of all, in their work the selection between applicable
plans is not controllable. Secondly, the algorithms of [68] rely on a finite plan-goal tree,
whereas our algorithm does not require a complete tree; rather, the search tree is pruned
as soon as cheaper plans are identified. The issue of calculating the cost of a plan or a
goal in the context of existing plans has been previously addressed in [69]. The aim of
their work is to determine whether an agent should adopt a new goal. They estimate
the cost (with a range) rather than calculating the exact cost as our work does. In
addition, the plans which they consider contain only primitive actions, and they require
complete plans. We also found that it is not easy to adapt their approach to deal with
selecting between alternative plans, as opposed to deciding whether to adopt a goal.
Surprisingly, the specific problem of selecting between applicable plans in BDI agents
has not received much attention. One particular work that tackles this issue is presented
in [70]. They extend AgentSpeak(L) to deal with intention selection in BDI agents.
They also use a lookahead technique to work out the potential cost of a plan and choose
the best plan to execute, and their plan representation is also hierarchical. However,
there are several differences between their work and ours. Firstly, they impose a limit
on the plan-goal tree by giving the depth of the tree as an input to their algorithm.
Secondly, they assume that the environment changes rapidly and expect worst-case
scenarios when looking ahead. In the domain that we are interested in, the environment
is static, so we always choose the least-cost plans. Finally, they do not consider costs
in the context of existing plans.
Our process for computing cost — performing lookahead over an and-or tree —
clearly resembles a planning problem, and it can be viewed as such, with a few rather
specific requirements. Firstly, because we have repair plans we want to use an HTN
(Hierarchical Task Network) planner. Secondly, we want to collect the set of all best
(cheapest) plans, so we need a planner that supports a notion of plan cost, and is
able to collect all cheapest plans. Finally, because we have a large, potentially infinite,
search space, we want a planner that does pruning and loop detection. Unfortunately,
we do not know of any planner that meets all three requirements. Perhaps the closest
is SHOP2 [71], which is an HTN planner that supports collecting all best plans and
that does branch-and-bound pruning. However, SHOP2 does not do loop detection,
and although it provides iterative deepening, which can be used to avoid looping, this
does not return the cheapest solution(s), as required. We encoded a UML design48 and
associated constraints and repair plans using SHOP2. Our experiments have shown that
SHOP2 gives the same results as our cost calculation if it terminates, but that it is
susceptible to looping, and that SHOP2 is slightly slower than our Java implementation
(0.172 seconds vs. 0.157 seconds49) in this particular application.
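To make the three requirements concrete, the following toy sketch (ours; neither the CPA tool's implementation nor SHOP2) combines cheapest-cost lookahead over an and-or goal-plan tree with branch-and-bound pruning and loop detection:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy sketch: cheapest-cost lookahead over an and-or goal-plan tree with
// branch-and-bound pruning and loop detection on the current search path.
public class CheapestRepair {

    // A goal (or-node) is achieved by any one of its applicable plans;
    // a plan (and-node) has a basic cost plus the cost of all its subgoals.
    record Goal(String name, List<Plan> plans) {}
    record Plan(String name, double basicCost, List<Goal> subgoals) {}

    static final double INFEASIBLE = Double.POSITIVE_INFINITY;

    // Cheapest cost of achieving 'goal'. 'bound' is the best cost found so
    // far by the caller (for pruning); 'active' holds the goals on the
    // current path (for loop detection).
    static double cost(Goal goal, double bound, Set<String> active) {
        if (!active.add(goal.name())) return INFEASIBLE; // loop: abandon branch
        double best = INFEASIBLE;
        for (Plan p : goal.plans()) {
            double c = p.basicCost();
            for (Goal g : p.subgoals()) {
                double limit = Math.min(best, bound);
                if (c >= limit) { c = INFEASIBLE; break; } // prune: already too dear
                c += cost(g, limit - c, active);
            }
            best = Math.min(best, c);
        }
        active.remove(goal.name()); // backtrack
        return best;
    }

    public static void main(String[] args) {
        Goal connect = new Goal("connect",
            List.of(new Plan("connect-existing", 1, List.of())));
        Goal repair = new Goal("repair", List.of(
            new Plan("create-and-connect", 0, List.of(connect)),
            new Plan("reconnect-only", 1, List.of())));
        System.out.println(cost(repair, INFEASIBLE, new HashSet<>())); // 1.0
    }
}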
Optimal planning is an area clearly related to our work. Optimal planning extends
traditional planning by continuing to search for the optimal plan instead of stopping
when the first plan is found. There has been a range of work applying decision theory
to classical planning to deal with optimal solutions [72, 73]; these approaches are similar
in terms of integrating a utility model into the plans. One significant trend in extending
classical planning to deal with optimization is to use heuristics to guide the search [74].
Each node in the state space is evaluated based on a heuristic function, and heuristic
planners then try to search the state space by looking at the most promising branches
first. Note that calculating heuristic functions can be computationally expensive, often
in proportion to the accuracy of the heuristics.
48 The video-on-demand system [44]; see http://peace.snu.ac.kr/dhkim/java/MPEG/
49 On a Windows XP PC with a 1.73GHz CPU and 1GB RAM, using Java v1.5.0_06 and SHOP2 v1.3 running with GNU CLISP v2.3 for Windows.

8 Conclusions and future work

We have presented a novel agent-oriented approach to deal with change propagation
by fixing inconsistencies in design models. We have developed the repair plan generator,
an important component of the framework, which embodies a new method for
automatically generating repair plans from OCL constraints. Our repair plan generator
is able to take common OCL constraints as inputs, and perform a translation that has
been proven to be sound and complete [40]. We also raised the issue of there being
multiple applicable repair plans and of how to select amongst them; to deal with this
problem, we have proposed a cost calculation mechanism for repair plans.
The approach has been implemented as a prototype tool (the CPA tool) that
assists the designer in propagating changes, and we evaluated the effectiveness of the
approach using the design of a real weather alerting application. Overall, our conclusion
is positive, since the evaluation shows that the approach is able to perform, on average,
approximately two-thirds of the actions in maintenance plans, across a number of
changes motivated by experience with a real application.
Our repair plan generator has rules that cover a subset of OCL expressions. Although
those rules are currently sufficient to deal with the consistency constraints defined
in Prometheus, we are aware that other OCL expressions (that we have not covered)
may be needed. For instance, let and if-then-else expressions, the iterate operation,
and collection types other than Set, such as Sequence and OrderedSet, are not yet
addressed. Therefore, potential future work may involve extending the repair plan
generator to fully support all forms of OCL expressions.
Our algorithms for calculating repair plan costs require an extensive lookahead
search, which is potentially expensive. However, our efficiency evaluation showed that
(a) our cost calculation algorithm is practical for the weather alerting application; and
(b) there are reasons to believe that the algorithm does scale to larger models.
Nonetheless, there remains future work that could be done to make the approach more
scalable, and applicable to larger designs. This may include an investigation of the
interaction between constraints in order to limit the number of plans to be explored and
to allow for earlier pruning. In addition, as discussed in Section 6.3.5, checking for
violated constraints takes up most of the execution time, so future work may
involve applying more advanced constraint checking techniques to improve the overall
performance. We expect that employing such techniques would make a substantial
difference to the execution time of our cost algorithms.
The evaluation assessing the effectiveness of our CPA tool showed that in some cases
the tool proposed a large number of repair options, which makes it difficult for the user
to decide which one to use. This issue leads to a topic for future work concerning a
method to reduce the number of repair options presented to the user. One approach is
"lazy" selection (or "staging" questions). For example, suppose that we need to link a
percept with a role and with an agent: instead of presenting a set of options, where
each option specifies both a role and an agent (which gives a cross product), we would
first ask for the choice of role, and then, based on that choice, ask for a choice of
(relevant) agent.
We would also like to perform a more extensive study on the applicability of our
approach to other design types such as UML models. This may result in an
implementation of a CPA-like tool that can be integrated with existing UML modelling tools
(e.g. ArgoUML50). In addition, an interesting topic for future work is to apply our
approach to change propagation between design models and source code. A preliminary
investigation (not reported here) on how our approach can be extended to support
design and source code change propagation found that it is feasible, provided that the
gap between design and code is relatively small. For instance, UML 2.0 (object-oriented)
[32] and CAFnE51 (agent-oriented) [75] have such potential.

50 http://argouml.tigris.org
51 CAFnE is an extension of the Prometheus Design Tool that allows more design details to be specified and produces fully executable code.
One issue that arises in our approach relates to the use of inconsistency as a driver
for change. As seen in the evaluation, not all changes result in inconsistency, and in
these cases the approach will not be able to completely identify the desired secondary
changes. An opposite issue is that, as argued in [76], not all inconsistencies should be
fixed; this is easy to deal with by simply allowing certain constraint types or instances
to be marked as "to be ignored".
Finally, there are several potential extensions to our evaluation for assessing the
effectiveness of our approach. Firstly, assigning costs to basic repair actions (e.g.
creation, connection, disconnection) is a means for the user to adjust the change
propagation process. The results of our evaluation also indicate several places where
the outcome may be sensitive to the basic costs. Although we have presented some
results regarding the selection of basic costs, we want to perform a more thorough
exploration of the effects of varying the basic costs. In addition, we would also like
to explore how to extend the cost function to account for the amount of user input
required by a given repair option. Moreover, we have used a cost algorithm to estimate
and drive the inconsistency repair process under the assumption that the cheapest
repair plans are preferable from the user's perspective. Although in the case
study reported in Section 6 the cheapest solution also turns out to be the one that is
intuitively the most appropriate, the cheapest-cost heuristic may not always lead to the
best way to resolve inconsistencies. As part of future work, we would like to perform
more detailed user experiments to gain more understanding of this issue.
The case study that we have used represents only a single medium-size design
model. Therefore, an important part of our future work involves evaluating our
approach with a range of different realistic applications and change scenarios, where design
models can grow to thousands of model elements. Furthermore, we want to improve the
usability of the CPA tool and perform an evaluation study which involves real software
designers using our tool. This would give us much more information on how helpful the
tool is in practice. Since the tool is still a prototype that is not ready for industry use,
one area of future work is to develop an industry-grade tool. Nonetheless, although
the results obtained from the evaluation we conducted are relatively preliminary and
necessarily limited, they are quite encouraging and serve as a concrete indication that
the approach developed is promising.
Acknowledgements We would like to thank Lin Padgham for her comments on this work.
References
1. Vliet, H.V.: Software engineering: principles and practice, 2nd edn. John Wiley & Sons
Inc. (2001)
2. Bennett, K.H., Rajlich, V.T.: Software maintenance and evolution: a roadmap. In:
A. Finkelstein (ed.) The Future of Software Engineering, pp. 73–87. ACM Press, Limerick, Ireland (2000)
3. Rajlich, V.: Changing the paradigm of software engineering. Communications of the ACM
49(8), 67–70 (2006)
4. Jennings, N.R., Wooldridge, M.: Agent-Oriented Software Engineering. In: F.J. Garijo,
M. Boman (eds.) Proceedings of the 9th European Workshop on Modelling Autonomous
Agents in a Multi-Agent World : Multi-Agent System Engineering (MAAMAW-99), vol.
1647, pp. 1–7. Springer-Verlag: Heidelberg, Germany (1999)
5. Bergenti, F., Gleizes, M.P., Zambonelli, F. (eds.): Methodologies and Software Engineering for Agent Systems. The Agent-Oriented Software Engineering Handbook. Kluwer
Publishing (2004)
6. Henderson-Sellers, B., Giorgini, P. (eds.): Agent-Oriented Methodologies. Idea Group
Publishing (2005)
7. Dam, K.H., Winikoff, M.: Comparing agent-oriented methodologies. In: P. Giorgini,
B. Henderson-Sellers, M. Winikoff (eds.) Agent-Oriented Information Systems, Lecture
Notes in Computer Science, vol. 3030, pp. 78–93. Springer (2003)
8. Tran, Q.N.N., Low, G.C.: Comparison of ten agent-oriented methodologies. In:
B. Henderson-Sellers, P. Giorgini (eds.) Agent-Oriented Methodologies, chap. XII, pp.
341–367. Idea Group Publishing (2005)
9. Buckley, J., Mens, T., Zenger, M., Rashid, A., Kniesel, G.: Towards a taxonomy of software
change. Journal on Software Maintenance and Evolution: Research and Practice 17(5),
309–332 (2005)
10. Rajlich, V.: A model for change propagation based on graph rewriting. In: Proceedings
of the International Conference on Software Maintenance (ICSM), pp. 84–91. IEEE Computer Society (1997)
11. Yau, S., Collofello, J., MacGregor, T.: Ripple effect analysis of software maintenance. IEEE
Computer Software and Applications Conference (COMPSAC ’78), pp. 60–65 (1978)
12. Luqi: A graph model for software evolution. IEEE Transactions on Software Engineering
16(8), 917–927 (1990)
13. Rajlich, V.: A methodology for incremental changes. In: Proceedings of the 2nd International Conference on eXtreme Programming and Flexible Process in Software Engineering,
pp. 10–13. Cagliari, Italy (2001)
14. Gwizdala, S., Jiang, Y., Rajlich, V.: JTracker - a tool for change propagation in Java. In:
7th European Conference on Software Maintenance and Reengineering (CSMR 2003), pp.
223–229. IEEE Computer Society (2003). Benevento, Italy
15. Hassan, A.E., Holt, R.C.: Predicting change propagation in software systems. In: ICSM
’04: Proceedings of the 20th IEEE International Conference on Software Maintenance, pp.
284–293. IEEE Computer Society, Washington, DC, USA (2004)
16. Kleppe, A.G., Warmer, J.B., Bast, W.: MDA explained: The Model Driven Architecture:
Practice and Promise. Addison-Wesley, Boston, MA (2003)
17. Mellor, S.J., Clark, A.N., Futagami, T.: Guest editors’ introduction: Model-driven development. IEEE Software 20(5), 14–18 (2003)
18. Schmidt, D.C.: Guest editor’s introduction: Model-driven engineering. Computer 39(2),
25–31 (2006)
19. van Deursen, A., Visser, E., Warmer, J.: Model-driven software evolution: A research
agenda. In: D. Tamzalit (ed.) Proceedings 1st International Workshop on Model-Driven
Software Evolution (MoDSE), pp. 41–49. University of Nantes (2007)
20. Mens, T.: Introduction and roadmap: History and challenges of software evolution. In:
T. Mens, S. Demeyer (eds.) Software Evolution. Springer Berlin Heidelberg (2008)
21. Bratman, M.E.: Intentions, Plans, and Practical Reason. Harvard University Press, Cambridge, MA (1987)
22. Nentwich, C., Emmerich, W., Finkelstein, A.: Consistency management with repair actions. In: ICSE ’03: Proceedings of the 25th International Conference on Software Engineering, pp. 455–464. IEEE Computer Society (2003)
23. Egyed, A.: Fixing inconsistencies in UML models. In: Proceedings of the 29th International
Conference on Software Engineering (ICSE ’07), pp. 292–301. IEEE Computer Society,
Washington, DC, USA (2007)
24. Object Management Group: Unified Modeling Language (UML) 1.4.2 specification
(ISO/IEC 19501). http://www.omg.org/technology/uml/ (2005)
25. Object Management Group: Object Constraint Language (OCL) 2.0 Specification. http:
//www.omg.org/docs/ptc/03-10-14.pdf (2006)
26. Padgham, L., Thangarajah, J., Winikoff, M.: Tool support for agent development using
the Prometheus methodology. In: First international workshop on Integration of Software
Engineering and Agent Technology (ISEAT 2005). Melbourne, Australia (2005)
27. Dam, H.K., Winikoff, M.: Supporting change propagation in UML models. In: Proceedings
of the 26th IEEE International Conference on Software Maintenance (ICSM 2010) (2010)
28. Dam, H.K., Le, L.S., Ghose, A.: Supporting change propagation in the evolution of enterprise architectures. In: Proceedings of the 14th IEEE International Enterprise Distributed
Object Computing Conference (EDOC) (2010)
29. Mathieson, I., Dance, S., Padgham, L., Gorman, M., Winikoff, M.: An open meteorological
alerting system: Issues and solutions. In: V. Estivill-Castro (ed.) Proceedings of the 27th
Australasian Computer Science Conference, pp. 351–358. Dunedin, New Zealand (2004)
30. Padgham, L., Winikoff, M.: Developing intelligent agent systems: A practical guide. John
Wiley & Sons, Chichester (2004). ISBN 0-470-86120-7
31. Jayatilleke, G.B.: A model driven component agent framework for domain experts. PhD
thesis, RMIT University, Australia (2007)
32. Object Management Group: UML 2.0 Superstructure and Infrastructure Specifications.
http://www.omg.org/technology/uml/ (2004)
33. Atkinson, C., Kühne, T.: Model-driven development: A metamodeling foundation. IEEE
Software 20(5), 36–41 (2003)
34. Kruchten, P.: The 4+1 view model of architecture. IEEE Software 12(6), 42–50 (1995)
35. Spanoudakis, G., Zisman, A.: Inconsistency management in software engineering: Survey
and open research issues. In: K.S. Chang (ed.) Handbook of Software Engineering and
Knowledge Engineering, pp. 24–29. World Scientific (2001)
36. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns. Addison-Wesley Professional (1995)
37. Rao, A.S.: AgentSpeak(L): BDI agents speak out in a logical computable language. In:
MAAMAW ’96: Proceedings of the 7th European workshop on Modelling autonomous
agents in a multi-agent world : agents breaking away, pp. 42–55. Springer-Verlag (1996)
38. Padgham, L., Winikoff, M., Poutakidis, D.: Adding debugging support to the Prometheus
methodology. Engineering Applications of Artificial Intelligence, special issue on Agent-oriented Software Development, 18(2), 173–190 (2005)
39. Poutakidis, D., Padgham, L., Winikoff, M.: Debugging multi-agent systems using design
artifacts: the case of interaction protocols. In: AAMAS ’02: Proceedings of the first international joint conference on Autonomous agents and multiagent systems, pp. 960–967.
ACM, New York, NY, USA (2002). DOI http://doi.acm.org/10.1145/544862.544966
40. Dam, K.H.: Support software evolution in agent systems. PhD thesis, RMIT University,
Australia (2008)
41. Nentwich, C., Capra, L., Emmerich, W., Finkelstein, A.: xlinkit: a consistency checking and
smart link generation service. ACM Transactions on Internet Technology 2(2), 151–185
(2002)
42. Egyed, A., Wile, D.S.: Support for managing design-time decisions. IEEE Transactions
on Software Engineering 32(5), 299–314 (2006)
43. Levenshtein, V.I.: Binary codes capable of correcting deletions, insertions, and reversals.
Soviet Physics Doklady 10, 707–710 (1966)
44. Egyed, A.: Instant consistency checking for the UML. In: Proceedings of the 28th International Conference on Software Engineering (ICSE ’06), pp. 381–390. ACM, New York,
NY, USA (2006)
45. Dam, K.H., Winikoff, M., Padgham, L.: An agent-oriented approach to change propagation
in software evolution. In: Proceedings of the Australian Software Engineering Conference
(ASWEC), pp. 309–318. IEEE Computer Society (2006)
46. Chapin, N., Hale, J.E., Kham, K.M., Ramil, J.F., Tan, W.G.: Types of software evolution
and software maintenance. Journal of Software Maintenance 13(1), 3–30 (2001)
47. Opdyke, W.F.: Refactoring: A program restructuring aid in designing object-oriented application frameworks. Ph.D. thesis, University of Illinois at Urbana-Champaign (1992)
48. Fowler, M., Beck, K.: Refactoring: improving the design of existing code. Addison-Wesley
Longman Publishing Co., Inc., Boston, MA, USA (1999)
49. Arnold, R., Bohner, S.: Software Change Impact Analysis. IEEE Computer Society Press
(1996)
50. Padgham, L., Winikoff, M., DeLoach, S., Cossentino, M.: A unified graphical notation
for AOSE. In: M. Luck, J.J. Gomez-Sanz (eds.) Proceedings of the Ninth International
Workshop on Agent Oriented Software Engineering, pp. 61–72. Estoril, Portugal (2008)
51. Finkelstein, A.C.W., Gabbay, D., Hunter, A., Kramer, J., Nuseibeh, B.: Inconsistency
handling in multiperspective specifications. IEEE Transactions on Software Engineering
20(8), 569–578 (1994)
52. Elaasar, M., Briand, L.: An overview of UML consistency management. Technical Report
SCE-04-18, Carleton University, Department of Systems and Computer Engineering (2004)
53. Kuzniarz, L., Reggio, G., Sourrouille, J.L., Huzar, Z. (eds.): UML 2003, Modeling Languages and Applications. Workshop on Consistency Problems in UML-based Software
Development II, 2003:06. Blekinge Institute of Technology, Ronneby (2003)
54. Bodeveix, J.P., Millan, T., Percebois, C., Camus, C.L., Bazex, P., Feraud, L.: Extending
OCL for verifying UML models consistency. In: Kuzniarz et al. [77], pp. 75–90
55. Engels, G., Kuster, J.M., Heckel, R., Groenewegen, L.: Towards consistency-preserving
model evolution. In: Proceedings of the International Workshop on Principles of Software
Evolution (IWPSE), pp. 129–132. ACM Press (2002)
56. Van Der Straeten, R., Mens, T., Simmonds, J., Jonckers, V.: Using description logics to
maintain consistency between UML models. In: P. Stevens, J. Whittle, G. Booch (eds.)
UML 2003 - The Unified Modeling Language, pp. 326–340. Springer-Verlag (2003). LNCS
2863
57. Mens, T., Straeten, R.V.D., Simmonds, J.: A framework for managing consistency of
evolving UML models. In: H. Yang (ed.) Software Evolution with UML and XML, pp.
1–31. Idea Group Publishing (2005)
58. Grundy, J., Hosking, J., Mugridge, W.B.: Inconsistency management for multiple-view
software development environments. IEEE Transactions on Software Engineering 24(11),
960–981 (1998)
59. Liu, W., Easterbrook, S., Mylopoulos, J.: Rule based detection of inconsistency in UML
models. In: Kuzniarz et al. [77], pp. 106–123
60. Briand, L.C., Labiche, Y., O’Sullivan, L., Sowka, M.M.: Automated impact analysis of
UML models. Journal of Systems and Software 79(3), 339–352 (2006)
61. Mens, T., Van Der Straeten, R., D’Hondt, M.: Detecting and resolving model inconsistencies using transformation dependency analysis. In: O. Nierstrasz, J. Whittle, D. Harel,
G. Reggio (eds.) Model Driven Engineering Languages and Systems, vol. 4199, pp. 200–
214. Springer-Verlag (2006)
62. Egyed, A., Letier, E., Finkelstein, A.: Generating and evaluating choices for fixing inconsistencies in UML design models. In: Proceedings of the 23rd IEEE/ACM International
Conference on Automated Software Engineering (ASE ’08), pp. 99–108. IEEE Computer
Society, Washington, DC, USA (2008)
63. Urban, S., Karadimce, A., Nannapaneni, R.: The implementation and evaluation of integrity maintenance rules in an object-oriented database. In: Proceedings of the Eighth International Conference on Data Engineering, pp. 565–572 (1992)
64. Ceri, S., Fraternali, P., Paraboschi, S., Tanca, L.: Automatic generation of production rules
for integrity maintenance. ACM Transaction Database Systems 19(3), 367–422 (1994)
65. Gertz, M., Lipeck, U.W.: An extensible framework for repairing constraint violations. In:
Proceedings of the IFIP TC11 Working Group 11.5, First Working Conference on Integrity
and Internal Control in Information Systems, pp. 89–111. Chapman & Hall, Ltd. (1997)
66. Moerkotte, G., Lockemann, P.C.: Reactive consistency control in deductive databases.
ACM Transaction Database Systems 16(4), 670–702 (1991)
67. Clement, B.J., Durfee, E.H.: Top-down search for coordinating the hierarchical plans of
multiple agents. In: AGENTS ’99: Proceedings of the third annual conference on Autonomous Agents, pp. 252–259. ACM Press (1999)
68. Thangarajah, J., Winikoff, M., Padgham, L., Fischer, K.: Avoiding resource conflicts in intelligent agents. In: Proceedings of the 15th European Conference on Artificial Intelligence,
ECAI’2002, pp. 18–22. IOS Press (2002)
69. Horty, J.F., Pollack, M.E.: Evaluating new options in the context of existing plans. Artificial Intelligence 127(2), 199–220 (2001)
70. Dasgupta, A., Ghose, A.K.: CASO: a framework for dealing with objectives in a constraint-based extension to AgentSpeak(L). In: Twenty-Ninth Australasian Computer Science
Conference (ACSC 2006), pp. 121–126. Australian Computer Society, Inc. (2006)
71. Nau, D.S., Au, T.C., Ilghami, O., Kuter, U., Murdock, J.W., Wu, D., Yaman, F.: SHOP2:
An HTN planning system. Journal of Artificial Intelligence Research (JAIR) 20, 379–404
(2003)
72. Blythe, J.: Decision-theoretic planning. AI Magazine 20(2), 37–54 (1999)
73. Williamson, M.: Optimal planning with a goal-directed utility model. In: K.J. Hammond
(ed.) Proceedings of the Second International Conference on AI Planning Systems, pp.
176–181. AAAI (1994)
74. Ephrati, E., Pollack, M.E., Milshtein, M.: A cost-directed planner: Preliminary report.
In: Proceedings of the 14th National Conference on Artificial Intelligence (AAAI), pp.
1223–1228 (1996)
75. Jayatilleke, G., Padgham, L., Winikoff, M.: Component agent framework for non-experts
(CAFnE) toolkit. In: R. Unland, M. Calisti, M. Klusch (eds.) Software Agent-Based Applications, Platforms and Development Kits, Whitestein Series in Software Agent Technologies and Autonomic Computing, pp. 169–195. Birkhauser Basel (2005)
76. Fickas, S., Feather, M., Kramer, J. (eds.): Proceedings of the Workshop on Living with
Inconsistency. Boston, USA (1997)
77. Kuzniarz, L., Reggio, G., Sourrouille, J.L., Huzar, Z. (eds.): UML 2002, Model Engineering, Concepts and Tools. Workshop on Consistency Problems in UML-based Software
Development, 2002:06. Blekinge Institute of Technology, Ronneby (2002)