Utility Functions in Autonomic Systems: a State of the Art

Alistair Doswald
Pervasive and Artificial Intelligence Research Group
Department of Informatics
University of Fribourg
Email: [email protected]
Abstract—The purpose of this paper is to present a state of the art for autonomic systems, highlighting the use of utility functions in autonomic systems as part of the analysis/decision component of the autonomic loop. In particular, the usefulness of utility functions when compared to other analysis strategies is explored. In order to offer suitable perspective, this paper also presents other mechanisms that have recently been researched to satisfy the analysis/decision needs of autonomic systems.
Keywords: Artificial Intelligence, Autonomic systems, Utility Functions
I. INTRODUCTION
The premise of autonomic computing, as proposed in 2001 by IBM [6], is to develop systems that are largely self-sufficient, requiring only high-level instructions to perform their task. The need for such systems is attributed to the ever-increasing complexity of computing systems, which requires ever more personnel and ultimately results in a bottleneck restricting the use of even more sophisticated systems [5]. The complexity here is not derived from the difficulty of a single component, but rather from the interactions and states of a very large number of components, which traditionally would either be managed by human operators or could even be unmanageable. The proposed solution to this problem is bio-inspired, the concept being borrowed from the autonomic nervous system. A definition from [12] is:
    The autonomic nervous system (ANS or visceral nervous system) is the part of the peripheral nervous system that acts as a control system functioning largely below the level of consciousness, and controls visceral functions.
The word autonomic itself comes from ancient Greek and means self-governing. The concept of delegating low- or mid-level decisions to the system, and the notion of self-, are the basis on which autonomic computing is formed. A fully autonomic system must have the following properties [6]:
• Self-configuration: done according to high-level goals; the user specifies what they need, and the system automatically sets its configuration parameters to cater to those needs.
• Self-optimisation: optimal use of resources. When trade-offs are involved, once again the system optimises to meet high-level requirements. In many cases, self-optimisation can simply be seen as dynamic self-configuration.
• Self-healing: the autonomic system detects and repairs errors. This can be real healing, for example repairing corrupted software, or management, such as rerouting around or sealing off problematic components. What is important is that the system remains reasonably tolerant to any error that may occur, and reactive to these errors.
• Self-protection: the system protects itself not only from malicious attacks, but also from critical errors by users. The system reacts to attacks, but also manages its configuration to protect against them in the first place.
We will see, however, that autonomic systems often implement only one or two of the self- properties.
On the more practical side, the generally accepted method is to have an autonomic manager interact with a managed element, the managed element being a component necessary to achieve a business objective (e.g. a web server). The task of the autonomic manager is to continuously monitor the managed element, note important events, decide on a course of action, and then effect the changes on the managed element. This method of interacting with the managed element is known as the autonomic control loop. The design of this loop is free, but the reference design is known as the MAPE-K loop, which stands for:
• Monitor the managed element, through sensors. Sensors can monitor the physical environment (e.g. CPU temperature) or software elements (e.g. log files, probes in code).
• Analyse the monitored information for relevant states and events.
• Plan the action the system must take from the analysed data.
• Execute the planned actions and changes in the managed elements, through effectors.
• Knowledge: a gathering of information which can be consulted at each step of the control loop to provide context for the operation. This information could be input by a human operator, or it could be learned information gathered during the autonomic system's operation time.
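To make the loop concrete, the following minimal Python sketch shows one possible shape of a MAPE-K manager. It is only an illustration of the structure described above: the sensor and effector names, the threshold and the loop period are our own placeholder assumptions, not part of any reference implementation.

import time

class AutonomicManager:
    """Minimal MAPE-K skeleton; all names are illustrative placeholders."""

    def __init__(self, sensors, effectors):
        self.sensors = sensors      # dict of callables returning measurements
        self.effectors = effectors  # dict of callables applying changes
        self.knowledge = {}         # shared context consulted by every phase

    def monitor(self):
        # Collect raw measurements from the managed element.
        return {name: read() for name, read in self.sensors.items()}

    def analyse(self, measurements):
        # Derive relevant states and events; a trivial threshold check here.
        return {"overloaded": measurements.get("cpu_load", 0.0) > 0.8}

    def plan(self, symptoms):
        # Decide which actions to take from the analysed symptoms.
        return ["scale_out"] if symptoms["overloaded"] else []

    def execute(self, actions):
        # Apply the planned actions through the effectors.
        for action in actions:
            self.effectors[action]()

    def run(self, period_s=5.0):
        while True:
            measurements = self.monitor()
            symptoms = self.analyse(measurements)
            self.execute(self.plan(symptoms))
            self.knowledge["last"] = measurements  # retained as knowledge
            time.sleep(period_s)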
Although more precise than the concept of self-, MAPE-K remains a relatively high-level description of the problem. Even though monitoring and execution can be complex and are the subject of research, the difficulty of autonomic computing lies in the analysis and planning phases - and to some extent the knowledge - which require the greatest use of artificial intelligence. One method to handle these phases is through the application of policy. In this case policy is defined as a method to translate business requirements into actions of the system. The simplest of these is known as Event-Condition-Action (ECA) policy, which defines a set of rules for the system: if event occurs and condition is satisfied then do action. This system has two obvious failings: the first is that a very knowledgeable human administrator is required to translate business objectives into the set of rules, and the second, more problematic, is that the set of rules can grow to be very complex, which will lead to errors and unforeseen behaviours.
Beyond ECA we have goal policies: the user specifies a set of goals, and a dissociated process in the system makes sure that the necessary steps to fulfil the goals are taken. However, in the case of multiple goals and policies, there is no way to handle conflict or interaction between policies. This problem can be solved by using utility-based policy.
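The contrast between the kinds of policy can be sketched in a few lines of Python. The rules, state fields and helper functions below are invented for illustration; the point is only that ECA enumerates situations explicitly, while a utility-based policy ranks whatever actions are admissible.

def eca_policy(event, state):
    # ECA: explicit rules; situations not covered by a rule fall through.
    if event == "high_load" and state["queue_length"] > 100:
        return "add_server"
    if event == "low_load" and state["servers"] > 1:
        return "remove_server"
    return None

def utility_policy(state, actions, predict, utility):
    # Utility-based: score the predicted outcome of every admissible
    # action and return the action with the highest utility.
    # `predict` and `utility` are assumed, system-specific callables.
    return max(actions, key=lambda a: utility(predict(state, a)))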
Utility is a measure of relative satisfaction, which can directly express the business value of a state of the system. With utility-based policy, we simply maximise the total utility of the system to ensure that the business needs are met to the maximal ability of the system. Concurrent policies are no longer a problem, as their interactions can be modelled in the utility function. However, by doing this, we shift the problem to the elaboration of a utility function that adequately represents the system. In section II we shall see the manners in which this may be dealt with, and the properties of utility, while in section III we will see the general design of autonomic systems using utility functions. In section IV we will present another possible direction using adaptive policy for comparison purposes, while in section V we will compare the results of utility-based autonomic systems to those of more traditional management methods. The conclusion to this work will then be presented in section VI.
II. THE CASE FOR UTILITY FUNCTIONS

Utility functions, and the notion of utility, initially came from the field of economics. The idea is that we can express relative satisfaction through utility: while it cannot give any information on the intrinsic value of one object, it can compare its value to that of a second one. Of course, in economics utility is commonly linked to money, which gives a simple baseline on which to define utility. However, although the utility of a computing system can sometimes be directly linked to the financial gain it provides, in many situations this will not be the case, requiring a more abstract definition of utility (e.g. user-perceived utility). More formally, given an event with possible outcomes A or B, we can say that we prefer A to B (A ≻ B) or that we are indifferent between A and B (A ∼ B). This notion of preference has the following attributes: orderability, transitivity, continuity, substitutability, monotonicity, decomposability. If these principles are upheld, then there exists a utility function U : X → ℝ (where X is a consumption set) such that U(A) > U(B) ⇔ A ≻ B and U(A) = U(B) ⇔ A ∼ B (see the sketch after the list below). Using utility functions for autonomic computing has several advantages:
• Utility already has an existing formal mathematical body capable of expressing notions such as risk-aversion.
• Utility functions have already been extensively used in AI, most notably for intelligent agents, which have a lot in common with autonomic systems [1].
• Utility functions allow for (and indeed require) a separation between the analysis of the data and the planning and execution mechanisms of the autonomic control loop, with the latter two handled by an appropriate optimisation algorithm [6]. This allows for flexibility, especially when contrasted with ECA policy. As an added bonus, advances in optimisation algorithms will also benefit autonomic systems that use utility functions.
• Utility functions can serve as a very high-level specification of the behaviour of the system. This allows business objectives to be directly translated into service-level objectives when used with an appropriate optimisation and modelling algorithm [11].
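The sketch announced above shows, in Python, how a utility function induces the preference relation: any real-valued function over outcomes yields a complete ordering. The configuration attributes and weights are arbitrary illustrations.

def prefers(utility, a, b):
    # A ≻ B iff U(A) > U(B); A ∼ B iff U(A) = U(B).
    ua, ub = utility(a), utility(b)
    if ua > ub:
        return "A ≻ B"
    if ua < ub:
        return "B ≻ A"
    return "A ∼ B"

# Example: utility of a server configuration trading response time
# against cost (the weights are arbitrary, for illustration only).
u = lambda cfg: -2.0 * cfg["response_ms"] - 0.5 * cfg["cost"]

print(prefers(u, {"response_ms": 40, "cost": 10},
                 {"response_ms": 90, "cost": 5}))  # prints "A ≻ B"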
Moreover, research into utility functions continues. In [1], for example, the authors define a new notion of conditional utility and use it to define utility difference networks. The goal of conditional utility is to satisfy additive analogues of the chain rule and Bayes' rule, while utility difference networks are similar to Bayesian networks. Not only do these concepts provide additional descriptive power, but the notion of utility difference networks also allows for a structured elicitation of a utility function, with a simple algorithm for its construction.
Despite their clear advantages, utility functions suffer from the difficulty of creating the utility values and functions for large and complex systems [11]. Indeed, [6] notes that utility functions are “extremely hard to define, as every aspect that influences the decision by the utility function must be quantified”. However, constructing an exact utility function is vital, since we will pair the utility function with an optimisation algorithm, and that algorithm will try to maximise the given utility function, not the real utility of the system. Aside from the obvious use of human expertise, we may be able to use mathematical tools, or we can turn to algorithmic methods to elicit the utility functions. In [3], for example, the authors propose a method to automatically generate a mapping between application-level characteristics, which provide utility, and environment-level ones, on which the application-level characteristics depend. Measures of both types of characteristics provide multi-dimensional graphs; various statistical moments are calculated from the environment graph, and the best curve fit between these moments and the corresponding points in the application-level graph provides the utility function. This method has several advantages: it is dissociated from any specific domain as it requires only data, the collection of data can be automated depending on the situation, and once the utility function is developed, it is directly mapped to monitorable values. An interesting observation is that the researchers found that it is best to have one utility function per task. Two examples are proposed:
• A VoIP application used to carry out various tasks by a user: usability of the application was rated by the user, providing the application-level characteristics, while the environment characteristics (bandwidth, latency, packet loss) were varied during the experiment.
• An FTP service, where the utility is equated to throughput, which is decomposable into application-level characteristics such as protocol handshaking, packet loss and retransmission. Environment-level factors are the raw characteristics of the network: raw throughput, latency, fluctuations, etc. In this case the utility function becomes a weighted sum of the statistical moments of packet inter-arrival time (Xi) and socket throughput (Yi), such as U = a1X1 + a2X2 + a3X3 + a4X4 + a5X5 + b1Y1 + b2Y2 + b3Y3 + b4Y4 + b5Y5. The weights ai and bi are provided by curve-fitting methods.
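A least-squares fit is one plausible reading of how the weights ai and bi could be obtained; the sketch below uses synthetic data in place of real measurements, and the use of numpy's lstsq is our choice, not a detail given in [3].

import numpy as np

rng = np.random.default_rng(0)

# 200 observations of the ten explanatory moments: X1..X5 (packet
# inter-arrival time) followed by Y1..Y5 (socket throughput).
# Synthetic placeholders standing in for real measurements.
M = rng.random((200, 10))

# Observed utility for each observation (e.g. measured FTP throughput).
u_obs = rng.random(200)

# Least-squares fit of U = a1*X1 + ... + a5*X5 + b1*Y1 + ... + b5*Y5.
weights, *_ = np.linalg.lstsq(M, u_obs, rcond=None)
a, b = weights[:5], weights[5:]

def utility(x_moments, y_moments):
    # Evaluate the fitted utility function on new moment vectors.
    return float(a @ x_moments + b @ y_moments)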
In [13] a different method is proposed to automate the creation of utility functions: evolve them through genetic programming. In this case the setting is the self-healing aspect of autonomic systems, and more precisely the manner in which anomalous behaviour can be detected. To construct the utility function, the genome consists of a predicate grammar. The training of the genomes involves two test suites, one clean and the other infected with malware. These test suites provide metrics from various measured outputs such as CPU time or heap memory. The details of the setup can be seen in Fig. 1.

Fig. 1. Architecture used to evolve utility functions

The aim of the
evolved utility function here is to detect when the situation is anomalous, drawing on the input metrics. The fitness of a given utility function lies in how effective it is at maximising the number of malicious activities correctly identified, while minimising the number of false positives and false negatives. In section V we will see that this approach is successful, but in our opinion there is one drawback to the provided example: it is not quite a utility function in one regard. Being constructed from a predicate grammar, the utility function can only provide boolean values, yet one of the main attractions of utility is to provide comparable values for optimisation. As such, even though a proof of concept in generating a function is provided, extra work should be done so that the function can quantify the severity of the problem.
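The fitness criterion described above can be summarised by a short sketch. The candidate is a boolean predicate evolved by the genetic program; the equal weighting of the error types below is our assumption, not the exact scoring used in [13].

def fitness(candidate, clean_runs, infected_runs):
    # Reward correctly flagged malicious runs, penalise false alarms
    # (clean runs flagged) and misses (infected runs not flagged).
    true_positives = sum(1 for m in infected_runs if candidate(m))
    false_negatives = len(infected_runs) - true_positives
    false_positives = sum(1 for m in clean_runs if candidate(m))
    return true_positives - false_positives - false_negatives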
III. DESIGN OF UTILITY-BASED AUTONOMIC SYSTEMS

Now that we've explored the concept and advantages of utility functions, we will consider how they fit into the general design of the systems. So far, there is no consensus on how the design of the Analysis, Planning and Execution (APE) part of the control loop should be done. However, some general design trends can be observed, for example the use of multiple autonomic agents, or the partition of APE functionality within autonomic systems into modules. The reference paper for utility in autonomic systems [11] proposes one such system: a data centre in which resources are shared between different applications. The given solution is a two-level system composed, at the lower level, of autonomic application environments capable of managing their own behaviour and their relationships with other autonomic elements. They are capable of locally optimising the resources allocated to them through the utility function Ui(Si, Di). Fig. 2 shows the architecture of one such application environment. We can partition the elements in this schema according to the APE model: the utility calculator and service-level utility function U(S,D) provide the analysis, the modeler and system performance model S(C,R,D) provide elements for planning the action to take, while the controller uses the input from both to execute the appropriate actions on the managed system. At the higher level, the resource arbiter
calculates the optimal resource allocation R* that maximises the global utility, taking as input the utility functions Ui(Ri) provided by the application managers. These functions are updated either when an application manager needs more resources, or upon a request made by the resource arbiter. The optimisation problem is typically an NP-hard resource allocation problem, and will be solved (or approximated) by an appropriate algorithm.

Fig. 2. Modules and dataflow in an application manager. S: service level, D: demand, D': predicted demand, C: control parameters, Rt: current resource level
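The arbiter's task can be stated compactly: choose the allocation (R1, ..., Rn) that maximises the sum of the Ui(Ri). The brute-force search below is only a statement of the problem in code - it enumerates every allocation and is exponential in the number of applications - whereas a real arbiter would use an approximate solver.

from itertools import product

def arbitrate(utilities, total_resources):
    # utilities: list of callables Ui(Ri) -> float, one per application.
    # Exhaustively search all allocations summing to at most the total.
    best_alloc, best_value = None, float("-inf")
    levels = range(total_resources + 1)
    for alloc in product(levels, repeat=len(utilities)):
        if sum(alloc) > total_resources:
            continue
        value = sum(u(r) for u, r in zip(utilities, alloc))
        if value > best_value:
            best_alloc, best_value = alloc, value
    return best_alloc

# e.g. arbitrate([lambda r: r ** 0.5, lambda r: 0.3 * r], 10)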
A similar two-tier utility-function architecture can also be found in [4], in the context of autonomic load balancers (LB) for multi-tiered web sites. In this case we have a higher-level LB that receives requests from the Internet and sends them to one of N web servers. At the lower level, an application LB decides to which cluster a request will be routed. Unfortunately, [4] doesn't give any details on the web-server LB or its interaction with the application-level LB, concentrating only on the latter. The application LB
characterises utility according to response time and query throughput, and modifies the system through two policies: an f-policy, which decides to which server cluster a request should be redirected, and an s-policy, which determines how many servers should be allocated to each cluster. The f-policy is re-evaluated at an interval of 30 seconds, while the s-policy is re-evaluated every 300 seconds due to the cost associated with moving a server from one cluster to another. The state space S of all possible solutions is typically very large, and the utility function to maximise is non-linear and has no closed-form expression, which means that the optimisation used is necessarily a heuristic search. The architecture of the autonomic LB is shown in Fig. 3, and once more we can decompose it along the lines of the APE model: analysis is done by the performance model solver (which translates the system state and workload intensity into expected response time and query throughput) and the utility function, planning is done through the heuristic search, and the autonomic agent executes the modifications on the controlled system.
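Since [4] only states that a heuristic search is used, the following generic hill climber is merely one representative of that family, not the authors' algorithm: it repeatedly samples a neighbouring configuration (e.g. moving one server between clusters) and keeps it when its utility improves.

import random

def hill_climb(initial, neighbours, utility, iterations=1000):
    # Generic local search: `neighbours` returns the configurations
    # reachable in one step, `utility` scores a configuration.
    current = initial
    for _ in range(iterations):
        candidate = random.choice(neighbours(current))
        if utility(candidate) > utility(current):
            current = candidate
    return current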
Fig. 3. Architecture of an autonomic load balancer

The works in [4] and [11] are quite similar to each other, and in some ways their physical architecture and purpose are also quite similar. This is not necessarily the case in general: [7] presents the application of a resource-management autonomic system to a ubiquitous computing system, or more precisely a smart home. In this example, the resources that are managed are the sensors present in the house, and they are exploited by the applications provided by the smart house. The general design of the system is presented in Fig. 4. At the application level the authors discuss the notion of Quality of Context (QoC), where the context is the detail provided to an application by the sensors. Depending on the application, the requirements on the various sensors differ: for example, to locate a lost key, high-precision position is important, but refresh rate not so much. Autonomicity is provided in this system at the level of the context services, leading once again to many autonomic agents managing the resources of the system. However, in
this case there is no higher-level autonomic system to provide management to the agents; rather, the cooperation is designed to be emergent. Each autonomic context service attempts to maximise its utility, where utility is defined as the QoC necessary to satisfy a request made by an application. More precisely, the utility function calculates the distance between the expected QoC and the provided QoC, and the system seeks to minimise this value. The autonomic manager then executes the modifications by subscribing to or unsubscribing from
the various context providers. This system is then extended to self-healing, when a service provider becomes unable to provide context and the system reroutes to another, and to self-configuration, when the system automatically upgrades to use newly installed service providers or sensors.
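A minimal reading of this utility is a negated distance between expected and provided QoC, so that maximising utility closes the QoC gap. The Euclidean form and the attribute names below are our assumptions, not details given in [7].

def qoc_utility(expected, provided):
    # Negative Euclidean distance over the named QoC attributes;
    # weighting attributes differently (e.g. precision over refresh
    # rate for key-finding) would be a straightforward extension.
    gap = sum((provided[k] - expected[k]) ** 2 for k in expected) ** 0.5
    return -gap

print(qoc_utility({"precision": 0.9, "refresh_hz": 1.0},
                  {"precision": 0.7, "refresh_hz": 1.0}))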
Fig. 4. Layers of the system providing quality of context

All the examples up until now have been mostly concerned with resource optimisation, and though this seems to be the main topic of research into autonomic systems at this time, there are also other self- functionalities that make use of distributed autonomic agents. In [2] the addressed problem is the self-deployment of applications within a distributed environment, which falls under the topic of self-configuration. The idea is that a network is split into autonomic-enabled nodes, with each node having a certain amount of resources and a child list of nodes to which it has a direct connection. Applications that run on this network can be distributed according to an application graph. When a user starts an application, the node becomes the root node for the application and decides which components to deploy locally, while the other components are delegated to the child nodes that provide the highest utility. Those nodes then spread the computation in the same manner. There is no higher-level autonomic manager: as in [7], overall utility is intended to be emergent. The utility function in this context was designed to respect the following policies: nodes that have a higher degree of connectivity are preferred, as are faster and less busy nodes, and nodes with faster communication links. Preference is also given to high-priority jobs. Under these preferences, the problem of self-configuration for an individual application becomes finding the highest-utility mapping between the edges E in the application graph and the links L in the network. However, since the system uses only local utility values at each step, this mapping will never be truly maximised. The system can, however, perform a limited self-optimisation by triggering a reconfiguration if a parent node of an application perceives that its utility has dropped below a certain threshold.
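The stated preferences suggest a simple local scoring rule of the kind sketched below; the linear form, attribute names and weights are illustrative assumptions rather than the function given in [2].

def node_utility(node, job_priority, weights=(1.0, 1.0, 1.0, 1.0)):
    # Prefer well-connected, fast, lightly loaded nodes with fast
    # links, and favour high-priority jobs.
    w_conn, w_speed, w_idle, w_link = weights
    score = (w_conn * node["degree"]
             + w_speed * node["cpu_speed"]
             + w_idle * (1.0 - node["load"])
             + w_link * node["link_bandwidth"])
    return score * job_priority

def best_child(children, job_priority):
    # Delegate a component to the child node with the highest utility.
    return max(children, key=lambda n: node_utility(n, job_priority))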
It is not always the case that multiple autonomous agents
will be used in a distributed context. In [8], for example, an autonomic workload manager optimises the use of resources in the context of cloud computing, with the high-level architecture presented in Fig. 5.
Fig. 5. High-level architecture of an autonomic workload mapper for cloud computing

The role of the autonomic manager
here is to adaptively assign incoming tasks to different sites. The manner in which this is done depends on the task to be achieved: for workflow execution, utility is specified according to response time and profit, while for query workload execution it is defined in terms of response time and quality of service (QoS). In the case of workflow execution, the system must determine how best to map workflows to the resources provided by the cloud, while for query workload execution the system must determine the level of resources to allocate to the queries for execution.
We've seen that in many cases there are similarities in the general architecture of autonomic systems using utility functions, but are there other common themes? The answer would be: not many. One recurring design in self-optimising systems is the use of priority classes to simplify the calculation of the utility of incoming tasks. In [4], for example, these reflect the value of the customers that use the website, with better-value customers (i.e. those that are more likely to spend money) having a higher class. These classes are often defined in terms of platinum, gold, silver and bronze [6][4][11].
At the moment there seems to be no clear general methodology or best practices for designing an autonomic system using utility functions. In some cases, such as [8], a design methodology is proposed, but it remains very high level: utility-property selection, utility-function definition, cost-model development, representation design, optimisation-algorithm selection and finally control-loop implementation. Before such a thing as best practice for the design of utility-based autonomic systems is created, using utility functions for policy would have to become the standard, and likely be widely implemented. For now, however, this is not the case, and in the next section we will see some alternative methods to handle the APE part of the control loop.
IV. BIO-INSPIRED ALTERNATIVES

Using utility functions provides systems that are simple in their concept, while remaining flexible as long as the paired optimisation algorithm is well chosen. However, it is not the only method that shares those characteristics. Bio-inspired algorithms, for example, have proven to provide an efficient exploration of a problem space, and can also be defined in a simple, generic manner.
The goal of the autonomic system presented in [9] is to use genetic algorithms for self-configuration purposes, while aiming to obtain an optimal configuration. The problem proposed as a case study is the diffusion of data to a set of remote mirrors across dynamic and unreliable networks; to do so, the autonomic system computes an overlay tree of the mirrors along network connections. This configuration is then sent to the mirrors, telling them with which mirrors they will communicate. Reconfiguration is triggered by changing network conditions. The genotype for this situation is relatively simple: a topology vector gives the existing links of the underlying network, and each individual consists of a vector of 1s and 0s indicating whether the corresponding link in the topology vector is active or not. This system is sufficiently simple to be reusable, and could be used as-is for the problem we've already seen in [2]. The design of the genetic algorithm itself is relatively unimportant for the problem, but choosing the fitness function is less trivial. In this case three metrics are considered: cost, performance, and reliability. Each of these metrics has an associated fitness sub-function: Fc for cost, Fe1 and Fe2 for performance, and Fr1 and Fr2 for reliability. These are linearly combined into a general fitness function FF = a1Fc + a2(Fe1 + Fe2) + a3(Fr1 + Fr2), where the weights ai denote user preference. Running the algorithm until stabilisation, or for a limited number of generations, provides the configuration for the system.
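The genome and fitness combination of [9] translate almost directly into code. The sketch below assumes the fitness sub-functions are supplied as callables; their internals, and the details of the bit-vector representation, are only stated in the paper at the level shown here.

import random

def random_individual(n_links):
    # One bit per link of the topology vector: 1 = link active.
    return [random.randint(0, 1) for _ in range(n_links)]

def overall_fitness(ind, subfns, a1, a2, a3):
    # FF = a1*Fc + a2*(Fe1 + Fe2) + a3*(Fr1 + Fr2); the weights ai
    # express user preference, subfns supplies the five callables.
    return (a1 * subfns["Fc"](ind)
            + a2 * (subfns["Fe1"](ind) + subfns["Fe2"](ind))
            + a3 * (subfns["Fr1"](ind) + subfns["Fr2"](ind)))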
While the setting for the autonomic system in [9] was close to the one discussed in [2], the setting for the reinforcement-learning-based algorithm discussed in [10] is exactly the same as the one described in [11], being done by the same research lab. This time, however, they use reinforcement learning (RL) instead of utility functions. Rather than using a pure RL system, the paper considers a hybrid RL: an RL system observes the existing autonomic system (in this case the application managers and resource arbiter), and uses the (T+1) first recorded observations for training, according to the supervised-learning-inspired algorithm presented in Fig. 6. This approach to generating a policy system has a couple of advantages. It is generic, requiring little to no knowledge of the system to be applied, and more importantly it is efficient: compared to several queuing policies, the RL system scored 10 to 30% better.

Fig. 6. The algorithm used for hybrid RL training

Aside from the interest of the research in [9] and [10], we can consider that these systems are not mutually exclusive with the use of utility functions. Indeed, when considering [9], it is clear that the system could become utility-based simply by replacing the fitness function with one that considers utility (and they are already quite close). As for [10], by its nature it cannot function in tandem with a utility function. However, it could take over from a utility-based system after having used it for a learning phase, though it would first have to be shown that such a system could provide better results than an already sophisticated utility-based autonomic system.
V. RESULTS AND COMPARISONS
In the previous section we saw how solutions using utility functions have been developed to handle the analysis, planning and execution phases of autonomic computing. In this section, we will present the tangible results of those solutions. However, before seeing how utility functions compare to other solutions for the autonomic control loop, we will briefly discuss the creation of utility functions. In section II we saw that despite their advantages, utility functions remain hard to design, leading to research to simplify and automate the process. Since this area of interest is relatively new, there isn't much data comparing automatically generated utility functions against standard hand-designed functions. In [13], however, the authors compare their genetically programmed utility function against a hand-designed one. The hand-designed utility function is a threshold function that takes into account the current value of, and the difference to the last measured value of, every monitored resource. This leads to a threshold that depends on 32 parameters; by comparison, the evolved functions use only 4 to 8 parameters. Better yet, the generated utility functions were about 10% better than the already good hand-designed one.
Although we have seen that utility functions ease the architectural requirements of the system, we have not yet discussed their actual efficiency. We've seen that [8] proposed an autonomic workload manager for cloud computing, and that one of their examples was query workload execution, where utility was defined in terms of response time and QoS. The aim was to test the number of QoS goals met under a varying QoS tolerance. Five different configurations were tested: No Adapt, in which no adaptation takes place; Adapt 1, which uses an ECA policy; Adapt 2, in which utility is considered only in response-time terms; Adapt 3, in which Adapt 2 is applied only when it is predicted that response-time targets will be missed; and Adapt 4, in which utility is considered in terms of
QoS. Two interesting facts can be extracted from the results shown in Fig. 7. The first is that ECA policy actually performs quite well - better, in fact, than a sub-optimal definition of utility. However, as QoS tolerance lowers, well-defined utility functions outperform ECA policy, making it possible to meet QoS goals even with tight QoS tolerance. This is due to the flexibility brought by the utility function, which allows the optimisation algorithm to find that it is better to drop some queries in order to obtain better utility.

Fig. 7. Number of QoS goals met by the autonomic query workload execution
In [4] the authors also test their utility-based autonomic system against more standard algorithms, comparing their implementation of a load balancer against two different configurations: one in which the resources (clustered servers) are dedicated to different classes of customers, and one in which the customers are distributed according to a standard round-robin (RR) algorithm irrespective of class. The results are shown in Fig. 8. Results are measured in utility, which is reasonable since a well-designed utility function will correspond to the business requirements of the system. In this case we can see the autonomic system far outperforming the standard server configurations: when a large surge of lowest-value customers arrives, it is able to discriminate between their utility and the utility of higher-priority customers, and favour the latter.

Fig. 8. A website using autonomic load balancers compared to dedicated and RR load balancing
Finally, in [2], the authors choose to favour emergent overall utility for their autonomic self-deploying system rather than having a central manager oversee the global utility of the system. Instead of testing their solution against existing solutions, they test it against an optimal algorithm, which has global knowledge of the system and chooses the best configuration, and a semi-optimal algorithm, which has full knowledge of the system but deploys in a greedy manner. The results in Fig. 9 are extremely interesting: the optimal algorithm naturally has the highest utility, but at a prohibitive cost in a reasonably large system; the autonomic system, on the other hand, is very fast, and from 8 vertices onwards obtains a greater overall utility than the semi-optimal algorithm, despite being about an order of magnitude faster.

Fig. 9. Autonomic deployment in a distributed system compared against optimal and semi-optimal algorithms
From the above examples it is clear that another advantage of using utility functions in autonomic systems is that we can expect better performance. This could in fact be expected from the optimisation algorithms used in conjunction with the utility function: since the size of the solution space is constant no matter what policy is used, a reasonably efficient and fast optimisation algorithm is bound to outperform hand-designed systems or systems using algorithms that do not attempt to explore the solution space.
VI. CONCLUSION
In this state of the art we have presented the role of autonomic computing, as well as the use of utility functions within autonomic systems. Despite the fact that the papers examined in this state of the art were chosen according to the topics they discussed, rather than the details and conclusions within those papers, we have been able to isolate a few trends within autonomic systems and draw several interesting conclusions. The most important of these is that autonomic systems not only bring clear advantages in terms of functionality to a system, but also lead to better results. For systems that use utility-based policy this observation is even more relevant: the design is simpler, and the performance is better. Naturally, utility functions remain hard to design, but with experience and successful research into the automated design of those functions, the problem is far from intractable. As far as trends within utility-based autonomic systems are concerned, the only relatively consistent design trait seems to be the use of multiple autonomic agents, with or without a central managing system. This is not surprising, as many concepts of autonomic systems are drawn from agent-based systems, including the use of utility. Moreover, as these systems perform well even when relying on emergent properties for overall utility, this appears to be a valid design. To conclude, we can say that current research seems to validate the use of autonomic systems and utility functions, but work remains: mostly to research all “self-” properties equally, but also to develop a coherent set of design best practices.
REFERENCES
[1] Ronen I. Brafman and Yagil Engel. Directional decomposition of multiattribute utility functions. In Proceedings of the 1st International Conference on Algorithmic Decision Theory, ADT '09, pages 192-202, Berlin, Heidelberg, 2009. Springer-Verlag.
[2] Debzani Deb. Achieving self-managed deployment in a distributed environment via utility functions. PhD thesis, Bozeman, MT, USA, 2008. AAI3297699.
[3] Paul deGrandis and Giuseppe Valetto. Elicitation and utilization of application-level utility functions. In Proceedings of the 6th International Conference on Autonomic Computing, ICAC '09, pages 107-116, New York, NY, USA, 2009. ACM.
[4] J. M. Ewing and D. A. Menascé. Business-oriented autonomic load balancing for multitiered web sites. In 2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS '09), pages 1-10, September 2009.
[5] P. Horn. Autonomic computing: IBM's perspective on the state of information technology. In IBM Germany Scientific Symposium Series, 2001.
[6] Markus C. Huebscher and Julie A. McCann. A survey of autonomic computing - degrees, models, and applications. ACM Comput. Surv., 40:7:1-7:28, August 2008.
[7] Markus C. Huebscher, Julie A. McCann, and Asher Hoskins. Context as autonomic intelligence in a ubiquitous computing environment. Int. J. Internet Protoc. Technol., 2:30-39, December 2007.
[8] Norman W. Paton, Marcelo A. T. Aragão, Kevin Lee, Alvaro A. A. Fernandes, and Rizos Sakellariou. Optimizing utility in cloud computing through autonomic workload execution. IEEE Data Eng. Bull., 32(1):51-58, 2009.
[9] Andres J. Ramirez, David B. Knoester, Betty H. C. Cheng, and Philip K. McKinley. Applying genetic algorithms to decision making in autonomic computing systems. In Proceedings of the 6th International Conference on Autonomic Computing, ICAC '09, pages 97-106, New York, NY, USA, 2009. ACM.
[10] G. Tesauro, N. K. Jong, R. Das, and M. N. Bennani. A hybrid reinforcement learning approach to autonomic resource allocation. In Proceedings of the 2006 IEEE International Conference on Autonomic Computing, pages 65-73, Washington, DC, USA, 2006. IEEE Computer Society.
[11] Gerald Tesauro and Jeffrey O. Kephart. Utility functions in autonomic systems. In Proceedings of the First International Conference on Autonomic Computing, pages 70-77, Washington, DC, USA, 2004. IEEE Computer Society.
[12] Wikipedia. Autonomic nervous system - Wikipedia, the free encyclopedia, 2011. [Online; accessed 23 June 2011].
[13] Sunny Wong, Melissa Aaron, Jeffrey Segall, Kevin Lynch, and Spiros Mancoridis. Reverse engineering utility functions using genetic programming to detect anomalous behavior in software. In Proceedings of the 2010 17th Working Conference on Reverse Engineering, WCRE '10, pages 141-149, Washington, DC, USA, 2010. IEEE Computer Society.