Multiagent System Technologies

LNAI 8732
Jörg P. Müller
Michael Weyrich
Ana L.C. Bazzan (Eds.)
Multiagent System
Technologies
12th German Conference, MATES 2014
Stuttgart, Germany, September 23–25, 2014
Proceedings
Lecture Notes in Artificial Intelligence
Subseries of Lecture Notes in Computer Science
LNAI Series Editors
Randy Goebel
University of Alberta, Edmonton, Canada
Yuzuru Tanaka
Hokkaido University, Sapporo, Japan
Wolfgang Wahlster
DFKI and Saarland University, Saarbrücken, Germany
LNAI Founding Series Editor
Joerg Siekmann
DFKI and Saarland University, Saarbrücken, Germany
Volume Editors
Jörg P. Müller
Technische Universität Clausthal
Institut für Informatik
Julius-Albert-Str. 4, 38678 Clausthal-Zellerfeld, Germany
E-mail: [email protected]
Michael Weyrich
Universität Stuttgart
Institut für Automatisierungs- und Softwaretechnik
Pfaffenwaldring 47, 70550 Stuttgart, Germany
E-mail: [email protected]
Ana L.C. Bazzan
Universidade Federal do Rio Grande do Sul (UFRGS)
Instituto de Informática
Caixa Postal 15064, 91501-970 Porto Alegre, Rio Grande do Sul, Brazil
E-mail: [email protected]
ISSN 0302-9743
e-ISSN 1611-3349
ISBN 978-3-319-11583-2
e-ISBN 978-3-319-11584-9
DOI 10.1007/978-3-319-11584-9
Springer Cham Heidelberg New York Dordrecht London
Library of Congress Control Number: 2014948672
LNCS Sublibrary: SL 7 – Artificial Intelligence
© Springer International Publishing Switzerland 2014
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication
or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location,
in its current version, and permission for use must always be obtained from Springer. Permissions for use
may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution
under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the
material contained herein.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
This book contains the proceedings of the 12th German Conference on Multiagent System Technologies (MATES 2014), which was held during September
23–25, 2014, in Stuttgart, Germany. The MATES conference series aims at the
promotion of and the cross-fertilization between theory and application of intelligent agents and multiagent systems. It provides an interdisciplinary forum
for researchers and members of business and industry to present and discuss
the latest advances in agent-based computing with prototyped or fielded systems in
various application domains.
The MATES 2014 conference was organized in cooperation with the Distributed Artificial Intelligence chapter of the German Society for Informatics
(GI), and sponsored by the GI. Moreover, it was co-located with the 37th German Conference on Artificial Intelligence and the 44th Symposium of the German Society for Informatics (INFORMATIK 2014). It also contained a joint
session on “Electric Mobility, Agents, and Smart Grids” in cooperation with the
GI Special Interest Group on energy information systems.
The set of regular MATES 2014 conference talks covered a broad area of
topics of interest, including market mechanisms, negotiation, game theory, agent-oriented planning, learning and control, multiagent systems engineering, as well
as agent-based modeling and simulation. In keeping with its tradition, MATES
2014 also offered three excellent invited keynotes by well-known, reputed scientists in the domain, covering relevant topics of the broad area of intelligent
agent technology: Ulle Endriss from Universiteit van Amsterdam addressed the
role of computational social choice theory for multiagent systems; Peter Göhner
from Universität Stuttgart spoke about multiagent systems in applications of
industrial automation; and Wolfram Burgard gave an overview of probabilistic
techniques for mobile robot navigation (the latter keynote was shared with the
KI 2014 conference).
Furthermore, the MATES doctoral mentoring program, chaired by Sebastian
Lehnhoff, supported PhD students in their research and application of multiagent
system technologies by providing the opportunity to present and intensively
discuss their work with other students and experts of the field.
MATES 2014 received 31 submissions. Each paper was peer-reviewed by at
least three members of an international Program Committee. Nine papers were
accepted for long presentation, and seven papers were accepted for short presentation. They are included in this volume. In addition, the book contains two
invited papers from our keynote speakers. It is our hope that the balanced set of
theoretical and application-oriented contributions contained in this volume will
stimulate further research in multiagent systems and technologies.
As co-chairs and in the name of the MATES Steering Committee, we are very
thankful to the authors and invited speakers for contributing to this conference,
Sebastian Lehnhoff for chairing the Doctoral Consortium, the Program Committee members and additional reviewers for their timely and helpful reviews
of the submissions, as well as the local organization team at the University of
Stuttgart, especially Fernando Cimander, who did an excellent job maintaining
the MATES conference homepage, and Stefan Wagner, who was our main liaison
to the INFORMATIK 2014 conference. They all contributed in making MATES
2014 a success. We are also indebted to Alfred Hofmann and the whole Springer
LNAI team for their very kind and excellent assistance in publishing these proceedings and for their continuing support for the MATES conference over the
past twelve editions.
Finally, we hope you enjoyed MATES 2014 and drew some useful inspiration
and insights from attending it.
September 2014
Jörg P. Müller
Michael Weyrich
Ana L.C. Bazzan
Organization
The MATES 2014 conference was organized in co-location with the 37th German
Conference on Artificial Intelligence and the 44th Symposium of the German
Society for Informatics (INFORMATIK 2014), and in cooperation with KI 2014
and with the GI Special Interest Group on Energy Information Systems.
MATES 2014 Chairs
Jörg P. Müller, TU Clausthal, Germany
Michael Weyrich, Universität Stuttgart, Germany
Ana L.C. Bazzan, UFRGS, Brazil
Doctoral Consortium Chair
Sebastian Lehnhoff, Universität Oldenburg, Germany
Program Committee
Thomas Ågotnes
Luis Antunes
Ana Bazzan
Vicent Botti
Lars Braubach
Nils Bulling
Massimo Cossentino
Célia Da Costa Pereira
Mehdi Dastani
Paul Davidsson
Yves Demazeau
Jörg Denzinger
Frank Dignum
Jürgen Dix
Barbara Dunin-Keplicz
Torsten Eymann
Maria Ganzha
Marie-Pierre Gleizes
Vladimir Gorodetsky
Daniel Hennes
Koen Hindriks
Gal Kaminka
Wolfgang Ketter
Franziska Klügl
Matthias Klusch
Andrew Koster
Winfried Lamersdorf
Paulo Leitao
Beatriz López
Felipe Meneguzzi
Emma Norling
Ingrid Nunes
Andrea Omicini
Sascha Ossowski
Marcin Paprzycki
Mathias Petsch
Paolo Petta
Alexander Pokahr
Alessandro Ricci
Jörg Rothe
Jordi Sabater Mir
René Schumann
Onn Shehory
David Sislak
Michael Sonnenschein
Matthias Thimm
Ingo J. Timm
Adelinde Uhrmacher
Rainer Unland
Wiebe Van Der Hoek
László Zsolt Varga
Matteo Vasirani
Gerhard Weiss
Franco Zambonelli
Auxiliary Reviewers
Tobias Ahlbrecht
John Collins
Maksims Fiosins
Yixin Lu
Marin Lujak
Marcin Paprzycki
Weronika Radziszewska
Patrizia Ribino
Luca Sabatucci
Federico Schlesinger
Valeria Seidita
Doctoral Consortium PC Members/Mentors
Jörg P. Müller
Alexander Pokahr
Jan-Philipp Steghöfer
Matthias Thimm
Massimo Cossentino
Maria Ganzha
Matthias Klusch
Felipe Meneguzzi
MATES Steering Committee
Matthias Klusch, DFKI GmbH, Germany
Winfried Lamersdorf, Universität Hamburg, Germany
Jörg P. Müller, TU Clausthal, Germany
Paolo Petta, University of Vienna, Austria
Ingo Timm, Universität Trier, Germany
Rainer Unland, Universität Duisburg-Essen, Germany
Table of Contents
Mechanisms, Negotiation, and Game Theory
Social Choice Theory as a Foundation for Multiagent Systems . . . . . . . . . 1
Ulle Endriss
A Boolean Game Based Modeling of Socio-Technical Systems . . . . . . . . . . 7
Nils Bulling
Evaluating Strategies for Penny Auctions Using Multi-Agent Systems . . . 26
Fabian Lorig, Matthias Gräf, Steffen Walter, and Ingo J. Timm
Robustness Analysis of Negotiation Strategies through Multiagent
Learning in Repeated Negotiation Games . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Jianye Hao, Siqi Chen, Gerhard Weiss, Ho-fung Leung, and Karl Tuyls
Using Multi-attribute Combinatorial Auctions for Resource
Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Ferran Torrent-Fontbona, Albert Pla, and Beatriz López
A Negotiation-Based Genetic Framework for Multi-Agent Credit
Assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Kaveh Pashaei, Fattaneh Taghiyareh, and Kambiz Badie
Multiagent Planning, Learning, and Control
Agent-Based Concepts for Manufacturing Automation . . . . . . . . . . . . . . . . 90
Peter Göhner and Michael Weyrich
Orchestrating the Sequential Execution of Tasks by a Heterogeneous
Set of Asynchronous Mobile Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Shashi Shekhar Jha and Shivashankar B. Nair
A Conceptual Framework of a Decision Support System for Operational
Dispatching of Agricultural Bulk Goods – An Agent-Based Approach . . . 121
Jens Mehmann and Frank Teuteberg
Planning with Numeric Key Performance Indicators over Dynamic
Organizations of Intelligent Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Florian Pantke, Stefan Edelkamp, and Otthein Herzog
Employing Automatic Temporal Abstractions to Accelerate Utile Suffix
Memory Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Erkin Çilden and Faruk Polat
The Effects of Variation on Solving a Combinatorial Optimization
Problem in Collaborative Multi-Agent Systems . . . . . . . . . . . . . . . . . . . . . . 170
Christian Hinrichs and Michael Sonnenschein
Multiagent Systems Engineering, Modeling and Simulation
Complexity Measurement of Multi-Agent Systems . . . . . . . . . . . . . . . . . . . . 188
Toufik Marir, Farid Mokhati, Hassina Bouchelaghem-Seridi, and
Zouheyr Tamrabet
Extensible Java EE-based Agent Framework in Clustered
Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Dejan Mitrović, Mirjana Ivanović, Milan Vidaković, and Zoran Budimac
Programming BDI Agents with Pure Java . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Alexander Pokahr, Lars Braubach, Christopher Haubeck, and Jan Ladiges
AGADE: How Individual Guidance Leads to Group Behaviour
and How This Can Be Simulated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Thomas Farrenkopf, Michael Guckert, Benjamin Hoffmann, and
Neil Urquhart
A Tree-Based Context Model to Optimize Multiagent Simulation . . . . . . 251
Flavien Balbo, Mahdi Zargayouna, and Fabien Badeig
Agent-Based Modeling and Simulation of the Emotional and Behavioral
Dynamics of Human Civilians during Emergency Situations . . . . . . . . . . . 266
Mouna Belhaj, Fahem Kebair, and Lamjed Ben Said
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Social Choice Theory as a Foundation
for Multiagent Systems
Ulle Endriss
Institute for Logic, Language and Computation (ILLC)
University of Amsterdam, The Netherlands
Abstract. Social choice theory is the study of mechanisms for collective decision making. While originally concerned with modelling and
analysing political decision making in groups of people, its basic principles, arguably, are equally relevant to modelling and analysing the kinds
of interaction taking place in a multiagent system. In support of this position, I review examples from three strands of research in social choice
theory: fair division, voting, and judgment aggregation.
1 Introduction
Multiagent systems are systems composed of several autonomous agents, e.g.,
computer programs or robots, that interact with each other in a variety of ways,
including coordination, cooperation, and competition. There are clear parallels
to the ways in which individual people interact with each other as members of
our society. It therefore is not surprising that formalisms and methods developed
in the social and economic sciences, originally intended for the purposes of modelling and analysing humans and human society, have found rich applications
also in the field of autonomous agents and multiagent systems.
A relatively recent example for this phenomenon is social choice theory, the
study of mechanisms for collective decision making [1]. In this short paper I want
to review some of the basic ideas developed in social choice theory and argue for
their relevance to multiagent systems.
Social choice theory is usually considered part of economic theory, although
besides economists also political scientists, philosophers, and mathematicians
have contributed significantly to its development as a discipline. More recently,
social choice theory, and specifically computational social choice [4,2], has also
become a hot topic in computer science and artificial intelligence, with many
new conceptual and technical contributions coming from this community.
To sketch the scope of social choice theory, particularly in view of possible
applications in multiagent systems, it is useful to consider it side by side with
two neighbouring disciplines, namely decision theory and game theory. Decision
theory provides us with tools to model and analyse the decision-making capabilities of a single intelligent agent. For an agent to make a decision it needs to
understand how its actions will impact on the state of the world around it and it
needs to realise its own preferences over these possible alternative states of the
J.P. Müller, M. Weyrich, and A.L.C. Bazzan (Eds.): MATES 2014, LNAI 8732, pp. 1–6, 2014.
© Springer International Publishing Switzerland 2014
world, so as to be able to choose the best action. That is, decision theory is the
right framework when we want to model a single agent vis-à-vis nature. Once
we zoom in on nature and want to specifically model the fact that part of nature
consists of other agents taking their own decisions, we enter the realms of game
theory. This is the right framework to work in when, still taking the perspective
of an individual agent, we want to model that agent's strategic behaviour vis-à-vis
other agents. That is, while decision theory considers one agent at a time,
game theory is concerned with such an agent’s interactions with other agents.
Social choice theory, finally, goes one step further in that direction and is concerned with such a group of interacting agents as a whole. In social choice theory
we ask questions such as “what is good for this group?”, while we ask “what is
good for this agent?” in decision theory and “how can this agent do well, given
that others try too?” in game theory.
In the remainder of this paper I will briefly review three technical frameworks
for social choice theory that model different aspects of collective decision making:
deciding on a fair allocation of resources to agents, making a choice affecting all
agents in view of their individual preferences, and aggregating the judgments of
different agents regarding a number of logically related statements to come to a
consistent view that appropriately reflects the position taken by the group as a
whole. In other words, I will review the frameworks of fair division (Section 2),
voting (Section 3), and judgment aggregation (Section 4).
2 Fair Allocation of Resources
Resource allocation plays a central role in multiagent systems research. For one,
the problems people try to address by designing a multiagent system often just
are problems of resource allocation (e.g., applications in electronic commerce).
But also for other types of applications we often have to solve a resource allocation problem (e.g., allocating computing resources to different agents) along
the way before our agents are in a position to solve the problem they have been
designed to address.
Many resource allocation problems can be modelled as follows. We have a
set of agents N = {1, ..., n} and a set of (indivisible) goods G = {g_1, ..., g_m}.
Each agent i ∈ N is equipped with a utility function u_i : 2^G → R, mapping
sets of goods to real numbers indicating the value the agent assigns to the set
in question. An allocation is a function A : N → 2^G, mapping each agent to a
set of goods, that respects A(i) ∩ A(j) = ∅ for all i ≠ j ∈ N, i.e., no item may
be assigned to more than one agent.
The question then arises: what is a good allocation? The answer most frequently
given in the context of multiagent systems is that it should be an allocation that
maximises the sum of individual utilities. That is, it should be an allocation A
that maximises Σ_{i∈N} u_i(A(i)), the utilitarian social welfare of A.
This certainly will be the right objective function to optimise in certain contexts. For example, if the utility experienced by each agent represents monetary
revenue generated by that agent and as system designers we collect this revenue
from all our agents, then what we are interested in is indeed the sum of individual utilities. But if the agents represent individual clients that make use of a
multiagent platform we provide, then we may have rather different objectives.
We may, for instance, wish to guarantee that individual agents are treated in a
fair manner, so as to improve the user experience of our clients. Social choice
theory offers a number of useful definitions for how to make the vague notion of
fairness precise in the context of resource allocation [14]:
– We could look for an allocation A that maximises the egalitarian social
welfare min{u_i(A(i)) | i ∈ N}, i.e., the utility of the agent that is worst off.
– Alternatively, we could maximise the Nash social welfare Π_{i∈N} u_i(A(i)),
which may be considered a compromise between the utilitarian and the
egalitarian point of view, as it sanctions both increases in total efficiency
and reallocations from rich to poor agents.
– Or we could look for an allocation A that is envy-free, i.e., where u_i(A(i)) ≥
u_i(A(j)) for all i, j ∈ N, meaning that every agent i values the set A(i) of
goods assigned to it no less than the set A(j) assigned to any other agent j.
Elsewhere, my coauthors and I have argued for a systematic exploitation of
the rich variety of fairness criteria developed in the classical literature on social
choice theory as design objectives for multiagent systems [3].
An important class of fairness criteria that is not yet widely used in multiagent
systems is that of inequality indices, which quantify the degree of economic inequality in
a group of agents (see [6] for an introduction aimed at computer scientists).
3 Voting
Voting is a framework for choosing a best alternative from a given set of available
alternatives, given the preferences of a group of voters over these alternatives.
The classical example is that of political elections, where the alternatives are
the candidates standing for election and voters express their preferences on the
ballot sheet. But voting rules can also be used in many other contexts, including
multiagent systems. For example, the alternatives may be different plans available to a group of agents to execute together, and each agent may have their own
preferences over alternative plans, determined by the information and reasoning
capabilities available to them.
Social choice theory provides a simple mathematical framework for modelling
the process of voting. Its ingredients are a set of agents N = {1, ..., n} (the
voters), a set of alternatives X = {x_1, ..., x_ℓ}, and one preference order ≻_i for
each agent i ∈ N. Every such preference order ≻_i is taken to be a strict linear
order on X (i.e., a binary relation that is irreflexive, transitive, and complete).
We write L(X) for the set of all such linear orders. A voting rule is a function
F : L(X)^n → 2^X \ {∅}, mapping any given profile of preference orders (one for
each agent) to a nonempty set of winning alternatives (due to the possibility of
ties we cannot be sure to always obtain a single winner).
Voting theory provides many different such rules [16]. To exemplify the range,
let us define two of them here:
4
U. Endriss
– Under the Borda rule, every alternative obtains ℓ − k points whenever a
voter ranks that alternative in position k in her preference ordering. Thus,
if there are 10 alternatives, then your most preferred alternative receives 9
points from you, your second most preferred alternative receives 8 points,
and so forth. The alternative with the most points wins.
– Under the Copeland rule, we elect the alternative that wins the largest number of pairwise majority contests against other alternatives (with half a point
awarded for a draw).
Observe that the so-called plurality rule, under which every voter can nominate
one alternative and the alternative with the most nominations wins the election,
also fits into this framework. The reason is that we can think of it as the rule
under which the alternative ranked on top most often wins. The plurality rule is
the rule used in most political elections, but it does in fact have many undesirable
properties. For example, in the scenario below, z wins under the plurality rule,
yet every alternative other than z is preferred to z by a strict majority:
2 agents: x ≻ y ≻ z
2 agents: y ≻ x ≻ z
3 agents: z ≻ x ≻ y
Observe that both the Borda rule and the Copeland rule will instead elect the
intuitively “right” winner, namely x.
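These claims are easy to verify mechanically. The following sketch (illustrative code, not from the paper) computes plurality and Borda scores for the seven-voter profile above.

```python
from collections import Counter

# The profile from the text: each ballot is a ranking, best first.
profile = 2 * [("x", "y", "z")] + 2 * [("y", "x", "z")] + 3 * [("z", "x", "y")]

# Plurality: count only the top-ranked alternative of each ballot.
plurality_scores = Counter(ballot[0] for ballot in profile)

# Borda: with ℓ alternatives, position k (0-based here) earns ℓ - 1 - k points.
ell = 3
borda_scores = Counter()
for ballot in profile:
    for k, alt in enumerate(ballot):
        borda_scores[alt] += ell - 1 - k

print(plurality_scores.most_common(1)[0][0])  # z wins under plurality
print(borda_scores.most_common(1)[0][0])      # x wins under Borda
```

Plurality elects z (three nominations against two each for x and y), while Borda elects x with 9 points, matching the discussion above.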
When designing a decision making mechanism for a multiagent system, it is
important to refer back to the classical literature on voting theory, which has
examined many of the questions of relevance here in depth. Having said this, multiagent systems will often give rise to requirements that are somewhat different
from those arising in the context of elections amongst humans. For example, due
to the bounded rationality of autonomous software agents (and due to the fact
that we can model such bounded rationality much more precisely in such cases
than for a human agent), we may wish to drop or alter some of the classical
assumptions regarding preferences. Specifically, the assumption of completeness,
i.e., an agent’s ability to rank any two alternatives, will not always be appropriate. This insight has led to work in the artificial intelligence literature on voting
with nonstandard preferences [15,8]. In a similar vein, the alternatives that we
need to choose between in the context of multiagent systems may not always be
just simple "atomic" options, but they may come with some rich internal structure. This has led to the research direction of voting in combinatorial domains
in computational social choice [11,5].
Observe that, in principle, we could use voting rules also in the context of
resource allocation. If we think of the set of all possible allocations of resources
to agents as the set of alternatives, then each agent’s utility function defines its
preference relation over these alternatives.
4 Judgment Aggregation
Preferences are not the only types of structures we may wish to aggregate in a
multiagent system. Other examples include in particular individual judgments
regarding the truth or falsehood of certain statements. Such questions have
been investigated in the literature on belief merging [10] and judgment aggregation [13], as well as the closely related binary aggregation [9].
Let us briefly review the basic framework of judgment aggregation here. An
agenda is a finite set of formulas of propositional logic that is of the form Φ =
Φ+ ∪{¬ϕ | ϕ ∈ Φ+ }, with Φ+ only including non-negated formulas. Now consider
a set of agents N = {1, . . . , n} such that each agent i ∈ N picks a judgment set
Ji ⊆ Φ that is logically consistent and that includes either ϕ or ¬ϕ for every
ϕ ∈ Φ+. Let J(Φ) be the set of all such consistent and complete judgment sets
for the agenda Φ. An aggregator is a function F : J(Φ)^n → 2^Φ, mapping profiles
of such judgment sets (one for each agent) to subsets of the agenda (i.e., to
judgment sets that may or may not also be consistent and complete).
An example is the majority rule, under which we accept all those formulas
that are accepted by a majority of the agents. Unfortunately, this simple rule
is often not satisfactory, as it can lead to paradoxical outcomes. For example,
suppose that Φ+ = {p, q, p ∧ q}. The table below shows a profile with three agents
(we assume that an agent accepts ¬ϕ if and only if it does not accept ϕ):
p q p∧q
Agent 1: Yes Yes Yes
Agent 2: Yes No No
Agent 3: No Yes No
Majority: Yes Yes No
Thus, even though each individual agent declares a consistent judgment set
(e.g., agent 2 claims p is true and q is false, i.e., p ∧ q is also false), the majority
judgment set we obtain is inconsistent: you cannot accept both p and q but also
reject p ∧ q. There are several proposals for alternative methods of aggregation
that avoid this kind of problem, albeit often at the price of significantly increased
complexity [12]. This includes, for instance, distance-based methods where we
choose as outcome a consistent judgment set that minimises, in some sense,
the distance to the input profile. Another approach amounts to choosing from
amongst the agents one that may be considered a good representative of the
group, to then implement that agent’s advice [7].
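The paradox above can be reproduced mechanically. The following sketch (illustrative code, not from the paper) applies the majority rule to the three-agent profile and checks consistency for this particular agenda.

```python
# The discursive-dilemma profile from the text, over the agenda {p, q, p∧q}.
# Each judgment set records which non-negated formulas the agent accepts.
profile = [
    {"p": True,  "q": True,  "p&q": True},   # agent 1
    {"p": True,  "q": False, "p&q": False},  # agent 2
    {"p": False, "q": True,  "p&q": False},  # agent 3
]

def majority(profile):
    """Accept a formula iff a strict majority of agents accepts it."""
    n = len(profile)
    return {phi: sum(J[phi] for J in profile) > n / 2
            for phi in profile[0]}

def consistent(J):
    """Consistency check for this particular agenda: p&q must equal p and q."""
    return J["p&q"] == (J["p"] and J["q"])

out = majority(profile)
print(out)                                  # {'p': True, 'q': True, 'p&q': False}
print(all(consistent(J) for J in profile))  # every individual is consistent
print(consistent(out))                      # ...but the majority outcome is not
```

Each individual judgment set passes the consistency check, yet the majority outcome accepts p and q while rejecting p ∧ q, exactly the inconsistency described above.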
In the same way as resource allocation problems can, in principle, be studied
as voting problems, voting can, in principle, be embedded into judgment aggregation. The basic idea is to work with propositional variables of the form pxy ,
acceptance of which would then signal that an agent prefers x to y.
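This embedding can be sketched as follows; the encoding and helper names are illustrative, not from the paper. A ranking over X = {x, y, z} becomes a set of accepted variables p_xy, and such a judgment set encodes a valid ranking iff the induced relation is a strict linear order.

```python
from itertools import permutations

alternatives = ("x", "y", "z")

def ranking_to_judgments(ranking):
    """Encode a strict ranking (best first) as the set of accepted p_xy variables."""
    return {f"p_{a}{b}" for i, a in enumerate(ranking)
            for b in ranking[i + 1:]}

def is_linear_order(accepted, alts):
    """Check that accepted p_xy variables induce a strict linear order on alts."""
    for a, b in permutations(alts, 2):
        if (f"p_{a}{b}" in accepted) == (f"p_{b}{a}" in accepted):
            return False  # exactly one of p_ab, p_ba must hold (completeness)
    for a, b, c in permutations(alts, 3):
        if (f"p_{a}{b}" in accepted and f"p_{b}{c}" in accepted
                and f"p_{a}{c}" not in accepted):
            return False  # transitivity violated
    return True

J = ranking_to_judgments(("x", "y", "z"))  # x preferred to y preferred to z
print(sorted(J))                           # ['p_xy', 'p_xz', 'p_yz']
print(is_linear_order(J, alternatives))    # True
```

In a full embedding, the linear-order requirements would be imposed as integrity constraints on the judgment aggregation problem, so that aggregating such judgment sets amounts to aggregating preference orders.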
5 Conclusion
We have reviewed three frameworks for collective decision making and discussed
their relevance to multiagent systems. Judgment aggregation and the closely
related binary aggregation are the most general amongst them: they, in principle,
allow us to aggregate any kind of information. The other two frameworks, voting
and fair division, deal with one specific type of information to be aggregated,
namely information on the preferences of the agents. Fair division, again, is more
specific than voting, as it imposes specific constraints on the types of preferences
held by agents over alternative outcomes. Despite this relative narrowness of
scope, or maybe because of it, of the ideas discussed here, those pertaining
to resource allocation are clearly the most widely used in multiagent systems
research today. But the others, too, have clear potential for multiagent systems.
References
1. Arrow, K.J., Sen, A.K., Suzumura, K. (eds.): Handbook of Social Choice and Welfare. North-Holland (2002)
2. Brandt, F., Conitzer, V., Endriss, U.: Computational social choice. In: Weiss, G.
(ed.) Multiagent Systems, pp. 213–283. MIT Press (2013)
3. Chevaleyre, Y., Endriss, U., Estivie, S., Maudet, N.: Welfare engineering in practice: On the variety of multiagent resource allocation problems. In: Gleizes, M.-P.,
Omicini, A., Zambonelli, F. (eds.) ESAW 2004. LNCS (LNAI), vol. 3451, pp. 335–
347. Springer, Heidelberg (2005)
4. Chevaleyre, Y., Endriss, U., Lang, J., Maudet, N.: A short introduction to computational social choice. In: van Leeuwen, J., Italiano, G.F., van der Hoek, W.,
Meinel, C., Sack, H., Plášil, F. (eds.) SOFSEM 2007. LNCS, vol. 4362, pp. 51–69.
Springer, Heidelberg (2007)
5. Chevaleyre, Y., Endriss, U., Lang, J., Maudet, N.: Preference handling in combinatorial domains: From AI to social choice. AI Magazine 29(4), 37–46 (2008)
6. Endriss, U.: Reduction of economic inequality in combinatorial domains. In: Proc.
12th International Conference on Autonomous Agents and Multiagent Systems,
AAMAS (2013)
7. Endriss, U., Grandi, U.: Binary aggregation by selection of the most representative
voter. In: Proc. 28th AAAI Conference on Artificial Intelligence (2014)
8. Endriss, U., Pini, M.S., Rossi, F., Venable, K.B.: Preference aggregation over restricted ballot languages: Sincerity and strategy-proofness. In: Proc. 21st International Joint Conference on Artificial Intelligence, IJCAI (2009)
9. Grandi, U., Endriss, U.: Binary aggregation with integrity constraints. In: Proc.
22nd International Joint Conference on Artificial Intelligence, IJCAI (2011)
10. Konieczny, S., Pino Pérez, R.: Merging information under constraints: A logical
framework. Journal of Logic and Computation 12(5), 773–808 (2002)
11. Lang, J.: Logical preference representation and combinatorial vote. Annals of Mathematics and Artificial Intelligence 42(1-3), 37–71 (2004)
12. Lang, J., Slavkovik, M.: How hard is it to compute majority-preserving judgment
aggregation rules? In: Proc. 21st European Conference on Artificial Intelligence,
ECAI (2014)
13. List, C., Puppe, C.: Judgment aggregation: A survey. In: Anand, P., Pattanaik,
P., Puppe, C. (eds.) Handbook of Rational and Social Choice. Oxford University
Press (2009)
14. Moulin, H.: Axioms of Cooperative Decision Making. Cambridge University Press
(1988)
15. Pini, M.S., Rossi, F., Venable, K.B., Walsh, T.: Aggregating partially ordered preferences. Journal of Logic and Computation 19(3), 475–502 (2008)
16. Taylor, A.D.: Social Choice and the Mathematics of Manipulation. Cambridge
University Press (2005)
A Boolean Game Based Modeling
of Socio-Technical Systems
Nils Bulling
Department of Informatics
Clausthal University of Technology, Germany
[email protected]
Abstract. In this paper we introduce a formal model of socio-technical
systems, in which communication and cooperation are limited, using (constrained) Boolean games. We study properties, in particular related to
stability, of the emerging behavior of a socio-technical system.
1 Introduction
Socio-technical systems theory is concerned with the interplay between technical and social systems and their joint optimization [21]. The technical part of a socio-technical system (STS) represents, e.g., the physical infrastructure, available resources, and other non-social aspects. The social system includes agents, their social relations, their goals, etc. As in multi-agent systems (MASs) [22], agents in STSs are often autonomous and partially self-interested; for example, employees work for a company, but they are not weak-willed servants without interests of their own. Hence, STSs are highly dynamic: even if the technical system is fixed, the social system is subject to frequent changes (e.g., employees leave or join a company, social norms change). An STS can often be considered a system of subsystems; we call them—in this paper—organization units. These units are somewhat independent: information exchange, cooperation, and communication between them can be limited, and they have their own organization (sub)objectives. There can be various reasons for this, for example an insufficient IT infrastructure, time and cost constraints, or competition and conflicting interests. This is a major obstacle for the design of an STS, as its overall behavior emerges from the behaviors of its organization units; e.g., the behavior of a large, globally operating company emerges from the behavior of its various sites. As a consequence, decisions and actions taken in these independent units are interrelated and need to be coordinated in order to obtain a desired global system behavior [20,8,4,18]. Designing effective, cost-efficient, stable, robust, and adaptive STSs is therefore an extremely challenging task. The research question addressed in this paper is the following: how can we formally model and analyse STSs in which communication and cooperation are limited and/or information is withheld by the agents involved (e.g., due to competing interests), in order to design good, stable STSs? Our model of an STS draws inspiration from Boolean games. Boolean games [10,11,3,14] represent a compact, computational model
J.P. Müller, M. Weyrich, and A.L.C. Bazzan (Eds.): MATES 2014, LNAI 8732, pp. 7–25, 2014.
© Springer International Publishing Switzerland 2014
of cooperation among autonomous agents. Agents control some propositional variables—they decide on their truth values—and try to satisfy their individual goals, which are given as propositional formulae. Agents usually cannot satisfy their goals on their own: they often depend on the actions of other self-interested agents. This requires cooperation and strategic reasoning. We use Boolean games to model organization units in STSs. Each organization unit has an organization objective which may not be publicly known to other organization units. Consequently, to achieve a good global behavior, parts of these objectives must be disclosed to other units—as much as is needed to obtain a good behavior, but not so much as to violate confidentiality—in order to facilitate cooperation and coordination. We introduce confidentiality constraints in order to check whether the public information reveals confidential information. We also investigate the existence of incentive schemes to stabilize an STS. In summary, the contributions of this paper are as follows: a formal model of STSs which makes use of techniques from MASs; a proposal to “distribute” Boolean games to stabilize the system behavior and to use propositional formulae as a means of coordination between these games; an application of incentive engineering [23] to the STS domain; and some characterization results for stabilizing STSs.
The paper is structured as follows. First, we recall Boolean games and propositional logic. Then, we propose constrained Boolean games, a minor extension
of Boolean games, define the concept of Nash equilibrium, and recall incentive
schemes. In Section 3 we present our formal modeling framework—the key contribution of this paper. In Section 4 we analyse the existence of stable/good STSs
and give some characterization results. In Sections 5 and 6 we discuss related work
and conclude, respectively.
2 Preliminaries and Constrained Boolean Games
In this section we review Boolean games and introduce constrained Boolean
games, a variant of which will later be used to model organization units.
2.1 Preliminaries: Propositional Logic and Boolean Games
Propositional Logic. Let Π be a set of (propositional) variables and X ⊆ Π
a non-empty subset. We use PL(X) to refer to the set of propositional formulae
using propositional variables from X. We assume the standard logical connectives
¬, ∧, ∨ and →. An X-assignment is a function ξ : X → B, where B = {t, f} is
the set of Boolean truth values, assigning a truth value to each variable in X.
ξ|Y refers to the assignment which equals ξ but whose domain is restricted to Y. We write ξ |= ϕ if the X-assignment ξ satisfies ϕ ∈ PL(Y), where ∅ ≠ Y ⊆ X. A formula ϕ over X is satisfiable (resp. valid) if there is an X-assignment which satisfies ϕ (resp. if all X-assignments satisfy ϕ). If clear from context we will omit mentioning the sets X and Y and assume that an assignment is always defined on the variables contained in a formula. The set of all X-assignments is denoted by AssX. Given two assignments ξ ∈ AssX and ξ′ ∈ AssX′ with X ∩ X′ = ∅, the (X ∪ X′)-assignment ξ ◦ ξ′ is defined by (ξ ◦ ξ′)|X = ξ and (ξ ◦ ξ′)|X′ = ξ′. Finally, for ϕ ∈ PL(Y) and an X-assignment ξ, we denote by ϕ[ξ] the formula that equals ϕ except that each variable p ∈ X occurring in ϕ is replaced by ⊤ (resp. ⊥) if ξ(p) = t (resp. ξ(p) = f).
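These assignment operations can be sketched in Python. Formulas are represented here as Boolean predicates over assignments, which is an illustrative encoding of our own, not the paper's syntax:

```python
from itertools import product

def restrict(xi, Y):
    """xi|Y: restrict an assignment (dict: variable -> bool) to the variables in Y."""
    return {p: v for p, v in xi.items() if p in Y}

def compose(xi1, xi2):
    """xi1 ∘ xi2: combine two assignments with disjoint domains."""
    assert not set(xi1) & set(xi2), "domains must be disjoint"
    return {**xi1, **xi2}

def assignments(X):
    """Enumerate all X-assignments."""
    xs = sorted(X)
    for values in product([True, False], repeat=len(xs)):
        yield dict(zip(xs, values))

def satisfiable(phi, X):
    """A formula over X is satisfiable iff some X-assignment satisfies it."""
    return any(phi(xi) for xi in assignments(X))

phi = lambda xi: xi["x"] or xi["y"]              # the formula x ∨ y
print(satisfiable(phi, {"x", "y"}))              # True
print(restrict({"x": True, "y": False}, {"x"}))  # {'x': True}
```

Enumerating all assignments is exponential in |X|, which is acceptable for the small illustrative games used in this paper.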
Boolean Games. Apart from minor modifications, we follow the definition of Boolean games of [11]. A Boolean game (BG) is a tuple G =
(N, Π, c, (γi)i∈N, (Πi)i∈N) where N ⊆ ℕ is a non-empty, finite set of players,
Π is a finite, non-empty set of (propositional) variables, Πi ⊆ Π is the set of
variables controlled by i ∈ N. We require that (Πi )i∈N forms a partition of a
subset of Π (as in [13] we do not require that all variables are controlled by
some player). c : Π × B → ℝ+ is a cost function and γi ∈ PL(Π) an objective or goal of player i. For example, c(p, t) = 4 models that setting variable p true incurs costs of four. We write Ctrl(A) = ⋃i∈A Πi for the set of variables controlled by A ⊆ N, Π0 = Π\Ctrl(N) for the set of environmental variables, and Π̃ = Ctrl(N) = Π\Π0 to refer to the set of variables controlled by the players in N.
Example 1 (Boolean game)
(a) Let G1 = (N, Π, c, (γi)i∈N, (Πi)i∈N) where N = {1, 2, 3}, Π = {p1, ..., p5}, γ1 = (p1 ∧ p2) ∨ (¬p1 ∧ ¬p2), γ2 = (¬p1 ∧ p2) ∨ (¬p2 ∧ p1), γ3 = p1 ∧ p3. The variables are controlled as follows: Π1 = {p1}, Π2 = {p2, p5}, and Π3 = {p3, p4}. Note that the game has no environmental variables. We define the cost function: c(p, t) = 1 and c(p, f) = 0 for all p ∈ Π\{p4, p5}, and c(p4, t) = c(p4, f) = c(p5, t) = c(p5, f) = 1. G1 is a BG.
(b) Let G{2,3} be the BG obtained from G1 by removing player 1, that is, G{2,3} = ({2, 3}, Π, c, (γ2, γ3), (Π2, Π3)). Then, variable p1 is an environmental variable (i.e., controlled by no player). Analogously, let G{1} be the BG obtained from G1 by removing players 2 and 3. The environmental variables in G{1} are Π\{p1}.
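To make the definition concrete, the game G1 of Example 1(a) can be encoded directly; the dictionary-based representation below is an illustrative sketch of our own, not part of the paper's formalism:

```python
# Hypothetical encoding of the Boolean game G1 from Example 1(a).
N = [1, 2, 3]
Pi = ["p1", "p2", "p3", "p4", "p5"]
control = {1: {"p1"}, 2: {"p2", "p5"}, 3: {"p3", "p4"}}  # the partition (Pi_i)

goals = {  # each goal gamma_i as a predicate over assignments
    1: lambda a: (a["p1"] and a["p2"]) or (not a["p1"] and not a["p2"]),
    2: lambda a: (not a["p1"] and a["p2"]) or (not a["p2"] and a["p1"]),
    3: lambda a: a["p1"] and a["p3"],
}

def cost(p, value):
    """The cost function c of Example 1(a)."""
    if p in ("p4", "p5"):
        return 1                  # p4, p5 cost 1 regardless of truth value
    return 1 if value else 0      # other variables: true costs 1, false costs 0

def assignment_cost(a, player):
    """c(a|Pi_i): the cost player i incurs for its controlled variables."""
    return sum(cost(p, a[p]) for p in control[player])

a = {"p1": True, "p2": True, "p3": True, "p4": False, "p5": False}
print(goals[1](a), goals[2](a), goals[3](a))  # True False True
print(assignment_cost(a, 2))                  # c(p2,t) + c(p5,f) = 1 + 1 = 2
```

Under this assignment the goals of players 1 and 3 are satisfied but player 2's is not, since p1 and p2 have the same truth value.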
2.2 Constrained Boolean Games and Information
We extend Boolean games with a global constraint ϕ on action profiles of the
players. Such a constraint is a propositional formula which restricts the choices of
players. Only those action profiles of the players which satisfy ϕ are “permitted”. We
note that the enforcement of a constraint often requires communication or other
means to coordinate players’ actions, as players need to take into consideration
the actions of other players to obtain an assignment satisfying ϕ. This is the same
as with Nash equilibria: the solution concept says nothing about how players
should coordinate their actions in order to agree on an equilibrium.
Definition 1 (Constrained Boolean game, consistent assignment). A
constrained Boolean game is given by G = (N, Π, c, (γi)i∈N, (Πi)i∈N, ϕ) where G′ = (N, Π, c, (γi)i∈N, (Πi)i∈N) is a BG and ϕ ∈ PL(Π) a (global) constraint. We also write G = (G′, ϕ). An assignment ξ is said to be ϕ-consistent iff ξ |= ϕ.
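ϕ-consistency amounts to evaluating ϕ under an assignment. A minimal sketch (again with ϕ as a Boolean predicate, an encoding assumed here for illustration) enumerates the permitted action profiles:

```python
from itertools import product

def assignments(X):
    """All X-assignments as dicts variable -> bool."""
    xs = sorted(X)
    return [dict(zip(xs, values)) for values in product([True, False], repeat=len(xs))]

def permitted_profiles(X, phi):
    """The phi-consistent assignments over X, i.e. those satisfying phi."""
    return [xi for xi in assignments(X) if phi(xi)]

# Example constraint: x -> y
phi = lambda xi: (not xi["x"]) or xi["y"]
print(len(permitted_profiles({"x", "y"}, phi)))  # 3 (only x=t, y=f is ruled out)
```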
For obvious reasons, we will identify a Boolean game G′ with the constrained Boolean game (G′, ⊤) and vice versa (⊤ imposes no constraints on assignments).
A global constraint can impose restrictions on the truth values of variables of the players as well as of the environment. In this paper, we are mainly interested
in constrained Boolean games where ϕ is built over environmental variables
only. Such a constraint can be interpreted as information given to the players
about the truth values of the environmental variables. One can view this as the
players’ beliefs (based on the given information carried by the constraint) about
the behavior of the environment. This is similar to [13] where each player has
a belief about the environmental variables. In the STS setting the motivation
is that ϕ provides information about other organization units of the STS. We
assume that the information carried by ϕ is known to all players, e.g. by being
publicly announced to the players by the system principal. Finally, we note that
information can be vague; for this reason, we use propositional formulae, as also
discussed in [13], rather than (partial) truth assignments. For example, players
may be informed that x ∨ y holds, but have no specific information about x or about y. For future work, it would also be interesting to consider the case in which players have individual information about other variables, as in [13].
Definition 2 (Boolean game with information). A constrained Boolean
game (N, Π, c, (γi )i∈N , (Πi )i∈N , ϕ) is a BG with information if ϕ ∈ PL(Π0 ).
Example 2 (Boolean game with information). The Boolean game G{2,3} from
Example 1 in combination with ϕ = p1 is a Boolean game with information. It
models that players 2 and 3 have the information that p1 is true.
2.3 Payoff and Nash Equilibrium
In the following we assume that G = (N, Π, c, (γi)i∈N, (Πi)i∈N, ϕ) is a constrained Boolean game. Let us define maxG as Σp∈Π [c(p, t) + c(p, f)] + 1; this number is greater than the maximum cost of any course of action in G. We lift the cost function c of a constrained Boolean game to assignments: we define c(ξ) = Σp∈X c(p, ξ(p)) for an X-assignment ξ. Then, the utility of a Π-assignment ξ for player i is defined as follows (cf. [11]), where γi is the goal of player i:

  μ̃_i^G(ξ) = maxG − c(ξ|Πi)  if ξ |= γi,  and  μ̃_i^G(ξ) = −c(ξ|Πi)  otherwise.

The utility function μ̃_i^G computes the utility independently of whether ξ is ϕ-consistent. It is an auxiliary function and models, due to the term maxG, that a player always prefers an assignment which satisfies its goal over one that does not.
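The auxiliary utility can be sketched as follows, assuming goals are encoded as Boolean predicates and costs as a Python function (illustrative assumptions of ours, not the paper's notation):

```python
def max_G(Pi, cost):
    """maxG = sum over all p in Pi of [c(p, t) + c(p, f)], plus 1."""
    return sum(cost(p, True) + cost(p, False) for p in Pi) + 1

def utility_aux(i, xi, goal_i, controlled_i, Pi, cost):
    """The auxiliary utility: maxG minus own cost if the goal holds, else minus
    own cost. It ignores the constraint phi, as in the text."""
    own_cost = sum(cost(p, xi[p]) for p in controlled_i)
    if goal_i(xi):
        return max_G(Pi, cost) - own_cost
    return -own_cost

# Toy game: one variable p controlled by player 1, goal p, c(p,t)=2, c(p,f)=0.
cost = lambda p, v: 2 if v else 0
print(utility_aux(1, {"p": True}, lambda a: a["p"], {"p"}, ["p"], cost))   # 3 - 2 = 1
print(utility_aux(1, {"p": False}, lambda a: a["p"], {"p"}, ["p"], cost))  # -0 = 0
```

The toy game illustrates the role of maxG: satisfying the goal yields utility 1, failing it yields 0, so the player prefers the goal-satisfying assignment despite its higher cost.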
Ultimately, we are interested in the worst case utility, which is defined next. For a Π̃-assignment ξ—note that ξ is not defined on environmental variables—the worst case utility of player i is defined by

  μ_i^G(ξ) = min{ μ̃_i^G(ξ′) | ξ′ ∈ AssΠ, ξ′|Π̃ = ξ, ξ′ |= ϕ }  if ϕ[ξ] is satisfiable,
  μ_i^G(ξ) = −c(ξ|Πi)  otherwise.
The worst case utility models the worst case assignment of the environmental
variables for player i, but it is assumed that the environmental variables respect