
Mechanism Design for
Argumentation-based Persuasion
Xiuyi FAN, Francesca TONI
Department of Computing, Imperial College London,
South Kensington Campus, London SW7 2AZ, UK
Abstract. Recently we have seen several developments in argumentation-based dialogue systems, but there is less research on understanding agents' strategic behaviour in dialogues. We study agent strategies by linking a specific form of argumentation-based dialogues and mechanism design. Specifically, focusing on persuasion dialogues, we show how dialogues can be mapped to concepts in mechanism design. We prove that a "truthful" and "thorough" dialogue strategy is a dominant strategy under specific conditions. We also prove that a mechanism using this dialogue strategy implements a "persuasion social choice function" we define. These results show the validity of the proposed strategies for agents in persuasion and the feasibility of studying persuasion with mechanism design techniques.
Keywords. Argumentation Dialogues, Mechanism Design
1. Introduction
In recent years, we have seen much research in the area of argumentation-based dialogues [MP09,Pra06]. Most of this research has focused on developing (argumentation-based) dialogue frameworks suitable for particular dialogue types, e.g. inquiry [BH09] and persuasion [Pra05]. Some research has been devoted to defining generic dialogue frameworks (e.g. [FT11]), focusing on the definition of the dialogue frameworks and their correctness in the context of some concrete argumentation framework (e.g. Assumption-Based Argumentation (ABA) [DKT09] for [FT11]). Some effort has also been spent on understanding the strategic behaviour of agents participating in these dialogues (e.g. [KMM04]).
In this paper, we approach argumentation-based dialogues from a mechanism design [Jac03] perspective, to develop strategies for agents and study properties thereof. We focus on persuasion dialogues, and build upon the generic (two-agent) ABA-based dialogues of [FT11]. In these dialogues, the information exchanged is in the form of rules, assumptions and contraries of assumptions, constituting the building blocks of arguments and attacks between them in ABA.
According to Walton and Krabbe [WK95]: "Persuasion dialogue (critical discussion) always arises from a conflict of opinions, and its goal is to resolve the conflict, to arrive at a final outcome of stable agreement in the end." We model persuasion as a dialogue starting with the persuader agent posing a topic, and the persuader and the persuadee agent then putting forward information for and against the topic (resp.). The persuasion is successful if the topic is "proved" through this dialogue. One difficult aspect of persuasion is preventing the persuader from putting forward misleading information that does not hold in its knowledge base. We specify conditions under which the persuader will not utter such information.
In mechanism design terms, we consider dialogues as strategies and information
disclosed in dialogues as actions. We define a set of strategy-move functions, extending
those of [FT12], to characterise information uttered by agents in dialogues.
The paper is organised as follows. Sections 2 and 3 give background and preliminary definitions, respectively. Section 4 gives our strategy-move functions. Section 5 links persuasion dialogues with mechanism design and presents our main results. Section 6 discusses the most relevant related work. Section 7 concludes.
2. Background
Assumption-Based Argumentation (ABA) frameworks [DKT09] are tuples ⟨L, R, A, C⟩ where¹
• ⟨L, R⟩ is a deductive system, with L the language and R a set of rules of the form β0 ← β1, ..., βm (m ≥ 0) with βi ∈ L, and, if m > 1, then βi ≠ βj for i ≠ j, 1 ≤ i, j ≤ m;
• A ⊆ L is a (non-empty) set, referred to as assumptions;
• C is a total mapping from A into 2^L − {{}}, where each β ∈ C(α) is a contrary of α, for α ∈ A.
Given a rule ρ of the form β0 ← β1, ..., βm, β0 is referred to as the head (denoted Head(ρ) = β0) and β1, ..., βm as the body (denoted Body(ρ) = {β1, ..., βm}). An ABA framework is flat iff no assumption is the head of a rule.
In ABA, arguments are deductions of claims using rules and supported by sets of
assumptions, and attacks are directed at the assumptions in the support of arguments.
Informally, following [DKT09]:
• an argument for (the claim) β ∈ L supported by A ⊆ A (A ⊢ β in short) is a (finite) tree with nodes labelled by sentences in L or by τ², the root labelled by β, leaves either τ or assumptions in A, and non-leaves β′ with, as children, the elements of the body of some rule with head β′;
• an argument A1 ⊢ β1 attacks an argument A2 ⊢ β2 iff β1 is a contrary of one of the assumptions in A2.
Attacks between (sets of) arguments correspond in ABA to attacks between sets of assumptions, where a set of assumptions A attacks a set of assumptions A′ iff an argument supported by a subset of A attacks an argument supported by a subset of A′.
With arguments and attacks defined for a given F = ⟨L, R, A, C⟩, standard argumentation semantics can be applied in ABA [DKT09], e.g.:
• a set of assumptions is admissible (in F) iff it does not attack itself and it attacks all A ⊆ A that attack it; grounded (in F) iff it is the least set (wrt. ⊆) that is admissible (in F) and contains all assumptions it defends, where A ⊆ A defends α ∈ A iff A attacks all sets of assumptions that attack α; ideal (in F) iff it is the largest admissible set contained in all maximally admissible sets (in F);
¹ In standard ABA [DKT09] the contrary maps to a single sentence. Our generalisation is equivalent to this. Also, standard ABA does not require βi ≠ βj, but this can be imposed with no loss of generality.
² τ ∉ L represents "true" and stands for the empty body of rules.
• an argument A ⊢ β is admissible (grounded, ideal) (in F) supported by A′ ⊆ A iff A ⊆ A′ and A′ is admissible (grounded, ideal, resp.) (in F); a sentence is S-acceptable for S ∈ {admissible, grounded, ideal} (in F) iff it is the claim of an argument that is S supported (in F) by some A ⊆ A.
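To make these notions concrete, here is a brute-force Python sketch (our own illustration, not part of the formal development): it encodes a small flat framework with hypothetical sentence names, computes the claims derivable from a set of assumptions by forward chaining, and checks admissibility by enumerating all potential attackers.

```python
from itertools import chain, combinations

# A toy flat ABA framework (hypothetical names): rules map heads to alternative bodies.
RULES = {"p": [["a"]], "c_a": [["b"]], "c_b": [[]]}   # c_b <- is a fact
ASSUMPTIONS = {"a", "b"}
CONTRARIES = {"a": {"c_a"}, "b": {"c_b"}}             # C(a) = {c_a}, C(b) = {c_b}

def derivable(asms):
    """Claims of arguments supported by subsets of `asms` (forward chaining)."""
    known = set(asms)  # each assumption is an argument for itself
    changed = True
    while changed:
        changed = False
        for head, bodies in RULES.items():
            if head not in known and any(all(b in known for b in body) for body in bodies):
                known.add(head)
                changed = True
    return known

def attacks(attacker, target):
    """A set of assumptions attacks another iff it derives a contrary of one of its members."""
    derived = derivable(attacker)
    return any(c in derived for b in target for c in CONTRARIES.get(b, set()))

def admissible(asms):
    """Admissible: does not attack itself and attacks every assumption set attacking it."""
    if attacks(asms, asms):
        return False
    powerset = chain.from_iterable(combinations(ASSUMPTIONS, r)
                                   for r in range(len(ASSUMPTIONS) + 1))
    return all(attacks(asms, set(s)) for s in powerset if attacks(set(s), asms))

print(admissible({"a"}), "p" in derivable({"a"}))  # True True: p is admissible-acceptable
```

Here {a} is attacked only by sets containing b, and it counter-attacks them since b's contrary c_b is a fact; hence p is admissible-acceptable in this toy framework.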
ABA-dialogues [FT11] are conducted between two agents a1 and a2 that can be thought of as being equipped with ABA frameworks ⟨L, R1, A1, C1⟩ and ⟨L, R2, A2, C2⟩ resp., sharing a common language L. An ABA-dialogue is made of utterances of the form ⟨ai, aj, T, C, ID⟩ (for i, j = 1, 2, i ≠ j) where: C (the content) is one of: claim(β) for some β ∈ L (a claim), rl(β0 ← β1, ..., βm) for some β0, ..., βm ∈ L (a rule), asm(α) for some α ∈ L (an assumption), ctr(α, β) for some α, β ∈ L (a contrary), or a pass sentence π ∉ L; ID ∈ ℕ (the identifier); T ∈ ℕ ∪ {0} (the target) such that T < ID.
Utterances with content other than π or claim(_) are termed regular utterances.³ An utterance ⟨ai, aj, T, C, ID⟩ is from ai to aj. We also use PASS to denote the set of all utterances of the form ⟨_, _, _, π, _⟩.
Given this notion of utterances, a dialogue D^{a_i}_{a_j}(χ) (between agents ai and aj for χ ∈ L) is a finite sequence δ = ⟨u1, ..., un⟩, n ≥ 0, where each ul, l = 1, ..., n, is an utterance from ai or aj, u1 is an utterance from ai, and (1) the content of ul is claim(χ) iff l = 1; (2) the target of pass and claim utterances in δ is 0; (3) for every ui = ⟨_, _, T, _, _⟩ with i > 1 and T ≠ 0, there is a non-pass utterance uk = ⟨_, _, _, C, T⟩ for k < i; (4) no two consecutive utterances in δ are pass utterances, other than possibly the last two utterances, un−1 and un. Intuitively, the identifier of an utterance represents the position of the utterance in a dialogue, and its target is the identifier of some earlier utterance in the dialogue. D^{a_i}_{a_j}(χ) is referred to as complete if its last two utterances are pass-utterances. Unless otherwise specified, dialogues in later discussions are all complete. Given a dialogue δ = ⟨u1, ..., un⟩ and an utterance u, δ ◦ u = ⟨u1, ..., un, u⟩.
The framework drawn from dialogue δ = ⟨u1, ..., un⟩ is Fδ = ⟨L, Rδ, Aδ, Cδ⟩ where
• Rδ = {ρ | rl(ρ) is the content of some ui in δ};
• Aδ = {α | asm(α) is the content of some ui in δ};
• Cδ(α) = {β | ctr(α, β) is the content of some ui in δ}.
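The following sketch (an illustrative encoding of ours, with hypothetical field and tag names) shows one way to represent utterances and to extract the framework drawn from a dialogue:

```python
from typing import NamedTuple

class Utt(NamedTuple):
    frm: str        # sender a_i
    to: str         # receiver a_j
    target: int     # T (0 for claims and passes)
    content: tuple  # ("claim", b), ("rl", head, body), ("asm", a), ("ctr", a, b) or ("pass",)
    id: int         # identifier: the utterance's position in the dialogue

def drawn_framework(delta):
    """F_delta = <L, R_delta, A_delta, C_delta> drawn from dialogue delta."""
    rules, asms, ctrs = set(), set(), {}
    for u in delta:
        kind, *args = u.content
        if kind == "rl":
            rules.add((args[0], tuple(args[1])))           # rule as (head, body)
        elif kind == "asm":
            asms.add(args[0])
        elif kind == "ctr":
            ctrs.setdefault(args[0], set()).add(args[1])   # args[1] is a contrary of args[0]
    return rules, asms, ctrs

# A fragment of a dialogue (a2's intervening pass utterances omitted for brevity):
delta = [Utt("a1", "a2", 0, ("claim", "chi"), 1),
         Utt("a1", "a2", 1, ("rl", "chi", ["alpha"]), 3),
         Utt("a1", "a2", 3, ("asm", "alpha"), 5)]
print(drawn_framework(delta))  # ({('chi', ('alpha',))}, {'alpha'}, {})
```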
Restrictions can be imposed on dialogues so that they fulfil desirable properties, in particular that P1) the framework drawn from them is a flat ABA framework (i.e. with no assumption in the head of rules and such that all assumptions have contraries), and P2) utterances are related to their target utterances, where uj = ⟨_, _, T, Cj, _⟩ is related to ui = ⟨_, _, _, Ci, ID⟩ iff T = ID and one of the following cases holds:
• Cj = rl(ρj), Head(ρj) = β and either Ci = rl(ρi) with β ∈ Body(ρi), or Ci = ctr(_, β), or Ci = claim(β);
• Cj = asm(α) and either Ci = rl(ρ) with α ∈ Body(ρ), or Ci = ctr(_, α), or Ci = claim(α);
• Cj = ctr(α, _) and Ci = asm(α).
Properties P1) and P2) above can be enforced using the notion of legal-move functions, which are mappings λ : D ↦ 2^U (where D is the set of all possible dialogues and U is the set of all possible utterances)⁴ such that, given δ = ⟨u1, ..., un⟩ ∈ D, for all u ∈ λ(δ): δ ◦ u is a dialogue and, if u = ⟨_, _, T, C, _⟩, then there exists no i, 1 ≤ i ≤ n, such that ui = ⟨_, _, T, C, _⟩. We say that δ is compatible with λ. Thus, there is no repeated utterance to the same target in a dialogue compatible with a legal-move function. We assume that dialogues in later discussions satisfy both P1 and P2. We will use Λ to denote the set of all legal-move functions.
³ Throughout, _ stands for an anonymous variable as in Prolog.
⁴ In [FT11], legal-move functions have co-domain U instead of 2^U. Our definition is a useful generalisation indicating that there might be more than one utterance allowed by a legal-move function.
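As an illustration, the check below (our own sketch, building on the Utt record above; P1, P2 and the consecutive-pass condition (4) are omitted for brevity) tests whether appending an utterance keeps a sequence a dialogue and respects the non-repetition requirement on legal-move functions:

```python
from collections import namedtuple

Utt = namedtuple("Utt", "frm to target content id")

def extends_legally(delta, u):
    """True iff delta ∘ u satisfies conditions (1)-(3) of the dialogue definition
    and the no-repetition requirement on legal-move functions (a partial sketch)."""
    kind = u.content[0]
    if (kind == "claim") != (len(delta) == 0):
        return False  # (1) a claim opens the dialogue, and only then
    if kind in ("claim", "pass") and u.target != 0:
        return False  # (2) claims and passes have target 0
    if kind not in ("claim", "pass"):
        # (3) a regular utterance must target an earlier non-pass utterance
        if not any(v.id == u.target and v.content[0] != "pass" for v in delta):
            return False
    # legal-move requirement: no repeated utterance to the same target
    return not any(v.target == u.target and v.content == u.content for v in delta)

d = [Utt("a1", "a2", 0, ("claim", "chi"), 1)]
print(extends_legally(d, Utt("a1", "a2", 1, ("rl", "chi", ("alpha",)), 3)))  # True
print(extends_legally(d, Utt("a1", "a2", 0, ("claim", "chi"), 3)))           # False
```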
Mechanism Design (e.g. see [Jac03]) provides an abstraction of distributed problem
solving amongst interacting, self-interested agents. In the language of mechanism design,
agents are characterised by types, which are abstractions of their internal, private beliefs.
Given I ≥ 2 agents, the space of possible types for agent i (1 ≤ i ≤ I) is denoted by Θi
and its type is θi ∈ Θi . Moreover, Θ = Θ1 × . . . × ΘI .
Inter-agent interactions have a number of potential outcomes O. A given social
choice function/rule f : Θ → O characterises what can be deemed to be an optimal
outcome of the interaction for every vector of agent types.
Agents' self-interest is dictated by their preferences over the outcomes, given their type, expressed in terms of (private) utility functions ui : O × Θi → ℝ. The public face of agents is given by their actions, where Σi is the set of possible actions of agent i and Σ = Σ1 × ... × ΣI. The decision for agent i of which action to perform is given by a strategy. Let Si denote the space of possible strategies for agent i, S = S1 × ... × SI and S−i denote S1 × ... × Si−1 × Si+1 × ... × SI. Then a strategy si ∈ Si is a function si : Θi × Σ → Σi. A strategy s = (s1, ..., si, ..., sI) is often represented as (si, s−i) where s−i = (s1, ..., si−1, si+1, ..., sI).
Finally, a mechanism M = (Σ, g) consists of the action space Σ and an outcome
function/rule g : S → O, where g(s) is the outcome implemented by M for strategy s.
Since g(s) ∈ O, utility functions can be equivalently thought of as ui : S × Θi → ℝ (where ui(s, θi) stands for ui(g(s), θi)). Also, as strategies determine actions, the outcome function can be equivalently thought of as g : Σ → O, where g(σ) is the outcome implemented by the mechanism for action σ.
A social choice function specifies the desired goal of an interaction amongst agents,
whereas a mechanism is a means of characterising the agents' behaviour in the interaction. Several characterisations of strategies have been provided as ways to predict how (rational) agents will behave in a mechanism. In particular, a strategy si is dominant (for agent i) if it maximises the agent's utility irrespective of the other agents' strategies: ∀s−i ∈ S−i, ∀s′i ∈ Si [ui((si, s−i), θi) ≥ ui((s′i, s−i), θi)]. For a mechanism M = (Σ, g) and a social choice function f, M implements f iff g(s) = f(θ), where s is a dominant strategy.
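For finite spaces, dominance can be checked directly. The sketch below (ours; all names hypothetical) enumerates the other agents' strategy profiles and all alternative strategies:

```python
from itertools import product

def is_dominant(i, s_i, strategy_spaces, utility, theta_i):
    """True iff s_i maximises agent i's utility whatever the others play:
    for all s_-i and all s_i': u_i((s_i, s_-i), theta_i) >= u_i((s_i', s_-i), theta_i)."""
    others = [space for j, space in enumerate(strategy_spaces) if j != i]
    for s_minus_i in product(*others):
        for s_alt in strategy_spaces[i]:
            chosen = s_minus_i[:i] + (s_i,) + s_minus_i[i:]
            rival = s_minus_i[:i] + (s_alt,) + s_minus_i[i:]
            if utility(i, chosen, theta_i) < utility(i, rival, theta_i):
                return False
    return True

# Toy example: each agent can be Truthful or Lying, and lying never pays here.
spaces = [("T", "L"), ("T", "L")]
u = lambda i, profile, theta: 1.0 if profile[i] == "T" else -1.0
print(is_dominant(0, "T", spaces, u, None))  # True
```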
3. Preliminaries⁵
We use the term frameworks to describe tuples of the form ⟨L, R, A, C⟩, but where C is a mapping from A into 2^L. ABA frameworks are frameworks but not vice versa. We refer to the set of all frameworks of the form ⟨L, R, A, C⟩ as AF(L). Frameworks can be combined as follows:
Definition 3.1. [FT12] Given frameworks F = ⟨L, R, A, C⟩ and F′ = ⟨L, R′, A′, C′⟩, the joint framework (of F and F′) is FJ = F ⊎ F′ = ⟨L, R ∪ R′, A ∪ A′, CJ⟩, where CJ(α) = C(α) ∪ C′(α), for all α in A ∪ A′.⁶ Given frameworks FJ and F, F is a sub-framework of FJ, written F ⊑ FJ, iff there exists F′ such that F ⊎ F′ = FJ.
⁵ Some of the contents in this and the next section also appear in [FT12], as referenced.
⁶ We assume that C(α) = {} if α ∉ A. Similarly for C′.
Given a framework F = ⟨L, R, A, C⟩, and C either a rule, an assumption, or a contrary mapping for assumptions, we say that C is in F iff C ∈ R (if C is a rule), or C ∈ A (if C is an assumption), or C(α) ∈ C(α) (if C is a contrary), for some α ∈ A.
Upon computing the S-acceptability of a sentence s (for S ∈ {admissible, grounded, ideal}) in a framework F = ⟨L, R, A, C⟩, if there is α ∈ A such that C(α) = {} (hence F is not an ABA framework), then, for all such α, we let C(α) = {new}, where new ∉ L, and then replace L with L ∪ {new}.
Agents have private beliefs in some internal representation. However, when they
interact within dialogues they exchange information in a shared language. Following
[FT11,FT12], we assume that this language is that of ABA, namely agents exchange
rules, assumptions and their contraries, expressed in some shared logical language L.
Thus, agents can be thought of as being equipped with ABA frameworks. We will often use the ABA framework an agent is equipped with to denote the agent itself. We will focus on the case of two agents, a1 = ⟨L, R1, A1, C1⟩ and a2 = ⟨L, R2, A2, C2⟩. We will assume that a1, a2 and a1 ⊎ a2 are flat, in line with [DKT09]. Intuitively, a1 ⊎ a2 amounts to the beliefs that the two agents hold collectively.
When studying dialogues, we will restrict attention to the agents' beliefs that are directly related to the topic of the dialogue, as follows.
Definition 3.2. [FT12] Y is directly related to X wrt. a framework ⟨L, R, A, C⟩ iff:
• X is an assumption α ∈ A and Y is C(α) = _; or
• X is a sentence β ∈ L \ A and Y is a rule β ← _ ∈ R; or
• X is a rule β0 ← β1, ..., βn ∈ R with n ≥ 1 and Y is either a rule βi ← _ ∈ R, if βi ∉ A, or an assumption βi ∈ A; or
• X is C(_) = B and Y is either a rule β ← _ ∈ R, for β ∈ B, or an α ∈ B ∩ A.
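Definition 3.2 can be read as a predicate over tagged items. The following sketch is our own illustration (the tagged-tuple encoding is hypothetical) and makes the four cases explicit:

```python
def directly_related(Y, X, rules, asms):
    """Definition 3.2 as a predicate (sketch). Items are tagged tuples:
    ("asm", a), ("sent", b), ("rule", head, body) with body a tuple,
    and ("ctr", a, B) for a contrary entry C(a) = B.
    `rules` is a set of (head, body) pairs and `asms` the set of assumptions;
    Y is assumed to be drawn from the same framework."""
    if X[0] == "asm":                                   # X = assumption a
        return Y[0] == "ctr" and Y[1] == X[1]           # Y = its contrary entry C(a) = _
    if X[0] == "sent" and X[1] not in asms:             # X = non-assumption sentence b
        return Y[0] == "rule" and Y[1] == X[1]          # Y = a rule b <- _
    if X[0] == "rule" and len(X[2]) >= 1:               # X = rule b0 <- b1, ..., bn
        return any((b in asms and Y == ("asm", b)) or
                   (b not in asms and Y[0] == "rule" and Y[1] == b)
                   for b in X[2])
    if X[0] == "ctr":                                   # X = contrary entry C(_) = B
        return ((Y[0] == "rule" and Y[1] in X[2]) or
                (Y[0] == "asm" and Y[1] in X[2] and Y[1] in asms))
    return False

rules = {("p", ("a",))}
print(directly_related(("asm", "a"), ("rule", "p", ("a",)), rules, {"a"}))  # True
```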
We implement the dialogue model described in [FT11] with a specialisation in the ID field of utterances in a dialogue. Given a dialogue ⟨u1, ..., un⟩, the parity of the ID of u1 (the claim) is odd; the parity of the ID in ui = ⟨_, _, t, C, id⟩ (1 ≤ i ≤ n) is the same as the parity of t, if C is rl(_) or asm(_), and the opposite of the parity of t, if C is ctr(_, _). Loosely speaking, this specialisation enforces that if an utterance u supports the claim, then its ID is odd (we say that u is odd); otherwise, its ID is even (and we say that u is even). For any u and u′, if u is odd and u′ is even, we say u and u′ are of opposite types. We assume all dialogues in the rest of this paper follow this specialisation.
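A small helper (our own illustration) makes the parity rule operational:

```python
def utterance_is_odd(content_kind, target_is_odd=None):
    """Parity of a new utterance under the specialisation: the opening claim is odd;
    rl/asm utterances keep the target's parity; ctr utterances flip it."""
    if content_kind == "claim":
        return True
    if content_kind in ("rl", "asm"):
        return target_is_odd
    if content_kind == "ctr":
        return not target_is_odd
    raise ValueError("pass utterances carry no parity in this sketch")

# E.g. a ctr targeting the odd utterance with ID 7 must get an even ID (such as 8).
print(utterance_is_odd("ctr", target_is_odd=True))  # False
```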
We will adopt the following notation: U^i stands for all utterances from ai in U, namely of the form ⟨ai, _, _, _, _⟩, and, for any utterance u = ⟨ai, _, _, _, _⟩, we also say u is made by ai (to aj). Moreover, given δ ∈ D, we use U^i_δ to denote the set of regular utterances made by ai in δ. With an abuse of notation, we say that a content C (a rule, an assumption, or a contrary) is in U^i_δ iff C is the content of some u ∈ U^i_δ.
4. Strategy-move Functions
In ABA dialogues, agents make utterances that contain rules, assumptions and contraries.
The selection of utterances must respect both the integrity of the dialogue and the aims of the agents. Legal-move functions [FT11] are used to maintain this integrity. Strategy-move functions (see also [FT12]) can be used to specify agents' behaviours that are suitable for their aims and for the aims of the dialogues they are engaged in.
Definition 4.1. [FT12] A strategy-move function for agent ai (i = 1, 2) is a mapping φ : D × Λ ↦ 2^{U^i} such that, given λ ∈ Λ and δ ∈ D: φ(δ, λ) ⊆ λ(δ).
Given a dialogue D^{a_i}_{a_j}(χ) = δ = ⟨u1, ..., un⟩ compatible with a legal-move function λ and a strategy-move function φ for ak (k = 1, 2), if, for all um = ⟨ak, _, _, _, _⟩, 1 < m ≤ n, um ∈ φ(⟨u1, ..., um−1⟩, λ), then we say that δ is constructed with φ wrt. ak and that ak uses φ in δ.
We use Φ to denote the set of all strategy-move functions.
In the remainder of this section we define a number of strategy-move functions that
we will then use for persuasion. The first strategy-move function characterises the “truthfulness” of agents. If a dialogue is constructed with a truthful strategy-move function wrt.
ak , then ak only utters rules, assumptions, and contraries from its ABA framework.
Definition 4.2. [FT12] A truthful strategy-move function φ ∈ Φ for agent ak (k ∈ {1, 2}) is such that, given a dialogue δ ∈ D and a legal-move function λ ∈ Λ, for all u ∈ φ(δ, λ) made by ak, the content C of u is such that: if C = rl(ρ), then ρ ∈ Rk; if C = asm(α), then α ∈ Ak; if C = ctr(β, β′), then β′ ∈ Ck(β). With an abuse of notation, we refer to a generic truthful strategy-move function as φt.
The second strategy-move function we define characterises the “completeness” of
an agent’s utterances: the thorough strategy-move function specifies that agents must not
utter π if there is any other “truthful” utterance allowed by the legal-move function.
Definition 4.3. [FT12] A thorough strategy-move function φ ∈ Φ for agent ak (k ∈ {1, 2}) is such that, given δ ∈ D such that δ is constructed with a truthful strategy-move function wrt. ak, and given λ ∈ Λ, for all u ∈ φ(δ, λ) made by ak, if u is a pass-utterance then there exists no regular utterance u′ ∈ λ(δ) ∩ U^k such that δ ◦ u′ is constructed with a truthful strategy-move function. With an abuse of notation, we refer to a generic thorough strategy-move function as φh.
We refine the notion of thorough strategy-move function to give two further strategy-move functions that represent the "proponent" and "opponent" roles in a dialogue. The proponent strategy-move function defines agents that only make utterances that support or defend the claim (odd utterances), whereas the opponent strategy-move function defines agents that only make utterances that attack the claim or some of its defences (even utterances). Both strategy-move functions also allow agents to pass.
Definition 4.4. A proponent strategy-move function φ ∈ Φ for agent ak (k ∈ {1, 2}) is such that, given δ ∈ D such that δ is constructed with a thorough strategy-move function φh wrt. ak, and given λ ∈ Λ,
• if φh(δ, λ) ⊆ PASS, then φ(δ, λ) ⊆ PASS;
• otherwise, let S = {X | X ∈ φh(δ, λ), X is odd}: if S = {}, then φ(δ, λ) ⊆ PASS; otherwise, φ(δ, λ) = S.
We refer to a generic proponent strategy-move function as φp.
Definition 4.5. An opponent strategy-move function φ ∈ Φ for agent ak (k ∈ {1, 2}) is such that, given δ ∈ D such that δ is constructed with a thorough strategy-move function φh wrt. ak, and given λ ∈ Λ,
• if φh(δ, λ) ⊆ PASS, then φ(δ, λ) ⊆ PASS;
• otherwise, let S = {X | X ∈ φh(δ, λ), X is even}: if S = {}, then φ(δ, λ) ⊆ PASS; otherwise, φ(δ, λ) = S.
We refer to a generic opponent strategy-move function as φo.
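Operationally, these strategy-move functions can be read as filters over the utterances allowed by the legal-move function. The Python sketch below is our own illustration (reusing the tagged-content utterance encoding from the Section 2 sketch; all names are hypothetical); `candidates` stands for λ(δ) restricted to agent k's possible utterances:

```python
def truthful(candidates, rules_k, asms_k, ctrs_k):
    """Def. 4.2: keep only utterances whose content is in agent k's own framework."""
    def ok(u):
        kind, *args = u.content
        return (kind == "pass"
                or (kind == "rl" and (args[0], tuple(args[1])) in rules_k)
                or (kind == "asm" and args[0] in asms_k)
                or (kind == "ctr" and args[1] in ctrs_k.get(args[0], set())))
    return [u for u in candidates if ok(u)]

def thorough(candidates):
    """Def. 4.3: never pass while a regular (truthful) utterance is still available."""
    regular = [u for u in candidates if u.content[0] != "pass"]
    return regular if regular else candidates

def proponent(candidates):
    """Def. 4.4: utter only odd (claim-supporting) utterances; otherwise pass."""
    regular = [u for u in candidates if u.content[0] != "pass"]
    odd = [u for u in regular if u.id % 2 == 1]
    return odd if odd else [u for u in candidates if u.content[0] == "pass"]

def opponent(candidates):
    """Def. 4.5: utter only even (attacking) utterances; otherwise pass."""
    regular = [u for u in candidates if u.content[0] != "pass"]
    even = [u for u in regular if u.id % 2 == 0]
    return even if even else [u for u in candidates if u.content[0] == "pass"]
```

Composing truthful, then thorough, then the parity filter mirrors the way Definitions 4.2-4.5 build on one another.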
5. Persuasion
We model persuasion as follows: the persuader/proponent (a1) utters arguments that support the topic (χ), whereas the persuadee/opponent (a2) attacks the persuader by uttering counter-arguments. We equate persuasion to S-acceptability of the topic in the ABA framework drawn from a dialogue, as follows:
Definition 5.1. Given a dialogue δ = D^{a_1}_{a_2}(χ), a2 is persuaded (by a1) iff χ is S-acceptable in Fδ, for S ∈ {admissible, grounded, ideal}. We use PERSUADED and NOT_PERSUADED to denote that a2 is persuaded or not, resp., given a dialogue.
We link persuasion and mechanism design as follows. We define the types of agents
as their ABA frameworks:
Definition 5.2. The types of agents a1, a2 are θ1 = a1 and θ2 = a2.
In ABA-dialogues, agents interact by putting forward rules, assumptions and contraries about a topic, conveyed as contents of utterances. We hence view frameworks as
actions and dialogues as strategies, in mechanism design terms, as follows:
Definition 5.3. The action spaces for agents a1 and a2 are Σ1 = Σ2 = AF(L).
Definition 5.4. Given a dialogue D^{a_i}_{a_j}(χ) = δ, the dialogue strategy s^δ_k for ak (i, j, k = 1, 2, i ≠ j) wrt. δ is such that, given the framework Fδ drawn from δ, s^δ_k(θk, Fδ) = σk, where σk = ⟨L, Rσk, Aσk, Cσk⟩ and:
• Rσk = {ρ | ⟨ak, _, _, rl(ρ), _⟩ is in δ},
• Aσk = {α | ⟨ak, _, _, asm(α), _⟩ is in δ},
• Cσk is such that, for every α ∈ Aσk, Cσk(α) = {β | ⟨ak, _, _, ctr(α, β), _⟩ is in δ}.
We say that δ is the dialogue of s^δ = (s^δ_1, s^δ_2).
Note that the framework drawn from a dialogue is equal to the joint framework of the actions σ1, σ2 given by the strategies of the two agents, i.e. Fδ = σ1 ⊎ σ2.
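In code, the action σk is just the projection of the drawn framework onto agent k's utterances, and joining the two projections pointwise gives back Fδ. A sketch of ours, on the Utt encoding used earlier:

```python
def action_of(agent, delta):
    """sigma_k of Def. 5.4: the sub-framework contributed by `agent`'s utterances."""
    rules, asms, ctrs = set(), set(), {}
    for u in delta:
        if u.frm != agent:
            continue
        kind, *args = u.content
        if kind == "rl":
            rules.add((args[0], tuple(args[1])))
        elif kind == "asm":
            asms.add(args[0])
        elif kind == "ctr":
            ctrs.setdefault(args[0], set()).add(args[1])
    return rules, asms, ctrs

def join(f1, f2):
    """The join F1 ⊎ F2: componentwise union, with contraries merged pointwise."""
    ctrs = {a: f1[2].get(a, set()) | f2[2].get(a, set())
            for a in f1[2].keys() | f2[2].keys()}
    return f1[0] | f2[0], f1[1] | f2[1], ctrs

# For any dialogue delta: join(action_of("a1", delta), action_of("a2", delta))
# coincides with the framework drawn from delta.
```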
Since a framework can be drawn from a dialogue, we define the outcomes of persuasion dialogues to be the ABA frameworks drawn from dialogues, as follows:
Definition 5.5. The outcomes are O = {F | F ∈ AF(L) and F = Fδ for some δ ∈ D}.
To define the utility functions, we first define the payments of utterances and dialogues. We then carry the payment of a dialogue over to the ABA framework drawn from the dialogue. Hence the payments of agent actions are linked to the outcomes.
We consider the payment of an utterance as the "cost" to the agent that makes the utterance. The payment is 0 if the utterance is considered honest by the other agent, and positive if it is considered a lie. We treat the payment probabilistically: it is computed as the product of the probability of the other agent believing the utterance to be a lie and the "damage" of such a lie.
Definition 5.6. Let i, j = 1, 2, j ≠ i. Given an utterance u ∈ U^i, the payment of u wrt. ai is T^u_i = d^u_i · p^u_i, where d^u_i ≥ 0 is the damage to agent ai if u is considered a lie by aj, and 0 ≤ p^u_i ≤ 1 is the probability of aj considering u a lie. Given an utterance u ∈ U^j, the payment of u wrt. ai is T^u_i = 0.
Given a dialogue δ = ⟨u1, ..., un⟩ between ai and aj, let T = {T^{uk}_i | uk is in δ and T^{uk}_i > 0}. Then the payment of δ for ai is: T^δ_i = min(T) if T ≠ {}, and T^δ_i = 0 otherwise.⁷ Moreover, the payment of Fδ for ai is T^δ_i.
⁷ For S = {x1, ..., xm} ⊆ ℝ, if l = min(S), then l ∈ S and ∄k ∈ S, k ≠ l, s.t. k < l.
We do not address here the problem of determining the damage and probability values, and assume that these are given. As will become clear below, we use min(T) = T^δ_i to ensure that the payment is higher than the reward should an agent lie (even the slightest lie would result in a negative utility).
With payments defined, we define the utility functions for agents in persuasion:
Definition 5.7. Let W^χ_i ≥ 0 be the reward of topic χ for agent ai, for i = 1, 2. Let Fδ be the ABA framework drawn from a dialogue δ. Then, the utility functions u1, u2 for a1, a2 resp. are defined as:
• if PERSUADED: u1(Fδ, θ1) = W^χ_1 − T^δ_1 and u2(Fδ, θ2) = 0;
• if NOT_PERSUADED: u1(Fδ, θ1) = 0 and u2(Fδ, θ2) = W^χ_2 − T^δ_2.
Since we always associate topics with dialogues, we will also say that the reward of dialogue δ for ai is W^χ_i if W^χ_i is the reward for the topic of δ.
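The payment and utility definitions translate directly into code. In the sketch below (ours; on the Utt encoding used earlier), the damage d and probability p are supplied per utterance identifier, since the paper takes these values as given:

```python
def payment(agent, delta, d, p):
    """T_i^delta of Def. 5.6: the least positive utterance payment d_u * p_u among
    agent i's utterances in delta, or 0 if all of them are 0."""
    costs = [d[u.id] * p[u.id] for u in delta if u.frm == agent]
    positive = [c for c in costs if c > 0]
    return min(positive) if positive else 0.0

def utilities(delta, persuaded, W, d, p):
    """Def. 5.7: (u1, u2) given the dialogue outcome; W maps agents to rewards."""
    if persuaded:
        return W["a1"] - payment("a1", delta, d, p), 0.0
    return 0.0, W["a2"] - payment("a2", delta, d, p)
```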
The outcome function maps agent actions to outcomes as follows.
Definition 5.8. The outcome function for σ1 ∈ Σ1, σ2 ∈ Σ2 is: gp(σ1, σ2) = σ1 ⊎ σ2.
We show that the dialogue strategy s^δ is a dominant strategy for agents in persuasion under the following two conditions: (1) introducing in the dialogue rules, assumptions, and contraries that are not in an agent's ABA framework makes the payment higher than the reward of the dialogue, whereas uttering rules, assumptions and contraries in an agent's ABA framework keeps the payment lower than the reward of the dialogue; (2) there is no overlap in content between utterances from ai and aj.
Theorem 5.1. Given D^{a_1}_{a_2}(χ) = δ, if δ is constructed with φp for a1 and with φo for a2, then the dialogue strategy s^δ is dominant under the conditions that:
1. • for all C in ai (i = 1, 2), if u = ⟨ai, aj, _, C, _⟩ is in δ then T^u_i < W^χ_i;
   • for all C not in ai (i = 1, 2), if u = ⟨ai, aj, _, C, _⟩ is in δ then T^u_i > W^χ_i;
2. for all C1 in U^1_δ and C2 in U^2_δ, C1 ≠ C2.
Proof. Let σ1, σ2 be the actions given by s^δ. To show that s^δ is dominant is to show that there is no other strategy which gives actions σ′1 and σ′2 s.t. ui(g(σ′1, σ′2), θi) > ui(g(σ1, σ2), θi), i ∈ {1, 2}. We show this by examining properties of φp and φo.
Firstly, we show that truthfulness yields better utility than introducing lies in a dialogue. Since a1 uses φp and a2 uses φo, σi ⊑ ai. Hence, by condition (1), ui ≥ 0. Also by condition (1), for any σ′i with content not in ai, ui(gp(σ′1, σ′2), θi) ≤ 0. Hence, any strategy that gives σi with contents not in ai is no better than s^δ.
Secondly, we show that disclosing information that is in ai but not in σi produces no higher utility, as a1 utters odd utterances and a2 utters even utterances that form σ1 and σ2, resp. Since the contents of odd utterances are in arguments that support/defend the claim, whereas the contents of even utterances are in arguments that attack the claim/its defences, neither agent gains higher utility by making utterances of the opposite type.
Thirdly, we show that disclosing less information than allowed by φp and φo produces no higher utility for a1 and a2, resp. By condition (2), there is no utterance by ai with content that can be used in arguments uttered by aj. Hence, disclosing less information than allowed by φp and φo yields no higher utility.
We define the social choice function for persuasion as follows:
Definition 5.9. The persuasion social choice function is fp(θ1, θ2) = F, defined inductively by:
• F0 is the framework in AF(L) with empty sets of rules and assumptions;
• F1 = F^P_1 ⊎ F^O_1, where F^P_1 = ⟨L, R^P_1, A^P_1, C^P_1⟩ (the proponent sub-framework of F) and F^O_1 = ⟨L, R^O_1, A^O_1, C^O_1⟩ (the opponent sub-framework of F), s.t.:
 ∗ R^P_1 = {X | X ∈ R1, X is directly related to χ},
 ∗ A^P_1 = {X | X ∈ A1, X is directly related to χ},
 ∗ C^P_1 is such that, for any α ∈ A^P_1, C^P_1(α) = {};
 ∗ R^O_1 = {}, A^O_1 = {}, and C^O_1 is such that, for any α ∈ A^O_1, C^O_1(α) = {};
• given Fi = F^P_i ⊎ F^O_i, F_{i+1} = F^P_{i+1} ⊎ F^O_{i+1}, where F^P_{i+1} = ⟨L, R^P_{i+1}, A^P_{i+1}, C^P_{i+1}⟩ and F^O_{i+1} = ⟨L, R^O_{i+1}, A^O_{i+1}, C^O_{i+1}⟩, s.t.:
 ∗ R^P_{i+1} = {X | X ∈ R1, X is directly related to R^P_i ∪ C^P_i},⁸
 ∗ A^P_{i+1} = {X | X ∈ A1, X is directly related to R^P_i ∪ C^P_i},
 ∗ C^P_{i+1} is such that, for any α ∈ A^P_{i+1}, C^P_{i+1}(α) = {β | α ∈ A^O_i and β ∈ C1(α)},
 ∗ R^O_{i+1} = {X | X ∈ R2, X is directly related to R^O_i ∪ C^O_i},
 ∗ A^O_{i+1} = {X | X ∈ A2, X is directly related to R^O_i ∪ C^O_i},
 ∗ C^O_{i+1} is such that, for any α ∈ A^O_{i+1}, C^O_{i+1}(α) = {β | α ∈ A^P_i and β ∈ C2(α)}.
The persuasion social choice function fp constructs F. Roughly speaking, F is defined so that it contains the rules, assumptions, and contraries from the persuader that support the claim and the rules, assumptions, and contraries from the persuadee that attack the claim. Given the construction of such F, the following theorem holds.
Theorem 5.2. Under conditions (1) and (2) specified in Theorem 5.1, the mechanism M = (Σ, gp) implements the persuasion social choice function fp.
Proof. Since s^δ is a dominant strategy (under conditions (1) and (2)), to show that M implements fp is to show that gp(σ1, σ2) = fp(θ1, θ2), i.e., F = fp(θ1, θ2) = σ1 ⊎ σ2, namely that every rule, assumption and contrary in F is also in σ1 ⊎ σ2 and vice versa. This is the case as, by the definitions of φp and φo, the contents of the utterances made by a1 constitute the proponent sub-framework of F and the contents of the utterances made by a2 constitute the opponent sub-framework of F. Hence, F = σ1 ⊎ σ2.
⁸ With an abuse of notation, we say that X is directly related to S, where X is a rule, an assumption, or a contrary and S is a set of rules, assumptions and contraries, iff X is directly related to some s ∈ S.
We summarise the connection between mechanism design and dialogues in Table 1.

Table 1. Dialogue as mechanism

| Mechanism Design Concepts (i ∈ {1, ..., I}) | | Persuasion Dialogue (I = 2) |
|---|---|---|
| Type space (of agent i) | Θi | ⟨L, Ri, Ai, Ci⟩ |
| Outcomes | O | definition 5.1 |
| Utility function (for agent i) | ui : O × Θi → ℝ | definition 5.7 |
| Action space | Σ = Σ1 × ... × ΣI | AF(L) (definition 5.3) |
| Strategy (of agent i) | si : Θi × Σ → Σi | s^δ (definition 5.4) |
| Social choice function | f : Θ1 × ... × ΘI → O | fp (definition 5.9) |

To illustrate our proposal, let a1 and a2 be as follows:
• R1 = {χ ← q1; χ ← q2; q1 ← α1; q2 ← α2; c2 ←; c3 ←},
• A1 = {α1, α2},
• C1 is: C1(α1) = {c1}; C1(α2) = {c2},
• R2 = {},
• A2 = {α1, α2, α3},
• C2 is: C2(α1) = {α3}; C2(α2) = {c2}; C2(α3) = {c3}.
A persuasion dialogue may proceed as shown in Table 2. The ABA framework drawn from the dialogue, Fδ, is the following:
• Rδ = {χ ← q1; q1 ← α1; χ ← q2; q2 ← α2}
• Aδ = {α1, α2, α3}
• Cδ is: Cδ(α1) = {α3}; Cδ(α2) = {c2}.
Clearly, Fδ ⊑ a1 ⊎ a2, so both a1 and a2 have uttered information from their ABA frameworks. Since they have also hidden parts of their knowledge bases, Fδ ≠ a1 ⊎ a2.
Table 2. A persuasion dialogue example.

| a1 | a2 |
|---|---|
| ⟨a1, a2, 0, claim(χ), 1⟩ | |
| ⟨a1, a2, 1, rl(χ ← q1), 3⟩ | |
| ⟨a1, a2, 3, rl(q1 ← α1), 5⟩ | |
| ⟨a1, a2, 5, asm(α1), 7⟩ | |
| | ⟨a2, a1, 7, ctr(α1, α3), 8⟩ |
| | ⟨a2, a1, 8, asm(α3), 10⟩ |
| ⟨a1, a2, 1, rl(χ ← q2), 11⟩ | |
| ⟨a1, a2, 11, rl(q2 ← α2), 13⟩ | |
| ⟨a1, a2, 13, asm(α2), 15⟩ | |
| | ⟨a2, a1, 15, ctr(α2, c2), 16⟩ |
| | ⟨a2, a1, 0, π, 18⟩ |
| ⟨a1, a2, 0, π, 19⟩ | |
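Plugging this Fδ into the brute-force admissibility sketch of Section 2 (with α3's missing contrary patched to the fresh sentence new, as prescribed in Section 3) lets us check the outcome of the example; this is our own verification, not a computation from the paper:

```python
from itertools import chain, combinations

# F_delta from Table 2; alpha3 has no uttered contrary, so C(alpha3) = {"new"}.
RULES = {("chi", ("q1",)), ("q1", ("alpha1",)), ("chi", ("q2",)), ("q2", ("alpha2",))}
ASMS = {"alpha1", "alpha2", "alpha3"}
CTRS = {"alpha1": {"alpha3"}, "alpha2": {"c2"}, "alpha3": {"new"}}

def derivable(asms):
    known, changed = set(asms), True
    while changed:
        changed = False
        for head, body in RULES:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def attacks(s, t):
    derived = derivable(s)
    return any(c in derived for b in t for c in CTRS.get(b, set()))

def admissible(s):
    subsets = chain.from_iterable(combinations(ASMS, r) for r in range(len(ASMS) + 1))
    return not attacks(s, s) and all(attacks(s, set(t)) for t in subsets if attacks(set(t), s))

# {alpha2} supports chi (via q2) and has no attacker, since c2 is not derivable in F_delta:
print(admissible({"alpha2"}), "chi" in derivable({"alpha2"}))  # True True: a2 is PERSUADED
```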
6. Related Work
Argumentation dialogues have been studied by various researchers (e.g., see [MP09, Pra06]). The dialogue model of this work uses elements from the model presented in [FT11]. However, the latter focuses on presenting the dialogue model and proving its soundness wrt. some argumentation semantics, whereas this work studies, from a game-theoretic perspective, the strategic behaviour of agents participating in persuasion dialogues.
[Pra06] and [BH09] present dialogue models for persuasion and inquiry, resp. However, neither studies dialogues from a game-theoretic perspective.
[AM02] present an early work on agent strategies in persuasion dialogues. Their approach derives dialogue strategies from pre-defined agent profiles, e.g., an agreeable agent that accepts everything, an argumentative agent that challenges everything, etc. They do not link dialogue results with agents' internal beliefs.
[KMM04] present a study of strategies used in agent interactions. However, the strategies in [KMM04] are more akin to rules than to descriptions of agent profiles. Also, the proposal of [KMM04] is not concerned with dialogues, and hence is orthogonal to our work.
[BA11] present a study of dialogue systems that support deliberation dialogues. Their underlying argumentation framework is an instantiated value-based argumentation framework; hence their dialogue model and results concern agents with preferences. Their system relies upon agents estimating their counterparts' preferences and does not study strategies using mechanism design.
Introducing mechanism design into argumentation research is relatively new. [RL11] present a few examples of logical mechanism design. However, the main point of that work is to demonstrate the feasibility of introducing mechanism design as a tool in the design of logical inference procedures, whereas our paper focuses on directly applying mechanism design to a particular type of argumentation-based dialogue, for persuasion.
[RLT09] and [PLR10] introduced Argumentation Mechanism Design as a paradigm for studying argumentation with game-theoretic techniques. These two papers show that, for the purpose of having more arguments accepted wrt. various semantics, agents will disclose all their arguments iff no argument in an agent's argumentation framework attacks, directly or indirectly, any other argument in that framework. Unlike those works, our work focuses on argumentation dialogues. Furthermore, [RLT09] and [PLR10] use the abstract argumentation (AA) frameworks defined in [Dun95], whereas our work is based on ABA.
[Cam09] presents a study of three different types of dishonesty: lies, BS, and deception, where a lie is uttering information that is directly inconsistent with the agent's knowledge base; BS is uttering information that is made up, i.e., not necessarily inconsistent with the agent's knowledge base, but not existing in it; and deception is hiding information. Our definition of lying captures the first two types of dishonesty: our truthful strategy-move function can be viewed as ruling out both lies and BS. We take the view that deception can be purposefully allowed in persuasion.
[KS03] present a study of collaborative agent behaviours for resource sharing. Though this study involves game-theoretic aspects, it is linked neither to argumentation nor to formal argumentation-based dialogues. Rather, it studies the resource-sharing problem with a game-theory-based approach under specific constraints.
7. Conclusions
We have studied strategies that agents may use in persuasion dialogues. Building upon an existing dialogue framework, we have defined strategy-move functions that describe suitable utterances for agents in persuasion. We have brought mechanism design techniques into argumentation-based dialogues by mapping dialogue components onto mechanism design concepts. We have proved that, under specified conditions, neither the persuader agent nor the persuadee agent will lie in dialogues. We have also defined a persuasion social choice function and proved that the dialogue strategy implements it.
As initial work on introducing mechanism design into argumentation-based dialogues, the two main contributions of this work are: (1) mapping various mechanism design concepts onto argumentation and argumentation-based dialogues; and (2) showing that a dialogue mechanism for persuasion has desired properties under specific conditions. Future work includes studying other strategies for persuasion and studying mechanism design properties for other types of dialogues, e.g., deliberation and negotiation.
Acknowledgements
This research has been partially supported by the UK EPSRC grant EP/J020915/1.
References
[AM02] L. Amgoud and N. Maudet. Strategical considerations for argumentative agents (preliminary report). In Proc. 9th International Workshop on Non-Monotonic Reasoning (NMR), pages 409–417, Toulouse, 2002.
[BA11] E. Black and K. Atkinson. Choosing persuasive arguments for action. In Proc. 10th International Conference on Autonomous Agents and Multiagent Systems, 2011.
[BH09] E. Black and A. Hunter. An inquiry dialogue system. JAAMAS, 19:173–209, 2009.
[Cam09] M. Caminada. Truth, lies and bullshit; distinguishing classes of dishonesty. In Proc. IJCAI Workshop on Social Simulation, pages 39–50, 2009.
[DKT09] P. M. Dung, R. A. Kowalski, and F. Toni. Assumption-based argumentation. In Argumentation in AI, pages 25–44. Springer, 2009.
[Dun95] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357, 1995.
[FT11] X. Fan and F. Toni. Assumption-based argumentation dialogues. In Proc. IJCAI, 2011.
[FT12] X. Fan and F. Toni. Agent strategies for ABA-based information-seeking and inquiry dialogues. In Proc. ECAI, 2012.
[Jac03] M. Jackson. Mechanism theory. In U. Derigs, editor, Optimization and Operations Research. EOLSS Publishers, Oxford, UK, 2003.
[KMM04] A. Kakas, N. Maudet, and P. Moraitis. Layered strategies and protocols for argumentation-based agent interaction. In Proc. 1st International Conference on Argumentation in Multi-Agent Systems, pages 64–77. Springer, 2004.
[KS03] S. Kraus and O. Schechter. Strategic-negotiation for sharing a resource between two agents. Computational Intelligence, 19, 2003.
[MP09] P. McBurney and S. Parsons. Dialogue games for agent argumentation. In Argumentation in AI, pages 261–280. Springer, 2009.
[PLR10] S. Pan, K. Larson, and I. Rahwan. Argumentation mechanism design for preferred semantics. In Proc. COMMA, pages 403–414, 2010.
[Pra05] H. Prakken. Coherence and flexibility in dialogue games for argumentation. JLC, 15:1009–1040, 2005.
[Pra06] H. Prakken. Formal systems for persuasion dialogue. Knowledge Eng. Review, 21(2):163–188, 2006.
[RL11] I. Rahwan and K. Larson. Logical mechanism design. Knowledge Eng. Review, 26(1):61–69, 2011.
[RLT09] I. Rahwan, K. Larson, and F. A. Tohmé. A characterisation of strategy-proofness for grounded argumentation semantics. In Proc. IJCAI, pages 251–256, 2009.
[WK95] D. Walton and E. Krabbe. Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press, 1995.