Argumentation
Semantics for reasoning in the presence of incomplete, uncertain and conflicting information

Sanjay Modgil¹ and Leila Amgoud²
¹ Agents and Intelligent Systems Group, King's College London, UK
² French National Research Agency, Toulouse, France

Overview
- Introduction
- What is Argumentation? Agents and Arguments
- Abstract Argumentation Frameworks
- Argumentation Based Agent Reasoning: reasoning about beliefs, goals and actions
- Argumentation Based Dialogues: distributed argumentation based reasoning for persuasion, negotiation, ...
Introduction
Argumentation theory
- The role of argumentation in natural human reasoning and dialogue has been studied in philosophy, linguistics, psychology and communication studies
- More recently, logical models have been proposed for modelling nonmonotonic (common-sense) reasoning and communication in the presence of uncertain, incomplete and conflicting information
- The appeal resides in the development of tractable technologies and in the intuitive, modular nature of argumentation, which is akin to the way humans reason
What is Argumentation?
What is an Argument?
- A set of premises offered in support of a conclusion / claim
- Example (argument A1):
  - the claim: information I about Tony should be published
  - the premises: Tony has political responsibilities, and I is in the national interest, and if a person has political responsibilities and info about that person is in the national interest then that info should be published

What is Argumentation?
- The process whereby arguments are constructed, exchanged and evaluated in light of their interactions with other arguments:
  - A1 (publish info about Tony because political responsibilities ...)
  - A2 (Tony does not have political responsibilities because Tony resigned from parliament and if a person resigns ...)
  - A3 (Tony does have political responsibilities because he is now middle east envoy and if a person is ...)
Agents and Arguments
Negotiation dialogue between a buyer and seller of cars:
- Seller (Offer: Renault)
- Buyer (Reject: Renault)
- Seller (Why?)
- Buyer (Argue: Renault is French, and French cars are unsafe)
- Seller (Argue: Renaults are not unsafe as they have been awarded safest car in Europe by the EU)
- Buyer (Accept)

Agents and Arguments
1) arguments = proofs of claims in an underlying logic, where claims represent beliefs, goals and actions
2) interactions between arguments are defined
3) winning arguments are evaluated in the argument graph

- The inherently dialectical nature of argumentation theory provides principled ways in which to structure the exchange of, and reasoning about, arguments for proposals / statements between human / automated agents (e.g. in persuasion, negotiation and deliberation dialogues)
- The exchange of arguments provides for agreements that would not be reached in simple handshaking protocols
Dung's Abstract Argumentation Framework*
- AF = (Args, Attack), where Attack ⊆ Args × Args
- A calculus of opposition is applied to determine the winning arguments
- Figure: A2 (not political) attacks A1 (publish); A2 and A3 (political) attack each other
- (Args, Attack) abstracts from the underlying logic based definition of Args and Attack:
  - Args = proofs of conclusions (claims)
  - Attack = logic specific definition of conflict

* P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321-357, 1995.
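To make the abstraction concrete, here is a minimal Python sketch (my own representation, not code from the cited paper) of the Tony framework as plain data, with the attack relation read off the figure above.

```python
# Abstract argumentation framework: arguments are opaque labels, Attack is a set of pairs.
Args = {"A1", "A2", "A3"}                 # A1: publish, A2: Tony not political, A3: Tony political
Attack = {("A2", "A1"),                   # "not political" attacks "publish"
          ("A2", "A3"), ("A3", "A2")}     # A2 and A3 attack each other

def attackers(a, attack=Attack):
    """The arguments attacking a."""
    return {b for (b, c) in attack if c == a}

print(attackers("A1"))   # {'A2'}
```

Later sketches build the evaluation of winning arguments on top of this representation.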
Propositional Classical Logic Example*
- Σ is a set of propositional classical logic formulae
- Args = { (H, h) | H ⊆ Σ, H is consistent, H ⊢ h, H is minimal }
- rebut attack: (H1, h1) and (H2, h2) rebut attack each other iff h1 ≡ ¬h2
- undercut attack: (H1, h1) undercut attacks (H2, h2) iff h1 ≡ ¬h for some h ∈ H2
- Example: Σ = { nat, pol, nat ∧ pol → pub, res, res → ¬pol, mid, mid → pol }
  - ( {nat, pol, nat ∧ pol → pub}, pub )
  - ( {res, res → ¬pol}, ¬pol )
  - ( {mid, mid → pol}, pol )
  - the last two rebut attack each other, and the second undercut attacks the first

* L. Amgoud and C. Cayrol. A reasoning model based on the production of acceptable arguments. Annals of Mathematics and Artificial Intelligence, 34(1-3):197-215, 2002.

Rule based example
- arguments are built from sequences of rules with literals, strong negation, and negation as failure
- rebut attacks arise between arguments with complementary claims, e.g. one built with [b, b → c] and one built with [d, d → ¬c]
- undercut attacks (of two types) target assumptions made within an argument, e.g. an argument relying on not e is undercut by an argument [→ e]
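A small sketch of how the rebut and undercut checks for the propositional example can be computed; the encoding (formulae as strings, '~' for negation) is my own, and the purely syntactic tests suffice only because the claims here are literals, whereas the definitions above are stated up to logical equivalence.

```python
def neg(l):
    """Complement of a literal written as a string."""
    return l[1:] if l.startswith("~") else "~" + l

# (premises, claim) pairs for the three arguments built from Sigma.
A1 = (frozenset({"nat", "pol", "nat & pol -> pub"}), "pub")
A2 = (frozenset({"res", "res -> ~pol"}), "~pol")
A3 = (frozenset({"mid", "mid -> pol"}), "pol")

def rebut(x, y):
    return x[1] == neg(y[1])        # contradictory claims

def undercut(x, y):
    return neg(x[1]) in y[0]        # x's claim negates one of y's premises

args = {"A1": A1, "A2": A2, "A3": A3}
for n1, x in args.items():
    for n2, y in args.items():
        if n1 != n2 and (rebut(x, y) or undercut(x, y)):
            print(n1, "attacks", n2)
# A2 attacks A1 (undercut); A2 and A3 attack each other (rebut)
```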
Back to Abstract Argumentation Frameworks
Dung's Calculus of Opposition
- Let AF = (Args, Attack) and S ⊆ Args. A ∈ Args is acceptable w.r.t. S iff for every B ∈ Args s.t. (B,A) ∈ Attack there is a C ∈ S s.t. (C,B) ∈ Attack (C defends A)
- Intuitively, in defending an argument A, any C that I use to defend A against an attack by some B must not conflict with A and must itself be defendable against any attack
- What are the justified / rejected / undecided arguments?

Dung's Acceptability Semantics
Let AF = (Args, Attack) and let S be a conflict free subset of Args (no two arguments in S attack each other). Then:
- S is an admissible extension iff every argument in S is acceptable w.r.t. S
- S is a complete extension iff S is admissible and every argument acceptable w.r.t. S is in S
- S is the grounded extension iff it is the smallest complete extension
- S is a preferred extension iff it is a maximal complete extension
- S is a stable extension iff every argument not in S is attacked by an argument in S

Example
- A: forecast hot; B: forecast cold (A and B attack each other)
- C: if not forecast hot and not forecast cold then plan for run (attacked by both A and B)
- D: if run planned then cancel barbecue (attacked by C)
- ∅, {A}, {A,D} and {B,D} are admissible; {C} and {A,C} are not admissible
- {A} is not complete; ∅, {A,D} and {B,D} are complete extensions
- ∅ is the smallest complete (grounded) extension
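For a framework this small the semantics can be checked by brute force. The following sketch (my own helper code) enumerates subsets of the forecast example and recovers the extensions described above.

```python
from itertools import chain, combinations

Args = ["A", "B", "C", "D"]
Attack = {("A", "B"), ("B", "A"),      # hot vs cold forecasts
          ("A", "C"), ("B", "C"),      # either forecast attacks "plan for run"
          ("C", "D")}                  # "plan for run" attacks "cancel barbecue"

def conflict_free(S):
    return not any((a, b) in Attack for a in S for b in S)

def acceptable(a, S):
    """Every attacker of a is attacked by some member of S."""
    return all(any((c, b) in Attack for c in S)
               for b in Args if (b, a) in Attack)

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

admissible = [set(S) for S in subsets(Args)
              if conflict_free(S) and all(acceptable(a, S) for a in S)]
complete   = [S for S in admissible
              if all(a in S for a in Args if acceptable(a, S))]
grounded   = min(complete, key=len)            # the smallest complete extension
preferred  = [S for S in complete if not any(S < T for T in complete)]
stable     = [S for S in complete
              if all(any((b, a) in Attack for b in S) for a in Args if a not in S)]

print(grounded)                                # set(): the grounded extension is empty
print(sorted(map(sorted, preferred)))          # [['A', 'D'], ['B', 'D']]; stable == preferred here
```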
Example (Cont.)
- X is a credulously, respectively sceptically, justified argument under semantics E if X is in at least one, respectively all, E extensions
- X is a rejected argument under semantics E if X is not in any E extension
- X is an undecided argument under semantics E if X is in at least one, but not all, E extensions
- {A,D} and {B,D} are the preferred (and stable) extensions, so A, B and D are credulously justified and D is sceptically justified under preferred semantics
- the grounded extension is ∅, so no argument is sceptically justified under grounded semantics

Argumentation Semantics for Non-monotonic Reasoning in the presence of uncertainty and conflict*
- Abstract (Args, Attack) where Args and Attack are defined by some underlying theory Δ in a logic: Δ |~ α iff α is the claim of a justified argument in Args
- What accounts for the correctness of an inference is that it prevails in the face of opposing inferences
- Dung semantics = a logic neutral, rational means for establishing such standards of correctness (characterising underlying principles of commonsense reasoning)
- Non-monotonic formalisms - logic programming, default logic, auto-epistemic logic and certain forms of circumscription - have all been shown to conform to Dung's semantics*

* A. Bondarenko, P.M. Dung, R.A. Kowalski, and F. Toni. An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence, 93:63-101, 1997.
Default Logic as Argumentation
- A Default Logic theory = (D, W), where D is a set of defaults and W a background theory containing certain knowledge
- default: pre(d) : just(d) / cons(d)
- arguments are of the form (d1, ..., dn), where for each di (1 ≤ i ≤ n) it holds that {cons(d1), ..., cons(di-1)} ∪ W ⊢ pre(di)
- (d1, ..., dn) attacks (d′1, ..., d′m) iff there is some d′i (1 ≤ i ≤ m) such that {cons(d1), ..., cons(dn)} ∪ W ⊢ ¬just(d′i)
- stable semantics ( default logic extensions )

Logic Programming as Argumentation
- Arguments: trees constructed with rules. The children of a rule c ← a1, ..., an, not b1, ..., not bm are rules with heads a1, ..., an
- An argument A attacks an argument B iff A contains a rule with c as its head and B contains a rule with not c in its body
- stable semantics ( stable model semantics )
- grounded semantics ( well founded semantics )
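A toy illustration of the logic programming correspondence, under my own simplified encoding: because every rule in the example program has an empty positive body, each argument is a single rule, and the grounded extension yields the atoms true in the well-founded model.

```python
# A normal logic program as (head, negative_body) pairs.
PROGRAM = [
    ("r", ()),          # r.
    ("q", ("r",)),      # q <- not r.
    ("p", ("q",)),      # p <- not q.
]

args = list(range(len(PROGRAM)))                  # one argument per rule
attacks = {(a, b)
           for a in args for b in args
           if PROGRAM[a][0] in PROGRAM[b][1]}     # a's head occurs under 'not' in b

def grounded(arguments, attack):
    """Iterate F(S) = {A | A is acceptable w.r.t. S} to its least fixpoint."""
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((c, b) in attack for c in s)
                      for b in arguments if (b, a) in attack)}
        if nxt == s:
            return s
        s = nxt

true_atoms = {PROGRAM[a][0] for a in grounded(args, attacks)}
print(true_atoms)    # r and p are true, q is false: the well-founded model
```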
Argument Game Proof Theories*
- Is argument A in the E extension?
- The proponent of A defends against the opponent of A, where each player moves arguments that attack their counterpart's arguments
- Different rules on the legality of moves for different semantics; e.g., when showing that A is in the grounded extension, the proponent cannot repeat an argument he has already moved
- A player wins iff the other player cannot move
- A is in the grounded extension iff the proponent wins every possible line of dispute

* ASPIC. Deliverable D2.1: Theoretical frameworks for argumentation. http://www.argumentation.org/ public deliverables.htm. June, 2004.

Preference based Argumentation Frameworks*
- A preference relation is required to determine a unique set of justified arguments: if A and B attack each other then {A} and {B} are preferred and ∅ is grounded, so no argument is sceptically justified
- Evaluating argument strength: A > B if A is stronger than B according to, e.g.:
  - Temporal principle
  - Specificity principle
  - Last link principle (the last rule used to derive the claim of A is prioritised higher than the last rule used to derive the claim of B)
  - Weakest link principle (the lowest prioritised formula in A is prioritised higher than the lowest prioritised formula in B)
  - Depth of argument
  - Value promoted by argument
- PAF = (Args, Attack, ≥), where ≥ is a partial ordering on Args and (A,B) ∈ Defeat iff (A,B) ∈ Attack and it is not the case that B > A
- (Args, Attack, ≥) is then evaluated as (Args, Defeat)
- Example (Args, Attack, A3 > A2), with A1 (publish), A2 (not political), A3 (political):
  - without the preference: {A3,A1} and {A2} are preferred, ∅ is grounded
  - in (Args, Defeat) with A3 > A2: {A3,A1} is preferred and grounded

* L. Amgoud and C. Cayrol. A reasoning model based on the production of acceptable arguments. Annals of Mathematics and Artificial Intelligence, 34(1-3):197-215, 2002.
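A hedged sketch of the PAF construction on the Tony framework, with the single preference A3 > A2; the representation is mine, not the paper's.

```python
Args = ["A1", "A2", "A3"]
Attack = {("A2", "A1"), ("A2", "A3"), ("A3", "A2")}
Pref = {("A3", "A2")}                                  # A3 > A2

# An attack succeeds as a defeat unless the attacked argument is strictly preferred.
Defeat = {(a, b) for (a, b) in Attack if (b, a) not in Pref}
# A2's attack on A3 is removed; (A2, A1) and (A3, A2) remain.

def grounded(arguments, defeat):
    """Least fixpoint of the characteristic function over (Args, Defeat)."""
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((c, b) in defeat for c in s)
                      for b in arguments if (b, a) in defeat)}
        if nxt == s:
            return s
        s = nxt

print(grounded(Args, Defeat))   # A1 and A3 are justified: A3 defeats A2, which reinstates A1
```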
Arguments and Values
T. Bench-Capon's Value based Argumentation Frameworks*
- (Args, Attack, Values, val, Aud), where:
  - val : Args → Values
  - Aud is a total ordering on Values (an "audience")
- (A,B) ∈ Defeat iff (A,B) ∈ Attack and it is not the case that val(B) >Aud val(A)
- (Args, Attack, Values, val, Aud) is then evaluated as (Args, Defeat)

* T. J. M. Bench-Capon. Persuasion in practical argument using value-based argumentation frameworks. Journal of Logic and Computation, 13(3):429-448, 2003.

Value based argumentation over Action
- A and B are arguments for administering drugs A and B respectively; both arguments promote the value of patient health
- C claims that drug A is costly (value: cost)
- D claims that drug A has a harmful side effect (value: safety)
- Figure: A (health) and B (health) attack each other, and C (cost) and D (safety) attack A; which attacks succeed as defeats depends on the audience's ordering of safety, health and cost
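A sketch of the value-based defeat relation for the drug example, under an assumed audience that ranks safety above health above cost (the slide leaves the audience open, so this ordering is only illustrative).

```python
Args = ["A", "B", "C", "D"]                       # drugs A/B, cost objection C, safety objection D
Attack = {("A", "B"), ("B", "A"), ("C", "A"), ("D", "A")}
val = {"A": "health", "B": "health", "C": "cost", "D": "safety"}
rank = {"safety": 3, "health": 2, "cost": 1}      # the audience Aud: higher = more preferred

# An attack fails as a defeat when the attacked argument promotes a more preferred value.
Defeat = {(x, y) for (x, y) in Attack
          if not rank[val[y]] > rank[val[x]]}

print(sorted(Defeat))
# [('A', 'B'), ('B', 'A'), ('D', 'A')]: C's attack fails (health outranks cost),
# D's attack on A succeeds, so for this audience only B survives of the two drugs.
```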
References
- P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321-357, 1995.
- A. Bondarenko, P.M. Dung, R.A. Kowalski, and F. Toni. An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence, 93:63-101, 1997.
- ASPIC. Deliverable D2.1: Theoretical frameworks for argumentation. http://www.argumentation.org/ public deliverables.htm. June, 2004.
- L. Amgoud and C. Cayrol. A reasoning model based on the production of acceptable arguments. Annals of Mathematics and Artificial Intelligence, 34(1-3):197-215, 2002.
- T. J. M. Bench-Capon. Persuasion in practical argument using value-based argumentation frameworks. Journal of Logic and Computation, 13(3):429-448, 2003.
Overview
Argumentation based Agent Reasoning
- Argumentation as a semantics for reasoning about beliefs, goals and actions
- Extending the argumentation semantics to provide for adaptive, flexible reasoning and behaviour
- Examples: reasoning about beliefs and goals
Argumentation Semantics for Agent Reasoning
- Abstract Dung argumentation framework (Args, Attack), with Args and Attack defined by some underlying theory in a logic
- (Args, Defeat), where (A,B) ∈ Defeat iff (A,B) ∈ Attack and it is not the case that B >Pref A
- Δ |~ α iff α is the claim of a justified argument in Args
- What accounts for the correctness of an inferred belief or goal or choice of action is that it prevails in the face of opposing inferences / choices
- Dung semantics =
  - a logic neutral, rational means for establishing such standards of correctness
  - the underlying principles of commonsense agent reasoning
Argumentation Semantics for Reasoning over Beliefs
- Belief base B = { β1, β2, β3, ..., βn }
- (ArgsB, DefeatB) determines the claims of the justified arguments in ArgsB

Argumentation Semantics for Reasoning over Goals
- Belief base B = { β1, ..., βn }
- Goal base G = { γ1, ..., γn }, e.g. rules of the form: given φ1, ..., ¬φn, then ψ is a goal
- (ArgsG, DefeatG) determines the claims of the justified arguments in ArgsG = a consistent goal set

Argumentation Semantics for Decision Making
- Belief base B, goal base G
- Planning base P = { π1, ..., πn }, e.g. rules of the form: goal ψ is realised by sub-goals ψ1, ..., ψn
- (ArgsP, DefeatP) determines the claim of a justified argument in ArgsP: π1 is the chosen plan of action for realising goal ψ
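A toy two-stage sketch of the pipeline above; the argument names and the preference criteria are assumptions of mine, intended only to show how defeat plus grounded evaluation yields a consistent goal set and then a chosen plan.

```python
def grounded(arguments, defeat):
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((c, b) in defeat for c in s)
                      for b in arguments if (b, a) in defeat)}
        if nxt == s:
            return s
        s = nxt

# Stage 1: goal arguments.  'work' and 'beach' cannot both be pursued, so the two
# goal arguments attack each other; an obligation-over-desire preference means
# only g_work's attack survives as a defeat.
goal_args = ["g_work", "g_beach"]
goal_defeat = {("g_work", "g_beach")}
goals = grounded(goal_args, goal_defeat)           # {'g_work'}: the consistent goal set

# Stage 2: plan arguments for the surviving goal.  Alternative plans attack each
# other; a cost-based preference leaves only one defeat.
plan_args = ["p_drive", "p_cycle"]
plan_defeat = {("p_cycle", "p_drive")}
plan = grounded(plan_args, plan_defeat)            # {'p_cycle'}: the chosen plan

print(goals, plan)
```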
Challenges to Argumentation Theory
Defining (ArgsG, DefeatG) given (ArgsG, AttackG) and PrefG:
- AttackG between arguments for conflicting goals. When do goals conflict?
  - the goals are incompatible (hang out on beach / be at work)
  - limits on resources preclude their joint realisation
- Defining preferences amongst arguments for goals (PrefG):
  - relative utility of goal states (beach / work)
  - desire and obligation based goals (selfish versus social preference orderings)
  - feasibility of plans for realisation of goals, resource use - requires accounting for argumentation over actions!

Challenges to Argumentation Theory
Defining (ArgsP, DefeatP) given (ArgsP, AttackP) and PrefP:
- AttackP between arguments for alternative plans of action for realising the intended goal state
- Other kinds of attack on arguments for action: e.g., an argument for action α attacked by arguments claiming that α has an undesirable side effect, is costly, etc. (recall value based argumentation)
- Defining preferences (PrefP) amongst arguments in ArgsP:
  - accounting for orderings on values
  - multiple criteria (time, monetary cost, ...)

Challenges to Argumentation Theory: Reasoning about Preferences
Defining (ArgsX, DefeatX) given (ArgsX, AttackX) and PrefX, where X = belief / goal / action:
- Agent adaptability and behavioural flexibility require reasoning about (arguing about) possibly contradictory preferences warranted by different contexts, and by different criteria and sources for valuating argument strength
  - E.g., different sources or criteria may yield conflicting valuations of the relative strength of arguments for beliefs
  - E.g., different contexts may justify social behaviour (preferring arguments for obligation derived goals) or selfish behaviour (preferring arguments for desire derived goals)
  - E.g., different contexts may justify different value orderings and so different preferences amongst arguments for action
Example: Different Criteria Warranting Contradictory Preferences
- A = today will be hot in London since the BBC forecast sunshine
- B = today will be cold in London since the CNN forecast rain
- A and B attack each other
- The BBC are more trustworthy than CNN (A > B):
  - B does not defeat A, since A is preferred to B
  - A defeats B, since B is not preferred to A
- Statistics show that CNN are more accurate than the BBC (B > A): the defeats are reversed
- Different criteria justify two contradictory preference orderings on A and B - resolving the conflict requires argumentation about preferences

Arguments expressing preferences
- A = today will be hot in London since the BBC forecast sunshine
- B = today will be cold in London since the CNN forecast rain
- C = The BBC are more trustworthy than CNN (A > B)
- C is a metalevel argument expressing a preference between other arguments
- How can such arguments be incorporated into an argumentation framework?
- The right result is obtained if C directly attacks B

Arguments expressing preferences
- G = The BBC forecast was for Glasgow
- If A is directly attacked by a winning argument (G) then A is knocked out
- But A's preference over B (i.e. C) should not then knock out B: given G we now want B to be the justified argument

Integrating arguments that express preferences between arguments: arguments attacking attacks
- Recall that B defeats A if B attacks A and A is not preferred to B. Hence:
  - C, expressing a preference for A over B, is intuitively an argument that justifies A defending itself against B's attack, so that B's attack on A does not succeed as a defeat
  - C repels B's attack on A: C attacks B's attack on A
- D = Statistics show that CNN are more accurate than the BBC (B > A)
- But C and D express contradictory preferences and so attack each other
- E = Statistics is a more rational criterion than trustworthiness (D > C)
Extended Argumentation Frameworks*
Recap on Dung frameworks (Args, Attack):
- A ∈ Args is acceptable w.r.t. S ⊆ Args iff for every B ∈ Args s.t. (B,A) ∈ Attack there is a C ∈ S s.t. (C,B) ∈ Attack (C defends A)

Acceptability of Arguments in Extended Argumentation Frameworks
An Extended Argumentation Framework (EAF) is a tuple (Args, Attack, D) where:
- Attack ⊆ Args × Args (as in a Dung argumentation framework)
- D ⊆ Args × Attack
- if (X,(Y,Z)) ∈ D and (X′,(Z,Y)) ∈ D then (X,X′), (X′,X) ∈ Attack

Example:
( Args = {A, B, C, D, E},
  Attack = {(A,B), (B,A), (C,D), (D,C)},
  D = { (C,(B,A)), (D,(A,B)), (E,(C,D)) } )

For an EAF (Args, Attack, D), acceptability is based on defending both arguments and attacks.
- Figures: three small EAFs illustrating cases in which A is, and is not, acceptable w.r.t. a set S

* S. Modgil. An abstract theory of argumentation that accommodates defeasible reasoning about preference. In ECSQARU, pages 648-659, 2007.
* S. Modgil. Technical report: Reasoning about preferences in argumentation frameworks. http://www.dcs.kcl.ac.uk/staff/modgilsa/ ArguingAboutPreferences.pdf
Defining Conflict Free Sets for EAFs
- For a Dung framework, S is conflict free if no A, B ∈ S attack each other
- For an EAF, S is conflict free if, whenever A, B ∈ S and B attacks A, then A does not attack B and there is a C ∈ S such that C attacks B's attack on A
- Example: S = {A, B, C}, where:
  - A is an argument for the action of invading Iraq
  - B attacks A, where B claims that the action is not legal
  - C claims that the value promoted by A (economic self interest) is more important than the value promoted by B (abiding by UN law)
  - A, B and C are arguments that can be consistently held together

Acceptability Semantics for EAFs
Admissible, grounded and preferred extensions are all required to be conflict free. Given (Args, Attack, D), with acceptability and conflict freeness defined for EAFs, let S be a conflict free subset of Args. Then (as for Dung frameworks):
- S is an admissible extension iff every argument in S is acceptable w.r.t. S
- S is a complete extension iff S is admissible and every argument acceptable w.r.t. S is in S
- S is the grounded extension iff it is the smallest complete extension
- S is a preferred extension iff it is a maximal complete extension
- S is a stable extension iff every argument not in S is attacked by an argument in S
- Fundamental results (and all that they imply) for the acceptability semantics of Dung frameworks also hold for extensions of an EAF (e.g., admissible extensions form a complete partial order w.r.t. set inclusion)
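A minimal sketch (my own encoding) of the Iraq example as an EAF, with the conflict-free test just defined.

```python
Args   = {"A", "B", "C"}          # A: invade, B: not legal, C: A's value outranks B's
Attack = {("B", "A")}
D_rel  = {("C", ("B", "A"))}      # C attacks B's attack on A

def conflict_free(S):
    for a in S:
        for b in S:
            if (b, a) in Attack:
                # b may attack a inside S only if a does not attack back and
                # some c in S attacks b's attack on a.
                if (a, b) in Attack or not any((c, (b, a)) in D_rel for c in S):
                    return False
    return True

print(conflict_free({"A", "B", "C"}))   # True: A, B and C can be held together
print(conflict_free({"A", "B"}))        # False: nothing cancels B's attack on A
```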
A Semantics for Flexible and Adaptive Agent Reasoning
- Dung argumentation = a general semantics for non-monotonic reasoning (Δ |~ α iff α is the claim of a justified argument in Args)
- EAFs inherit the fundamental results for Dung frameworks and Dung's level of abstraction: no assumptions are made about how arguments expressing preferences are constructed, or about the criteria that determine preferences. Hence:
- a general semantics for non-monotonic formalisms that accommodate defeasible reasoning about preferences
- such formalisms are required for agent adaptability and flexibility

Example: Reasoning About Goals
- The BOID architecture and programming paradigm derives goals using conditionals (a ⇒M b) or modal rules (a → M b), where M ∈ {Belief, Obligation, Desire, Intention}:
  1) ⇒I go_to_conference
  2) go_to_conference ⇒D close_to_conference
  3) go_to_conference ⇒O cheap_room
  4) close_to_conference ⇒B ¬cheap_room
  5) cheap_room ⇒B ¬close_to_conference
- Prioritised Default Logic (PDL) semantics - candidate goal sets are characterised as PDL extensions
- Prioritisation policies on default representations of the rules for deriving goals equate with agent types, e.g.:
  - a social agent prioritises defaults inferring obligation derived goals over defaults inferring conflicting desire derived goals (cheap room)
  - a selfish agent adopts the reverse prioritisation (close to conference)
- However behaviour depends on context: e.g., default behaviour may be social, but selfish when the remaining budget is high
- The extended semantics underpins behavioural heterogeneity by accommodating arguments that justify one agent type (prioritisation / preference) over another

Extending BOID Rules to Reason About Priorities
BOID rules extended to express priorities on rules (x < y read as: y takes priority over x):
  1) int1: ⇒ go_to_conference
  2) des1: go_to_conference ⇒ close_to_conference
  3) ob1: go_to_conference ⇒ cheap_room
  4) bel1: close_to_conference ⇒ ¬cheap_room
  5) bel2: cheap_room ⇒ ¬close_to_conference
  6) bel3: ⇒ budget_high
  7) soc1: ⇒ des1 < ob1
  8) self1: ⇒ ob1 < des1
  9) def: ⇒ self1 < soc1
  10) excep: budget_high ⇒ soc1 < self1
  11) overide_def: ⇒ def < excep

Extended Argumentation Semantics for Reasoning over Goals
- Goal base G = { γ1, γ2, γ3, ..., γn }
- (Args, Attack, D)
- The claims of the justified arguments in Args = a consistent goal set (the intentions)

Instantiating an EAF
- argument for the desire to be close to the conference = [int1, des1]
- argument for the obligation to book a cheap room = [int1, ob1]
- Given bel1 and bel2, they attack each other

Instantiating an EAF
- argument [bel3, excep] for exceptional selfish behaviour, given 6) bel3: ⇒ budget_high and 10) excep: budget_high ⇒ soc1 < self1
- [bel3, excep] attacks the argument for default social behaviour ([def])
- [overide_def] expresses a preference for the exceptional over the default behaviour
- Figure: [int1,des1] and [int1,ob1] attack each other; [soc1], [self1], [def], [bel3,excep] and [overide_def] resolve the conflict through attacks on arguments and on attacks, as in the EAF construction above
- [int1, des1] is the justified argument for the desire based goal to be close to the conference
Conclusions
- Abstract (Args, Attack), where Args and Attack are defined by some underlying theory Δ in a logic L: Δ |~L α iff α is the claim of a justified argument in Args
- Abstract (Args, Attack, D), where Args, Attack and D are defined by logical formalisms for agent reasoning over beliefs, goals and actions, incorporating metalevel reasoning about context - agent flexibility and adaptability: Δ |~ α iff α is the claim (belief / goal / action) of a justified argument in Args

- S. Modgil. An abstract theory of argumentation that accommodates defeasible reasoning about preference. In ECSQARU, pages 648-659, 2007.
- S. Modgil. Technical report: Reasoning about preferences in argumentation frameworks. http://www.dcs.kcl.ac.uk/staff/modgilsa/ ArguingAboutPreferences.pdf
- S. Modgil. An Argumentation Based Semantics for Agent Reasoning. In: Proc. Languages, methodologies and development tools for multi-agent systems, Sept. 2007, Durham, UK.
Argumentation-based Negotiation
Sanjay Modgil¹ and Leila Amgoud²
¹ Agents and Intelligent Systems Group, King's College London, UK
² French National Research Agency, Toulouse, France

Outline
- Negotiation
  - What is negotiation?
  - Negotiation approaches
  - Modeling negotiation dialogs using argumentation
- Argumentation-based negotiation
  - Communication language
  - Negotiation protocols
  - Agent strategies
- A case study
- Conclusions
Negotiation
What is negotiation?
- Negotiation = a form of interaction in which autonomous agents having different interests/goals try to find a compromise on an issue, called the object of negotiation (Walton and Krabbe, 1995)
- Examples: the price of a car; the city where to go for holidays; the person to employ; the date and the place of a meeting; ...

What is negotiation? (Cont.)
- O = the set of all possible values of the negotiation object
- Negotiation amounts to finding, among the elements of O, the one that corresponds to the solution
- Examples:
  - Allocation of a set R of resources =⇒ O is the set of allocations
  - Fixing the price of a car =⇒ O is the set of possible prices
  - Finding the city where to go for holidays =⇒ O is the set of cities
What is negotiation? (Cont.)
- a1, ..., an = the agents involved in a negotiation
- ≽1, ..., ≽n = the preference relations of these agents over O, with ≽i ⊆ O × O
- Each ≽i is the output of a decision making model

What is negotiation? (Cont.)
- Problem: agents involved in a negotiation may not have the same preference relation on O (i.e. ≽i ≠ ≽j)
- Agents exchange offers/counter-offers, and maybe other information, in order to reach a solution acceptable to all agents
- Agents may need to make concessions
Negotiation approaches
1. Game theoretic approaches
2. Heuristic approaches
3. Argumentation-based approaches

Game theoretic approaches: Basic ideas
- Branch of economics
- Study rational outcomes in multi-party strategic decision making
- Agents are seen as utility maximizers
- Given a protocol =⇒ analyze strategies and optimal outcomes
Game theoretic approaches: Limitations
- Assumption of perfect rationality (each agent knows the space of possible deals, how to evaluate them, and the preferences of the other agents)
- Game theory says nothing about how to compute utility functions
- Agents are only allowed to exchange proposals
- The preference relation ≽i ⊆ O × O is fixed during the negotiation

Heuristic-based approaches: Basic ideas
- Agents do not necessarily know each other's preferences
- Protocols do not prescribe optimal solutions
- Strategy performance is studied empirically

Heuristic-based approaches: Limitations
- Agents are only allowed to exchange proposals
- Agents' utilities or preferences are assumed fixed (i.e. agents cannot influence other agents' preferences, or internal mental attitudes)
Argumentation-based approaches: Basic ideas
- To make explicit the origin of the preference relation on O
- To allow agents to exchange proposals plus arguments, in order to influence the preference relation of an agent
- To define real negotiations
Argumentation-based approaches (Dignum et al. 2003)
- Buyer: Can't you give me this 806 a bit cheaper?
- Seller: Sorry, that is the best I can do. Why don't you go for a Polo instead?
- Buyer: Because I have a big family and I need a big car (a)
- Seller: Modern Polos are becoming very spacious and would easily fit a big family (b)
- Buyer: I didn't know that, let us also look at the Polo then

Argumentation-based negotiation
1. Communication language + Domain language
2. Protocol = the rules of the game
   - who is allowed to say what and when?
   - when does a dialogue terminate? ...
3. Agent strategies within the rules of the protocol
   - e.g. which offer should I make?
   - e.g. what information should I provide?
4. Outcomes
   - one or several possible deals
   - conflict
Protocol + Agent strategies =⇒ Outcome
Communication language
- characterized by locutions, utterances or speech acts, e.g. (Sierra et al, 1998):
  - Propose → making offers
  - Argue → presenting arguments
  - Accept → accepting offers
  - Challenge → denying claims
  - ...
- Examples:
  - Propose(a, b, Price = 200 ∧ Item = Camera, t1): agent a offers agent b a camera for the price of 200 euros
  - Reject(b, a, Price = 200 ∧ Item = Camera, t2): agent b rejects the proposal from a at time t2

Domain language
- used for referring to concepts of the environment, e.g. (Amgoud et al, 2007):
  - L = a logical language
  - Args(L) = the set of all arguments that can be built from L
  - RL ⊆ Args(L) × Args(L) = an attack relation among arguments
  - O = {o1, ..., om} = a set of distinct proposals/offers

Negotiation protocols
- Protocols can be:
  - Explicit: by means of finite state machines (e.g. Sierra et al. 1998), or by means of dialogue games (e.g. Amgoud et al. 2000-2007, McBurney et al. 2003)
  - Implicit: by means of logical constraints expressed as 'if-then' rules (e.g. Kraus et al. 1998, Sadri et al. 2001)
Negotiation protocols (Cont.)
- Different protocols have been defined (for negotiation, etc.)
- It is difficult to compare them, and hence difficult to compare the corresponding dialogue systems
- =⇒ Aim = to identify the different classes of protocols

Negotiation protocols (Cont.) (e.g. Prakken 2005, Amgoud et al, 2006)
A protocol is a tuple π = ⟨L, S, Ag, Reply, Back, Turn, N_Move⟩ where:
- L = a logical language; Arg(L) = the set of arguments
- S = a set of speech acts
- Ag = {a1, ..., an} is the set of agents
- Reply: S → 2^S
- Back ∈ {0, 1}
- Turn: T → Ag, where T = {t1, ..., tk, ... | ti ∈ ℕ, ti < ti+1}
- N_Move: T × Ag → ℕ, with ∀(ti, aj), N_Move(ti, aj) > 0 iff Turn(ti) = aj
Negotiation protocols (Cont.)
Three basic parameters:
1. the turntaking
2. the number of moves per turn
3. the notion of backtracking

Negotiation protocols (Cont.)
- A move m is a pair m = (s, x) s.t. s ∈ S and x ∈ Wff(L) or x ∈ Arg(L)
- M = the set of moves that can be built from ⟨S, L⟩
- Speech returns the speech act (Speech(m) = s); Content returns its content (Content(m) = x)
- Example: (Propose, Price=200)
- A dialogue move M is a tuple ⟨S, H, m, t⟩ s.t.:
  - S ∈ Ag is the speaker, Speaker(M) = S
  - H ⊆ Ag is the hearer, Hearer(M) = H
  - m ∈ M, with Move(M) = m and s.t. WFM(m) = 1
  - t ∈ DM is the target of the move, i.e. the move which it replies to, Target(M) = t; t = ∅ if M does not reply to any other move
- Example: ⟨a, b, (Propose, Price=200), 0⟩
- =⇒ identify the dialogue structure
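A light Python rendering of the move structures just defined; the field names follow the slide, while the types and the dataclass choice are my own.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Move:                  # m = (s, x)
    speech: str              # Speech(m): the speech act
    content: str             # Content(m): a formula or an argument

@dataclass
class DialogueMove:          # M = <S, H, m, t>
    speaker: str             # Speaker(M)
    hearer: str              # Hearer(M)
    move: Move               # Move(M)
    target: Optional[int]    # Target(M): index of the move replied to, None if none

m1 = DialogueMove("a", "b", Move("Propose", "Price = 200 and Item = Camera"), None)
m2 = DialogueMove("b", "a", Move("Reject",  "Price = 200 and Item = Camera"), 0)
```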
Negotiation protocols (Cont.)
- A dialogue subject is a ϕ s.t. ϕ ∈ Wff(L) or ϕ ∈ Arg(L)
- The goal of a dialogue is to assign to its subject ϕ a value v(ϕ) in a domain V s.t.:
  - if ϕ ∈ Wff(L), then v(ϕ) ∈ V = V1 × · · · × Vm
  - if ϕ ∈ Arg(L), then v(ϕ) ∈ V = {acc, rej, und}

Negotiation protocols (Cont.)
A dialogue d on a subject ϕ under π is a sequence d = M1,1, ..., M1,l1, ..., Mk,1, ..., Mk,lk, ... s.t.:
- ϕ ∈ Wff(L) or ϕ ∈ Arg(L)
- ∀Mi,j, i ≥ 1, 1 ≤ j ≤ li, Mi,j ∈ DM
- ∀i, i ≥ 1, Speaker(Mi,j) = Turn(ti)
- ∀i, i ≥ 1, li = N_Move(ti, Speaker(Mi,li))
- if Target(Mi,j) ≠ ∅, then Speech(Move(Mi,j)) ∈ Reply(Speech(Move(Target(Mi,j))))
- ∀j, 1 ≤ j ≤ l1, Target(M1,j) = ∅
- ∀Mi,j, i > 1, Target(Mi,j) = Mi′,j′ s.t.:
  - if Back = 1, then 1 ≤ i′ < i and 1 ≤ j′ ≤ li′
  - if Back = 0, then (i − (n − 1)) ≤ i′ < i and 1 ≤ j′ ≤ li′, where (i − (n − 1)) ≥ 1 and n is the number of agents
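A toy legality checker for a flat, one-move-per-turn, Back = 0 dialogue between two agents, following the conditions above; the Reply table and the speech-act names are assumptions, since the slides leave Reply abstract.

```python
REPLY = {"Offer":  {"Accept", "Refuse", "Offer"},
         "Refuse": {"Why", "Offer"},
         "Why":    {"Argue"},
         "Argue":  {"Argue", "Accept"},
         "Accept": set()}
AGENTS = ["a1", "a2"]

def legal(dialogue, n_agents=2, back=0):
    """dialogue = list of (speaker, speech_act, content, target_index or None)."""
    for i, (speaker, act, _content, target) in enumerate(dialogue):
        if speaker != AGENTS[i % n_agents]:              # Turn: strict alternation
            return False
        if i == 0:
            if target is not None:                       # the first move replies to nothing
                return False
            continue
        if target is None or not 0 <= target < i:        # later moves reply to an earlier move
            return False
        if back == 0 and target < i - (n_agents - 1):    # Back = 0: no backtracking
            return False
        if act not in REPLY[dialogue[target][1]]:        # Reply condition
            return False
    return True

d = [("a1", "Offer", "x", None), ("a2", "Refuse", "x", 0),
     ("a1", "Why", "x", 1),      ("a2", "Argue", "S' entails not x", 2),
     ("a1", "Argue", "S entails x", 3), ("a2", "Accept", "x", 4)]
print(legal(d))   # True
```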
Negotiation protocols (Cont.)
- Outcome: Dπ → V ∪ {?}, s.t. Outcome(d) = ? or Outcome(d) = v(Subject(d))
- Let d1 ∈ Dπ1 and d2 ∈ Dπ2 be two finite dialogues, with DM1, DM2 the sets of dialogue moves of d1 and d2. d1 is equivalent to d2, d1 ∼ d2, iff:
  1. Subject(d1) ≡ Subject(d2)
  2. Outcome(d1) = Outcome(d2)
- Let π1, π2 be two protocols. π1 is equivalent to π2, π1 ≈ π2, iff ∀d1 ∈ Dπ1, ∃d2 ∈ Dπ2 s.t. d1 ∼ d2, and ∀d2 ∈ Dπ2, ∃d1 ∈ Dπ1 s.t. d2 ∼ d1
- If any dialogue conducted under a class Π1 of protocols has an equivalent dialogue under Π2, then we write Π1 ⊆ Π2
- Example (two dialogues with the same subject and outcome):
  - a1: Offer(x), Argue(S, x)
    a2: Accept(x)
  - a1: Offer(x)
    a2: Refuse(x)
    a1: Why refuse(x)?
    a2: Argue(S′, ¬x) (where S′ ⊢ ¬x)
    a1: Argue(S, x) (where S ⊢ x)
    a2: Accept(x)

Classes of protocols
Protocol classes Πxyz are identified by three parameters:
- x = B̄ if Back = 0, x = B if Back = 1
- y = T if Turn requires taking turns, y = T̄ otherwise
- z = S if N_Move allows a single move per turn, z = M otherwise
Classes of protocols (Cont.)
Let n be the number of agents.
1. If n = 2, then
   ΠB̄TS ⊆ ΠB̄T̄M ≈ ΠBTM ≈ ΠBT̄S ≈ ΠB̄TM ≈ ΠB̄T̄S ≈ ΠBTS ⊆ ΠBT̄M
2. If n > 2, then
   ΠB̄TS ⊆ ΠBTS ≈ ΠB̄TM ⊆ ΠBTM ⊆ ΠBT̄S ≈ ΠB̄T̄M ⊆ ΠBT̄M,
   ΠB̄TS ⊆ ΠBTS ≈ ΠB̄TM ⊆ ΠB̄T̄S ⊆ ΠBT̄S ≈ ΠB̄T̄M ⊆ ΠBT̄M.

Agent strategies
1. Which offer/proposal to send/to accept?
2. Which argument to send/to accept?
3. Which move (locution + content) to play?
Which offer/proposal to send/to accept?
- O = Oa ∪ On ∪ Ons ∪ Or (acceptable, negotiable, non-supported and rejected offers)
- To propose the best offer o w.r.t. the agent's preference relation
- To defend o with arguments/counter-arguments
- If o is still rejected by the other agents, to propose a less preferred one

Which argument to send?
- Arguments that will change the status of arguments sent by the other party (Amgoud and Maudet, 2002; Prakken 2005)
- The shortest argument (Amgoud and Maudet, 2002)
- Arguments built from beliefs shared by the other agents (Amgoud and Maudet, 2002)
- To start with the weakest arguments (Explanation → Reward → Threat) (Kraus et al., 1998)
A case study
Each agent is equipped with a theory ⟨A, F, R⟩ where:
- A ⊆ Args(L)
- F: O → 2^A (the arguments supporting each offer)
- R ⊆ RL, with R ⊆ A × A
Communication language: Propose and Argue
Domain language: L ∪ {θ}, O, Args(L), RL
Protocol: ΠBTM, with Ag = {P, C}
A move is a tuple mi = ⟨pi, ai, oi, ti⟩ such that:
- pi ∈ {P, C}
- ai ∈ Args(L) ∪ {θ}
- oi ∈ O ∪ {θ}
- ti ∈ ℕ* is the target of the move, such that ti < i
- Player(mi) = pi, Argument(mi) = ai, Offer(mi) = oi
Agent strategies: best offer; arguments that will change the status of the other party's arguments

A case study (Cont.)
Let m1, ..., mj be a sequence of moves. The theory of agent i at a step t ≤ j is ⟨A^i_t, F^i_t, R^i_t⟩ s.t.:
- A^i_t = A^i_0 ∪ {Argument(m_k) | k = 1, ..., t}, with A^i_0 ⊆ Args(L)
- F^i_t: O → 2^(A^i_t)
- R^i_t = R^i_0 ∪ {(a_k, a_l) | a_k = Argument(m_k), a_l = Argument(m_l), k, l ≤ t, and a_k RL a_l}, with R^i_0 ⊆ RL
- =⇒ the order on O may change
A case study (Cont.)
Let d = m1, ..., ml be a negotiation dialogue.
- d terminates iff ∃j, k < l such that Offer(mj) = Offer(mk) and Player(mj) ≠ Player(mk), or an agent has no move to play
- The outcome of d is Outcome(d) = Offer(ml) iff ∃j < l s.t. Offer(ml) = Offer(mj) and Player(ml) ≠ Player(mj); otherwise, Outcome(d) = θ

A case study (Cont.)
Question: what is an optimal solution?
Definition (Optimal solution)
- Let O be a set of offers, and o ∈ O. The offer o is an optimal solution at a step t > 0 iff o ∈ O^P_{t,a} ∩ O^C_{t,a}

A case study (Cont.)
Theorem (Completeness)
- Let d = m1, ..., ml be a negotiation dialogue. If ∃t ≤ l such that O^P_{t,a} ∩ O^C_{t,a} ≠ ∅, then Outcome(d) ∈ O^P_{t,a} ∩ O^C_{t,a}.
A case study (Cont.)
Theorem (Soundness)
Let d = m1, ..., ml be a negotiation dialogue.
1. If Outcome(d) = o (o ≠ θ), then ∃t ≤ l such that o ∈ O^P_{t,x} ∩ O^C_{t,y}, with x, y ∈ {a, n, ns}.
2. If Outcome(d) = θ, then ∀t ≤ l, O^P_{t,x} ∩ O^C_{t,y} = ∅, ∀x, y ∈ {a, n, ns}.
Example 1 (without argumentation)
- O = {o1, o2}
- P and C are equipped with the same theory ⟨A, F, R⟩ s.t. A = ∅, F(o1) = F(o2) = ∅, R = ∅
- o1, o2 ∈ Ons for both agents
- =⇒ Four possible dialogues

Example 1 (Cont.)
- P: m1 = ⟨P, θ, o1, 0⟩; C: m2 = ⟨C, θ, o1, 1⟩. Outcome(d) = o1
- P: m1 = ⟨P, θ, o1, 0⟩; C: m2 = ⟨C, θ, o2, 1⟩; P: m3 = ⟨P, θ, o2, 2⟩. Outcome(d) = o2
Example 1 (Cont.)
- P: m1 = ⟨P, θ, o2, 0⟩; C: m2 = ⟨C, θ, o2, 1⟩. Outcome(d) = o2
- P: m1 = ⟨P, θ, o2, 0⟩; C: m2 = ⟨C, θ, o1, 1⟩; P: m3 = ⟨P, θ, o1, 2⟩. Outcome(d) = o1

Example 2 (Dynamic theories)
- O = {o1, o2}
- Theory of agent P is ⟨A^P, F^P, R^P⟩ s.t. A^P = {a1, a2}, F^P(o1) = {a1}, F^P(o2) = {a2}, R^P = {(a1, a2)}
- a1 is accepted whereas a2 is rejected; o1 is acceptable whereas o2 is rejected

Example 2 (Cont.)
- Theory of agent C is ⟨A^C, F^C, R^C⟩ s.t. A^C = {a1, a2, a3}, F^C(o1) = {a1}, F^C(o2) = {a2}, R^C = {(a1, a2), (a3, a1)}
- a2 and a3 are accepted whereas a1 is rejected; o1 is rejected whereas o2 is acceptable

Example 2 (Cont.)
- P: m1 = ⟨P, θ, o1, 0⟩
- C: m2 = ⟨C, θ, o2, 1⟩
- P: m3 = ⟨P, a1, o1, 2⟩
- C: m4 = ⟨C, a3, θ, 3⟩
- P: m5 = ⟨P, θ, o2, 4⟩
- At step 4, the theory of P is A^P = {a1, a2, a3}, R^P = {(a1, a2), (a3, a1)}
- a1, which was accepted, becomes rejected; a2, which was rejected, becomes accepted
- o2 is thus an acceptable offer, and Outcome(d) = o2
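A simplified re-run of Example 2 in Python (my own encoding): an offer is treated as acceptable for an agent when one of its supporting arguments is in the grounded extension of that agent's current theory, which collapses the finer offer statuses O_a, O_n, O_ns, O_r into a single test.

```python
def grounded(arguments, attacks):
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((c, b) in attacks for c in s)
                      for b in arguments if (b, a) in attacks)}
        if nxt == s:
            return s
        s = nxt

def acceptable_offers(theory):
    args, support, attacks = theory
    g = grounded(args, attacks)
    return {o for o, supp in support.items() if supp & g}

# Agent P before move m4: a1 (for o1) attacks a2 (for o2).
P = ({"a1", "a2"}, {"o1": {"a1"}, "o2": {"a2"}}, {("a1", "a2")})
# Agent C additionally knows a3, which attacks a1.
C = ({"a1", "a2", "a3"}, {"o1": {"a1"}, "o2": {"a2"}},
     {("a1", "a2"), ("a3", "a1")})

print(acceptable_offers(P))   # {'o1'}: P opens with o1
print(acceptable_offers(C))   # {'o2'}: C counter-proposes o2

# After C plays a3 (move m4), P's theory absorbs it and the order on O changes.
P_after = ({"a1", "a2", "a3"}, P[1], {("a1", "a2"), ("a3", "a1")})
print(acceptable_offers(P_after))   # {'o2'}: P concedes, Outcome(d) = o2
```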
Conclusions
Negotiation
Argumentation-based negotiation
A case study
Conclusions
Conclusions
An argumentation-based approach for negotiation
2
The role of argumentation in negotiation dialogues is
analyzed
3
The different classes of protocols are known
Argumentation-based negotiation
Argumentation-based negotiation
A case study
Conclusions
References
1
Negotiation
Negotiation
A case study
- L. Amgoud, Y. Dimopoulos, and P. Moraitis. A unified and general framework for argumentation-based negotiation. AAMAS 2007.
- L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue, and negotiation. ECAI 2000.
- L. Amgoud and H. Prade. Explaining qualitative decision under uncertainty by argumentation. AAAI 2006.
- P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 1995.
- N. R. Jennings, P. Faratin, A. R. Lomuscio, S. Parsons, and C. Sierra. Automated negotiation: Prospects, methods and challenges. International Journal of Group Decision and Negotiation, 2001.
- A. Kakas and P. Moraitis. Adaptive agent negotiation via argumentation. AAMAS 2006.
- S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements through argumentation: a logical model and implementation. Artificial Intelligence, 1998.
- S. Parsons and N. R. Jennings. Negotiation through argumentation: a preliminary report. ICMAS 1996.
- I. Rahwan, S. D. Ramchurn, N. R. Jennings, P. McBurney, S. Parsons, and E. Sonenberg. Argumentation-based negotiation. Knowledge Engineering Review, 18(4):343-375, 2003.
- J. Rosenschein and G. Zlotkin. Rules of Encounter: Designing Conventions for Automated Negotiation Among Computers. MIT Press, Cambridge, Massachusetts, 1994.
- K. Sycara. Persuasive argumentation in negotiation. Theory and Decision, 28:203-242, 1990.
- F. Tohmé. Negotiation and defeasible reasons for choice. Symposium on Qualitative Preferences in Deliberation and Practical Reasoning, pages 95-102, 1997.