Co-Utility
PROTOCOLS FOR THE GREATER GOOD
J. Domingo-Ferrer, O. Farràs, S. Martínez,
D. Sánchez, J. Soria-Comas
http://crises-deim.urv.cat/co-utility/
Co-Utility Intuition

A protocol is a sequence of actions prescribed for an interaction
between agents.

In a distributed environment, for a protocol to be effective, the
agents involved must be willing to follow it (self-enforcement).

Self-enforcing protocols are not possible in the presence of agents
that act arbitrarily. We restrict our attention to rational agents.

While self-enforcement is essential, there are additional properties
that can make a protocol more interesting.

We focus on protocols that promote mutually beneficial
collaboration between agents. We refer to them as co-utile
protocols.
Scenario Set-up

Game theory is the natural framework to model interactions
between rational agents.

To deal with sequential protocols in which the current state is known,
perfect-information games are the natural model.
We view a perfect-information game in extensive form as a tree where:
• Internal nodes are decision-making points, labeled with the name of
the agent making the decision
• Outgoing edges of a node represent the actions available to the
agent making the decision
• Leaf nodes are labeled with the utility each agent gets if the
leaf is reached
Protocol

A protocol prescribes a way of traversing the game tree.
Given a tree representing a perfect-information game G, a protocol is
either a path from the root to a leaf or a subtree from the root to several
leaves. In the latter case, alternative edges are labeled with the
probability of being chosen.
[Figure: game tree representing all possible interactions between the
agents; the protocols prescribing (C,C) (in red) and (D,D) (in green)
are highlighted.]
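As a concrete sketch, the game tree and a protocol traversing it can be written in a few lines of Python. The nested-dict representation and the payoff values (a standard Prisoner's-Dilemma-style assignment) are illustrative assumptions, not taken from the original figure.

```python
# Illustrative extensive-form game tree as nested dicts.  Internal nodes
# name the deciding agent; leaves carry one utility per agent.  The
# payoffs are assumed (Prisoner's-Dilemma-style), since the original
# figure is not reproduced here.

def leaf(*utilities):
    return {"utilities": utilities}

def node(agent, actions):
    return {"agent": agent, "actions": actions}

# Agent 1 moves first; agent 2 observes her move (perfect information).
game = node(1, {
    "C": node(2, {"C": leaf(3, 3), "D": leaf(0, 5)}),
    "D": node(2, {"C": leaf(5, 0), "D": leaf(1, 1)}),
})

def outcome(tree, protocol):
    """Traverse the tree along the actions a deterministic protocol
    prescribes (here: one action per agent) and return the leaf utilities."""
    while "utilities" not in tree:
        tree = tree["actions"][protocol[tree["agent"]]]
    return tree["utilities"]

print(outcome(game, {1: "C", 2: "C"}))  # (3, 3)
print(outcome(game, {1: "D", 2: "D"}))  # (1, 1)
```

A randomized protocol would instead attach probabilities to alternative edges; the deterministic case suffices to illustrate the traversal.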
Self-Enforcing Protocol


Effective protocols in a distributed environment need agents to stick
to them.

Rational agents deviate if they expect better outcomes from deviating.

No agent should be able to increase her utility by deviating, provided
that the other agents stick to the prescribed behavior.
For self-enforcement, sticking to the action prescribed by the
protocol must be an equilibrium of the remaining subgame.
A protocol P on G is self-enforcing if P is a subgame-perfect equilibrium of G.
In the example game, (D,D) is the only self-enforcing protocol.
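Checking self-enforcement in a perfect-information game amounts to backward induction: at each node the deciding agent picks the action that maximizes her own utility in the remaining subgame. A minimal sketch with assumed Prisoner's-Dilemma-style payoffs, under which (D,D) indeed comes out as the subgame-perfect play:

```python
# Backward induction on a nested-dict game tree.  Representation and
# payoffs are illustrative assumptions, not the original figure.

def leaf(*utilities):
    return {"utilities": utilities}

def node(agent, actions):
    return {"agent": agent, "actions": actions}

game = node(1, {
    "C": node(2, {"C": leaf(3, 3), "D": leaf(0, 5)}),
    "D": node(2, {"C": leaf(5, 0), "D": leaf(1, 1)}),
})

def backward_induction(tree):
    """Return (path, utilities) of the subgame-perfect equilibrium play."""
    if "utilities" in tree:
        return [], tree["utilities"]
    agent = tree["agent"]
    best = None
    for action, child in tree["actions"].items():
        path, utils = backward_induction(child)
        # Each agent maximizes her own utility in her subgame.
        if best is None or utils[agent - 1] > best[1][agent - 1]:
            best = ([action] + path, utils)
    return best

print(backward_induction(game))  # (['D', 'D'], (1, 1))
```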
Co-Utility (I)

Promoting collaboration between agents is another interesting property
that protocols can have. We call such protocols co-utile.

A group of agents follows a collaborative protocol if there is no
alternative protocol whereby all of them could get an outcome at least
as good and at least one of them a strictly better outcome (Pareto
optimality).
A self-enforcing protocol is co-utile if the outcome is Pareto-optimal and
the utility of each participating agent is strictly greater than the utility she
gets when not participating.

Both self-enforcement and Pareto optimality are essential to optimum
mutual benefit.

If a protocol is not self-enforcing, it will not be followed.

If it is not Pareto-optimal, better collaboration alternatives exist.
Co-Utility (II)
(D,D) is the only self-enforcing protocol
(D,D) is Pareto-dominated by (C,C)
↓
There is no co-utile protocol for this game
(F,S) is self-enforcing and Pareto-optimal
↓
(F,S) is co-utile
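Both verdicts can be checked mechanically from the leaf utility vectors alone. In this sketch the payoff values for the two games are assumptions chosen to reproduce the situations described above; the functions take the outcome of an already self-enforcing protocol and test the remaining co-utility conditions.

```python
# Co-utility test on outcome vectors.  `outcomes` is the set of leaf
# utility vectors of the game; `baseline` is what each agent gets when
# not participating.  All payoff values below are illustrative.

def pareto_optimal(u, outcomes):
    """u is Pareto-optimal if no outcome weakly dominates it with at
    least one strict improvement."""
    return not any(
        all(v[i] >= u[i] for i in range(len(u)))
        and any(v[i] > u[i] for i in range(len(u)))
        for v in outcomes
    )

def co_utile(u, outcomes, baseline):
    """Assumes u is already known to be self-enforcing (an SPE outcome)."""
    return pareto_optimal(u, outcomes) and all(
        ui > bi for ui, bi in zip(u, baseline)
    )

# First game: (D,D) -> (1,1) is self-enforcing but Pareto-dominated by (3,3).
pd_outcomes = [(3, 3), (0, 5), (5, 0), (1, 1)]
print(co_utile((1, 1), pd_outcomes, baseline=(1, 1)))   # False

# Second game: the self-enforcing outcome (F,S) is also Pareto-optimal.
fs_outcomes = [(2, 3), (1, 1), (0, 0)]
print(co_utile((2, 3), fs_outcomes, baseline=(0, 0)))   # True
```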
Strict Co-Utility

An especially strong case of mutual collaboration occurs when
each agent reaches her maximum utility.
A protocol is strictly co-utile if each agent reaches her maximum utility.

It is easy to check that a strictly co-utile protocol is co-utile,
that is, it satisfies the following conditions:

Self-enforcement

Pareto optimality
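Strict co-utility is the easiest condition to test: compare the protocol's outcome with each agent's maximum over all leaves. A sketch with illustrative utility vectors (self-enforcement is assumed to have been verified separately):

```python
# Strict co-utility check: each agent reaches her maximum utility over
# all outcomes of the game.  Values are illustrative assumptions.

def strictly_co_utile(u, outcomes):
    """True if u attains every agent's maximum utility simultaneously."""
    maxima = [max(v[i] for v in outcomes) for i in range(len(u))]
    return list(u) == maxima

outcomes = [(2, 3), (1, 1), (0, 0)]
print(strictly_co_utile((2, 3), outcomes))  # True: both maxima reached
print(strictly_co_utile((1, 1), outcomes))  # False
```

An outcome passing this test is automatically Pareto-optimal, which is why strict co-utility implies co-utility.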
Selfish Behavior and Co-Utility

A co-utile protocol, being self-enforcing, will be adhered to by
selfish agents.

It would be nice if we could guarantee that selfish behavior always
leads to co-utility. However, that is not the case, even if a co-utile
protocol exists.

In some cases this guarantee can be given:
In a perfect-information game where all the agents maximize their
utilities together in the same leaf nodes (and only in them), selfish
behavior is co-utile.
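The condition can be phrased as: every agent's set of utility-maximizing leaves is the same. A sketch over illustrative leaf utility vectors:

```python
# Check whether all agents attain their maximum utility at exactly the
# same set of leaves.  The utility vectors are illustrative assumptions.

def common_maximizers(outcomes):
    """True if every agent's set of maximizing leaves coincides."""
    n = len(outcomes[0])
    argmax_sets = [
        {k for k, v in enumerate(outcomes)
         if v[i] == max(o[i] for o in outcomes)}
        for i in range(n)
    ]
    return all(s == argmax_sets[0] for s in argmax_sets)

print(common_maximizers([(3, 3), (0, 5), (5, 0), (1, 1)]))  # False
print(common_maximizers([(4, 4), (1, 2), (0, 0)]))          # True
```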
Application:
Anonymous Web Search (I)
When submitting a query to a web search engine (WSE), a user is telling
the WSE about her interests.

WSEs usually build profiles of the interests of their users,
e.g. for targeted advertising.

While a profile can already be considered a threat to privacy, the fact is
that WSEs keep track of the exact queries submitted by each user (e.g.
see the web search history in Google accounts).

The searches performed may reveal sensitive information about a user.

See "A face is exposed for AOL searcher no. 4417749".
Two main strategies have been proposed to protect privacy:

Hiding actual queries within a set of fake queries.
This only protects against profiling: the actual queries are still
linked to the user.

Delegating query submission to a community of peers.
Application:
Anonymous Web Search (II)

Since it offers the more exhaustive protection, we focus on query
delegation.

Single-hop scenario

There is a community of N peers.

When an agent wants to submit a query to the WSE, she
chooses between:


Submitting the query herself

Asking another peer to submit the query on her behalf
When one of the peers receives a query for submission, she can
either:

Submit the query to the WSE and return the answer to the originator

Reject the query submission
Application:
Anonymous Web Search (III)

Single-hop protocol and co-utility:

The initiator forwards the query to another peer if that improves her
privacy

The recipient submits the query if that helps her flatten her profile

(Forward, Submit) is co-utile

The single-hop protocol protects privacy against the WSE but not
against other peers (query forwarding reveals the initiator’s interests)

We use a multi-hop protocol to protect against WSEs and other
peers

The receiving agent cannot determine the originator
Application:
Anonymous Web Search (IV)

Multi-hop protocol for anonymous web search


The query initiator can either:

Submit the query to the WSE herself

Forward the query to another peer for submission
The query recipient can either:

Submit the query to the WSE and return the response

Forward the query to another peer and return the response
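A toy simulation of the multi-hop routing decision (not the authors' implementation): each peer holding the query submits it with some probability p and otherwise forwards it to a random peer. The value of p and the uniform peer choice are assumptions made for illustration; the point is that a recipient cannot tell whether the sender is the initiator or an intermediate hop.

```python
# Hypothetical multi-hop routing simulation.  Each peer holding the
# query submits it with probability p, otherwise forwards it to another
# randomly chosen peer.  p = 0.5 is an assumed parameter.

import random

def route_query(initiator, peers, p=0.5, rng=random):
    """Return the chain of peers the query passes through; the last
    element is the peer that actually submits the query to the WSE."""
    chain = [initiator]
    while rng.random() >= p:            # forward with probability 1 - p
        others = [x for x in peers if x != chain[-1]]
        chain.append(rng.choice(others))
    return chain

random.seed(1)
peers = list(range(10))
chain = route_query(0, peers)
print(chain)  # the submitter is chain[-1]; the WSE never learns chain[0]
```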
Application:
Ride Sharing (I)


The most basic scenario consists of two agents:

Each agent owns a car and wants to go from location A to location B

Each agent is interested in minimizing the travel time and the expenses
The common way to proceed is for each agent to travel by her own
means


The travel time is T and the expenses are E.
Travelling together is co-utile. By travelling together the agents can:

Share the travel expenses (gas, tolls).
Effect: the travel cost for each agent is divided (e.g. E/2).

Make use of high-occupancy vehicle (HOV) lanes.
Effect: the travel time is decreased.
Application:
Ride Sharing (II)

In the above scenario, the best option for the agents is to share the ride



More complex scenarios include:

The origin of the agents is not the same

The destination of the agents is not the same

Agents are reluctant to travel with strangers
Having different origins and/or destinations increases the travel
time and expenses for the car owner:

Sharing the ride can still be co-utile, but additional utility in the
form of a compensation for the additional work may be needed to attain
co-utility.

Dealing with the reluctance to travel with strangers may require
introducing reputation mechanisms to mitigate concerns.
Application:
Ride Sharing (III)


The matching between car owner and passenger determines how
satisfactory the ride is for the agents:

Travel time for both is fixed (approximately) by the matching of agents

The profile/reputation of agents is available at the time of matching agents

The net expenses depend on the compensation that the passenger
contributes
Among all these factors, the compensation given to the car owner is the
only adjustable parameter. As a function of the other factors:

The car owner has a function that determines the required compensation

The passenger has a function that determines the compensation she is
willing to offer
Application:
Ride Sharing (IV)
We set up a “ride market” where agents post their offers/demands for
rides.

• Each car owner posts her offer to the market. The offer includes the
conditions that suitable passengers must fulfill as well as the expected
compensation.
• Each passenger posts her demand to the market. The demand includes the
conditions that suitable car owners must fulfill as well as the offered
compensation.
• For a given class of rides (the rides satisfying some conditions), the
market keeps track of two different compensations:
• The expected compensation, which is the minimum of the compensations
required by the car owners.
• The offered compensation, which is the maximum of the compensations
offered by the passengers.

The offered compensation is usually less than the expected compensation.
If they become equal, the corresponding car owner offer and passenger
demand are matched and removed from the market.
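The matching rule can be sketched directly from the description above. The concrete compensation values are illustrative; we also match when the best offer strictly exceeds the lowest requirement, a small assumption beyond the "become equal" wording.

```python
# Sketch of the ride-market matching rule for one class of rides.
# A match fires when the maximum compensation offered by a passenger
# reaches the minimum compensation required by a car owner.

def try_match(owner_required, passenger_offered):
    """owner_required / passenger_offered: lists of compensations.
    Returns the matched pair (required, offered) and removes both
    from the market, or None if no match is possible yet."""
    if not owner_required or not passenger_offered:
        return None
    expected = min(owner_required)       # cheapest car owner
    offered = max(passenger_offered)     # most generous passenger
    if offered >= expected:
        owner_required.remove(expected)
        passenger_offered.remove(offered)
        return expected, offered
    return None

owners = [12, 15, 20]       # required compensations posted by car owners
passengers = [8, 10, 12]    # offered compensations posted by passengers
print(try_match(owners, passengers))  # (12, 12): owner at 12 meets passenger at 12
print(try_match(owners, passengers))  # None: next-best pair is 15 vs 10
```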
Application:
Crowdsourcing (I)

Crowdsourcing means outsourcing a task to an anonymous group
of self-interested individuals by means of an open call to the crowd
offering rewards for work

The crowdsourcing marketplace acts as the link between the crowd
of workers and the crowdsourcers of tasks.

Thus, it facilitates mutually beneficial cooperation (co-utility)
Application:
Crowdsourcing (II)


In crowdsourcing, the marketplace:

Is used by the crowdsourcers to publish tasks (including associated
requirements and pay offered)

Is used by the workers to contact crowdsourcers that have published
suitable tasks.
The crowdsourcing marketplace originates co-utile interactions:

Each agent acts according to her interests (selfish behavior)
→ the interaction is self-enforcing.

The tree/game of possible interactions guarantees that if a transaction
happens, it is strictly co-utile.
Application:
Traceable P2P Content Distribution (I)

Fingerprinting digital contents is an option to protect the rights of the
authors

A different and imperceptible mark is embedded in each distributed
content

The embedded mark can be used to identify the content buyer
Most fingerprinting schemes in the literature are centralized. Hence, the
distribution of such fingerprinted content is basically unicast.

Peer-to-peer content distribution is a more cost-effective and scalable
way of distributing content.

We would like to design a P2P fingerprinting scheme that makes P2P
content distribution compatible with the protection of authors'/owners'
rights.

We design a P2P fingerprinting scheme that is co-utile for honest
agents.

Application:
Traceable P2P Content Distribution (II)


Our P2P anonymous fingerprinting scheme is DNA-inspired:

Initially, there are N fingerprinted copies of the content

The content is downloaded using the P2P network by assembling fragments
from multiple parents. This produces a new content that is unique in its
combination of fingerprinted fragments
A correlation test can be used to trace any traitor
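A toy version of the DNA-inspired assembly and the correlation test. Each copy is modeled as a vector of fragment identifiers; the real scheme embeds imperceptible marks in the content, which this sketch abstracts away.

```python
# Illustrative sketch: a child assembles her copy by taking each
# fragment from a randomly chosen parent.  The resulting combination of
# fingerprinted fragments is (with overwhelming probability) unique, so
# a leaked copy correlates perfectly only with its true owner.

import random

def assemble(parents, rng=random):
    """Child copy: fragment i is taken from a random parent's fragment i."""
    n = len(parents[0])
    return [rng.choice(parents)[i] for i in range(n)]

def correlation(copy_a, copy_b):
    """Fraction of fragments the two copies share."""
    return sum(a == b for a, b in zip(copy_a, copy_b)) / len(copy_a)

random.seed(7)
# Three initial fingerprinted copies of a 100-fragment content.
seeds = [[(owner, i) for i in range(100)] for owner in range(3)]
child = assemble(seeds)

# The child's copy only partially matches each parent, but matches
# itself perfectly -- this is what the correlation test exploits.
print([round(correlation(child, s), 2) for s in seeds])
print(correlation(child, child))  # 1.0
```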
Application:
Traceable P2P Content Distribution (III)

Downloading fragments from several parents is co-utile for honest
agents (both parents and children)

The parent is not interested in sending her entire copy of the
content to any single child


Otherwise, an honest parent could be held guilty for any dishonest
behavior of the child
The child is not interested in receiving her copy from a single parent

Otherwise, an honest child could be held guilty for any dishonest
behavior of the parent
Reputation Management (I)

Reputation management can be used to address some issues that
appear in the design of co-utile protocols

In ride sharing, an agent may be reluctant to travel with a stranger


Reputation management can be used to provide evidence of the past
behavior and mitigate concerns
In the P2P content distribution, a downloader may not be willing to
act as seed

Reputation management can be used to keep track of the contribution
of each agent and exclude pure downloaders
Reputation Management (II)

Being aimed at distributed systems, the reputation mechanism must
also be distributed

Being aimed at making protocols co-utile, the reputation
mechanism must also be co-utile

We have designed a co-utile reputation management mechanism
based on the EigenTrust reputation mechanism:

To thwart the creation of fictitious agents, newcomers start with the
lowest reputation

To thwart self-promotion, the reputation of an agent is managed by a
set of randomly selected agents

To motivate agents to participate, the influence of an agent’s opinions
increases with her participation
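A much-simplified, centralized sketch of the EigenTrust-style aggregation underlying the mechanism. It omits the co-utile ingredients listed above (randomly selected score managers, participation-weighted opinions, low initial reputation for newcomers) and simply iterates normalized local trust to a global reputation vector; the local trust values are made up.

```python
# Centralized toy of EigenTrust-style global reputation: repeatedly
# aggregate row-normalized local trust until it stabilizes.  This is a
# sketch, not the distributed co-utile mechanism described above.

def normalize(row):
    s = sum(row)
    return [v / s for v in row] if s else [1 / len(row)] * len(row)

def eigentrust(local_trust, iterations=100):
    """local_trust[i][j]: how much peer i trusts peer j.
    Returns the global reputation vector (entries sum to 1)."""
    c = [normalize(row) for row in local_trust]
    n = len(c)
    t = [1 / n] * n                      # start from a uniform distribution
    for _ in range(iterations):
        t = [sum(t[i] * c[i][j] for i in range(n)) for j in range(n)]
    return t

# Peer 2 has misbehaved and receives little local trust from the others.
local = [
    [0, 4, 1],   # peer 0's local trust in peers 0, 1, 2
    [4, 0, 1],
    [3, 2, 0],
]
rep = eigentrust(local)
print([round(r, 3) for r in rep])  # peer 2 ends with the lowest reputation
```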
Further Co-Utility Applications

Social networks

Environmental agreements

Collaborative microdata anonymization

Yours!
Bibliography

Domingo-Ferrer, J., Sánchez, D., Soria-Comas, J. (2016)
Self-enforcing collaborative protocols with mutual help, Progress in
Artificial Intelligence (to appear)

Domingo-Ferrer, J., Soria-Comas, J., Ciobotaru, O. (2015)
Co-utility: self-enforcing protocols without coordination mechanisms,
in IEEE IEOM 2015, pp. 1-7

Sánchez, D., Farràs, O., Martínez, S., Domingo-Ferrer, J., Soria-Comas, J. (2015)
Self-enforcing protocols via co-utile reputation management (submitted)

Turi, A.N., Domingo-Ferrer, J., Sánchez, D., Osmani, D. (2015)
Co-utility: conciliating individual freedom and common good for the
crowd-based business model (submitted, 2nd revision)

Megías, D., Domingo-Ferrer, J. (2014)
Privacy-aware peer-to-peer content distribution using automatically
recombined fingerprints, Multimedia Systems 20(2):105-125

Soria-Comas, J., Domingo-Ferrer, J. (2015)
Co-utile collaborative anonymization of microdata, in MDAI 2015,
LNCS 9321, pp. 192-206

Sánchez, D., Martínez, S., Domingo-Ferrer, J. (2016)
Technical comment on “Unique in the shopping mall: on the
reidentifiability of credit card metadata”, Science, to appear