
Lecture Notes on Software Engineering, Vol. 1, No. 2, May 2013
A Comparative Analysis of Trust Models for Multi-Agent
Systems
Vimala Balakrishnan and Elham Majd

Abstract—Multi-agent systems broker interactions in distributed and heterogeneous environments; trust models therefore play a critical role in determining how interactions take place between multiple agents. This paper compares and analyzes some recent trust models based on five important components: architecture, dimension, initial trust, reputation, and risk. Overall, the results indicate that risk and initial trust are the weakest components, whereas dimension, reputation, and architecture are the strong components of the existing trust models. This analysis helps to introduce the standard components of trust and reputation models for e-commerce multi-agent systems, and to identify the weaknesses and strengths of the models based on these standard components.

Index Terms—Architecture, dimension, initial trust, reputation, risk.
I. INTRODUCTION
Trust is a multi-dimensional entity which concerns various
attributes such as reliability, dependability, security and
honesty, among others. Trust can be basically defined as “a
particular level of the subjective probability with which an
agent assesses that another agent or group of agents will
perform a particular action” [1]. The concept of trust has been
studied extensively in a myriad of domains, particularly in
multi-agent systems (Marsh, Sporas, etc.), whereby agents
collaboratively interact with each other [2].
Dealing with trust between humans and agents in
multi-agent systems is particularly important in e-commerce,
where attracting customers is crucial and winning their trust
therefore becomes central. As agents have only partial
knowledge of their environment and peers, trust plays a vital
role in safeguarding these interactions [3].
Different trust models have been suggested in the literature,
such as MARSH [4], SPORAS [5], and ReGreT [6], among
others. These models differ from one another in several
standard components, such as the architecture on which they
are built or the mechanism used to evaluate the risk of a
transaction. A comparative analysis of these models based on
some of the identified components is therefore needed to gain
a better understanding of the trust models. This paper
compares and analyzes some of the most representative trust
models based on five standard components, namely,
architecture, dimension, initial trust, risk and reputation.

The following section introduces the standard components
before presenting the trust models. Finally, the results of the
comparison and analysis are discussed.
Manuscript received December 12, 2012; revised April 18, 2013.
The authors are with the Faculty of Computer Science and Information
Systems, University of Malaya, Kuala Lumpur, Malaysia (e-mail:
[email protected], [email protected]).
DOI: 10.7763/LNSE.2013.V1.41
II. STANDARD COMPONENTS

Table I below provides the standard components and their definitions.

TABLE I: THE STANDARD COMPONENTS

Component      | Definition
Architecture   | The identification and description of the system components and their inter-relationships
Initial trust  | Trusting a new agent for the first time
Dimension      | Sources considered for evaluating trust
Risk           | Evaluation of the negative consequences of each interaction
Reputation     | A collection of other agents' trust values about a specific agent
Architecture describes a system, its components, and their
inter-relationships. Generally, it can be categorized as
centralized or distributed. A centralized architecture is based
on a central agent, whereas in a distributed architecture the
agents themselves keep track of the other agents' activities.
The centralized approach is not deemed appropriate for a
dynamic environment, as the network node that houses the
central data may not be accessible all the time [5]. In contrast,
in a distributed architecture the user models are maintained
locally by the agents. It is not necessary to reveal personal
information to a central server, and agents also communicate
with one another to collect information or find resources and
experts in order to pursue their users' goals [7].
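As an illustration of this architectural distinction, the following minimal Python sketch (not taken from any of the surveyed models; the class and method names are our own) contrasts a single central registry of ratings with purely local, per-agent storage.

# Illustrative sketch (not from any surveyed model): the same rating interface
# backed by a single central registry versus purely local, per-agent storage.
from abc import ABC, abstractmethod


class TrustStore(ABC):
    @abstractmethod
    def record(self, rater: str, ratee: str, score: float) -> None: ...

    @abstractmethod
    def trust_of(self, rater: str, ratee: str) -> float | None: ...


class CentralRegistry(TrustStore):
    """Centralized: one node holds every rating; if it is unreachable, no data is available."""

    def __init__(self) -> None:
        self._ratings: dict[tuple[str, str], float] = {}

    def record(self, rater: str, ratee: str, score: float) -> None:
        self._ratings[(rater, ratee)] = score

    def trust_of(self, rater: str, ratee: str) -> float | None:
        return self._ratings.get((rater, ratee))


class LocalStore(TrustStore):
    """Distributed: each agent keeps only its own view, never revealed to a central server."""

    def __init__(self, owner: str) -> None:
        self.owner = owner
        self._ratings: dict[str, float] = {}

    def record(self, rater: str, ratee: str, score: float) -> None:
        if rater == self.owner:          # only the owner's own experience is stored locally
            self._ratings[ratee] = score

    def trust_of(self, rater: str, ratee: str) -> float | None:
        return self._ratings.get(ratee) if rater == self.owner else None


registry = CentralRegistry()
registry.record("alice", "seller42", 0.8)
print(registry.trust_of("alice", "seller42"))  # 0.8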
Trust can be computed from various perspectives, and the
classification of the trust value is known as dimension. A
trust relationship can involve multiple dimensions according
to the type of interactions between two agents [8], [9]. Most
of the existing computational trust models identify different
dimensions for evaluating trust. For instance, the ReGreT [6]
and Trust Computation Model (TCM) [10] models adopt
grouped views, whereby trust evaluation and decision making
are accomplished by a group of agents instead of an individual
agent.
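The idea of a multi-dimensional trust value can be illustrated with a small sketch; the dimension names, weights and neutral fallback value below are illustrative assumptions rather than part of any surveyed model.

# Hypothetical sketch of a per-dimension trust record; dimension names, weights
# and the neutral fallback of 0.5 are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class TrustProfile:
    scores: dict[str, float] = field(default_factory=dict)  # dimension -> score in [0, 1]

    def overall(self, weights: dict[str, float]) -> float:
        """Weighted combination over whichever dimensions have actually been observed."""
        seen = [d for d in weights if d in self.scores]
        if not seen:
            return 0.5  # no evidence at all: fall back to a neutral value
        total = sum(weights[d] for d in seen)
        return sum(weights[d] * self.scores[d] for d in seen) / total


profile = TrustProfile({"service_quality": 0.9, "delivery_time": 0.6})
print(round(profile.overall({"service_quality": 0.7, "delivery_time": 0.3}), 2))  # 0.81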
Initial trust relates to trusting a new agent which is
attempting to interact. A new agent in the environment may
cause a dilemma, especially for systems which base trust
solely on reputation. Trust models should allocate an
arbitrary level of trust, generally a median value, to this new
agent [7]. However, the provision of a proper initial trust or
reputation value to each newcomer should be done carefully,
as it will have a direct effect on the selection of profitable
providers [2]. One possible solution is to incorporate a
reputation brokering mechanism in which each user can
customize its strategies according to the risk implied by the
reputation values of its potential counterparts [11]. Other
solutions include using pseudonymous identities [5] and trust
learning by observation whereby the task of trust learning is
transferred to a statistical method, such as Bayesian learning
[12].
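The Bayesian-learning option mentioned above can be illustrated with a simple Beta-prior sketch; the neutral prior of one pseudo-success and one pseudo-failure, giving a 0.5 starting value for a newcomer, is an assumption for illustration and not the formula of any surveyed model.

# Illustrative Beta-prior sketch of "trust learning by observation"; the neutral
# prior (alpha = beta = 1) and the update rule are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class BetaTrust:
    alpha: float = 1.0  # prior count of satisfactory outcomes
    beta: float = 1.0   # prior count of unsatisfactory outcomes

    def observe(self, satisfactory: bool) -> None:
        if satisfactory:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def expected_trust(self) -> float:
        # posterior mean; with no observations this is 0.5, a median-like starting value
        return self.alpha / (self.alpha + self.beta)


newcomer = BetaTrust()
print(newcomer.expected_trust)            # 0.5 before any interaction
for outcome in (True, True, False):
    newcomer.observe(outcome)
print(round(newcomer.expected_trust, 2))  # 0.6 after three observations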
Reputation is a total measure of the trust placed in a service
provider by all the agents in a network [7]. When an agent has
to select the most promising interlocutors, it should be capable
of allocating a proper weight to reputation in order to
determine reliability [9]. Reputation values can be categorized
as endogenous and exogenous. An endogenous reputation
value relates to the concept of reciprocity, meaning an agent
trusts its friends more than strangers. In contrast, an exogenous
reputation value accepts the reputation ratings presented by
unknown agents [13].
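The difference between the two categories can be sketched as follows; the helper function, the neutral fallback of 0.5 and the plain averaging are illustrative assumptions, not the computation used by any particular model.

# Hypothetical sketch: endogenous reputation counts only ratings from agents the
# evaluator already trusts (its "friends"); exogenous reputation accepts ratings
# from any agent. The plain averaging and 0.5 fallback are illustrative only.
def reputation(ratings: dict[str, float], friends: set[str] | None = None) -> float:
    """Mean rating of a provider; restricting to friends gives the endogenous case."""
    if friends is not None:                     # endogenous: reciprocity-based
        ratings = {a: r for a, r in ratings.items() if a in friends}
    if not ratings:
        return 0.5                              # no usable evidence
    return sum(ratings.values()) / len(ratings)


ratings_about_provider = {"alice": 0.9, "bob": 0.8, "mallory": 0.1}
print(round(reputation(ratings_about_provider), 2))                            # exogenous: 0.6
print(round(reputation(ratings_about_provider, friends={"alice", "bob"}), 2))  # endogenous: 0.85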
Finally, risk refers to any potential negative consequences
that may arise during interactions. It is known that any
association or cooperation is less likely when high risks are
involved, unless the benefits outweigh the risks. Hence, trust
motivates entities to accept a risk when they interact with
others [14]. Interestingly, not many trust models have
emphasized risk, except for a few. For instance, uncertainty
was identified as an essential factor in risk management and
the determination of reliability in [15]. As a rule of thumb,
before initiating any interaction, agents should evaluate the
risk of the interaction and the trustworthiness of the entities
to be engaged.
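This rule of thumb can be expressed as a simple expected-value check; the function and the example figures below are an illustrative sketch, not a formula taken from the surveyed models.

# Illustrative expected-value check before an interaction; the decision rule and
# the example figures are assumptions, not part of any surveyed model.
def should_interact(trust: float, benefit: float, potential_loss: float) -> bool:
    """Interact only if the expected gain outweighs the expected loss."""
    expected_gain = trust * benefit
    expected_loss = (1.0 - trust) * potential_loss
    return expected_gain > expected_loss


print(should_interact(trust=0.9, benefit=10.0, potential_loss=50.0))  # True:  9.0 > 5.0
print(should_interact(trust=0.5, benefit=10.0, potential_loss=50.0))  # False: 5.0 < 25.0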
III. TRUST MODELS
This section describes some of the most representative
trust models, focusing only on their main characteristics.
One of the first trust models proposed was MARSH, which
models trust in three dimensions: basic, general and
situational trust [4]. Basic trust reflects an agent's general
disposition to trust, where good experiences lead to a greater
disposition to trust and vice versa. General trust is the trust
one agent has in another without taking into account any
particular situation, whereas situational trust is the amount of
trust an agent has in another when a specific situation is taken
into account. Marsh also recognized the risk involved in
trusting another agent; this risk-based approach combines
knowledge about either local or global information of the
agents involved. One of the advantages of this model is the
way in which trust values can be modified.
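A minimal sketch of how the three levels might be combined is given below; the multiplicative combination of general trust with the utility and importance of the situation paraphrases the spirit of Marsh's situational trust and is meant only as an illustration, not as his exact formalism.

# Minimal sketch in the spirit of Marsh's model: situational trust obtained by
# scaling general trust by the utility and importance of the situation. The
# multiplicative form paraphrases his approach and is illustrative only.
def situational_trust(general_trust: float, utility: float, importance: float) -> float:
    """Trust in a partner for one specific situation."""
    return general_trust * utility * importance


# An agent trusted 0.8 in general, in a moderately useful but important situation:
print(round(situational_trust(general_trust=0.8, utility=0.5, importance=0.9), 2))  # 0.36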
SPORAS was introduced to improve online reputation
models [5]. It is meant for a loosely connected environment
in which agents share the same interest. The reputation value
is calculated by aggregating users' opinions, and only the
most recent rating between any two agents is considered
when gathering the rating values. SPORAS offers a new
approach to measure the reliability of each interaction based
on the standard deviation of reputation [16]. It is one of the
rare models which considered reliability in its rating method.
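The two ideas, recency-weighted updates and spread-based reliability, can be sketched as follows; this simplification (including the learning rate of 0.3) is an assumption and is not the published SPORAS update formula.

# Hedged sketch of the two SPORAS ideas described above: updates that favour the
# most recent rating, and reliability derived from the spread of ratings. This
# simplification and the learning rate are assumptions, not the SPORAS formula.
import statistics


def update_reputation(current: float, new_rating: float, learning_rate: float = 0.3) -> float:
    """Move the reputation toward the newest rating so that recent evidence dominates."""
    return current + learning_rate * (new_rating - current)


def reliability(ratings: list[float]) -> float:
    """A larger spread in the ratings yields a less reliable reputation value."""
    if len(ratings) < 2:
        return 0.0
    return 1.0 / (1.0 + statistics.stdev(ratings))


rep = 0.5
for r in (0.9, 0.8, 0.85):
    rep = update_reputation(rep, r)
print(round(rep, 3), round(reliability([0.9, 0.8, 0.85]), 3))  # approx. 0.727 0.952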
HISTOS was later introduced as an improvement to
SPORAS [3]. It is based on the principle that an agent trusts
its friends more than strangers; hence it provides a simple
solution to deal with unreliable opinions. Basically, HISTOS
reduces the risk of interactions with a good agent by
receiving reliable recommendations from friends.

The Trust Computation Model (TCM) is based on a set of
factors that are required to be known prior to making a trust
decision [10]. These factors include information about other
agents' knowledge bases and interactions during the current
task. A key feature of this model is that it considers how much
knowledge one should have about the trustee agents in order
to make a trust decision. TCM defines direct trust based on
three concepts: familiarity, similarity and past experience,
whereas indirect trust is defined based on recommendations.

Finally, ReGreT [6] is a decentralized system in which
social relations among agents play a critical role [17]. ReGreT
manages reputation in three different dimensions: individual,
social and ontological interactions. Additionally, its reputation
mechanism includes three specialized reputation types,
differentiated by their information sources: witness,
neighborhood, and system reputation. After every interaction,
each agent rates its partner's performance, and the ratings are
saved in a local database. An agent uses the stored ratings to
evaluate another agent's trust by querying its local database.
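The local-ratings idea shared by HISTOS and ReGreT, where an agent prefers its own experience and otherwise falls back on ratings from trusted friends (witnesses), can be sketched as follows; the class and method names are ours and the plain averaging is an illustrative simplification of both models.

# Hypothetical sketch of the local-ratings idea shared by HISTOS and ReGreT:
# an agent consults its own stored ratings first (direct experience) and only
# then asks trusted friends for witness ratings. Names and the plain averaging
# are illustrative, not taken from either model.
class RatingBook:
    def __init__(self) -> None:
        self._local: dict[str, list[float]] = {}   # ratee -> my own past ratings
        self._friends: list["RatingBook"] = []     # agents whose opinions I trust

    def rate(self, ratee: str, score: float) -> None:
        self._local.setdefault(ratee, []).append(score)

    def add_friend(self, friend: "RatingBook") -> None:
        self._friends.append(friend)

    def trust_in(self, ratee: str) -> float | None:
        own = self._local.get(ratee)
        if own:                                    # direct experience takes priority
            return sum(own) / len(own)
        witness = [w for f in self._friends if (w := f.trust_in(ratee)) is not None]
        return sum(witness) / len(witness) if witness else None


me, friend = RatingBook(), RatingBook()
me.add_friend(friend)
friend.rate("seller42", 0.9)
print(me.trust_in("seller42"))  # 0.9, obtained as a witness rating from a friend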
IV. COMPARISONS AND DISCUSSION

The trust models described in the previous section are compared and analyzed based on the five standard components. The results are summarized in Table II. Generally, it can be concluded that risk and initial trust are the weakest components, whereas dimension, reputation and architecture are the strong components of these models.

TABLE II: THE COMPARATIVE ANALYSIS OF TRUST MODELS

Model    | Architecture | Dimension                           | Initial Trust                  | Reputation | Risk
1 MARSH  | Distributed  | Basic, general, and situational     | N/A                            | N/A        | The importance of the agent's competence level
2 SPORAS | Centralized  | Indirect trust                      | Limitation in reputation level | Exogenous  | N/A
3 HISTOS | Centralized  | Indirect trust                      | Limitation in reputation level | Endogenous | N/A
4 TCM    | Distributed  | Direct and indirect trust           | N/A                            | N/A        | N/A
5 ReGreT | Distributed  | Individual, social, and ontological | N/A                            | Endogenous | N/A
Most of the models are based on a distributed architecture,
except for HISTOS and SPORAS, and are therefore more
suitable for open multi-agent systems. We believe these two
architectures can be integrated so that previous activities are
recorded by the agents as well as by a central server.
As for the dimensions, they can be divided into three
aspects, namely, services, information sources, and the
number of agents participating in an interaction. Models such
as MARSH and ReGreT considered the services that should
be provided by trustee agents, using situational and
ontological dimensions, respectively. The information-sources
aspect concerns the sources from which information is
collected for calculating the trust and reputation values of
provider agents. Dimension can also be represented based on
the interaction between trustee and trustor agents; for
example, TCM and ReGreT emphasize the number of agents
involved in computing trust values in order to make a
decision about interactions.
One notable observation is that none of the models
emphasized initial trust towards newcomers as a dimension of
the trust value. More studies are required on the process of
assigning initial trust values, especially when a newcomer
agent can provide a critical and beneficial service to the
customer agent.
HISTOS and SPORAS define a minimum level of reputation
ratings for newcomers, which are updated after each
interaction. However, SPORAS does not allow the reputation
rating of a newcomer to exceed the reputation ratings of other
previous users.
As for reputation, ReGreT and HISTOS evaluate reputation
using the endogenous approach, in which the personalized
reputation value of an agent is based on the group it belongs
to. This reputation mechanism can be applied in highly
connected communities. The other trust models compute
reputation values based on information from any agent in the
system, that is, using the exogenous approach.
Among these trust models, risk received the least attention.
The element of risk is a very critical factor for each
interaction; hence, there is a need to incorporate more
consideration for risk in designing future trust models. As
shown in Table II, MARSH is the only model that presents a
simple approach to manage the risk of each interaction.
REFERENCES
[1] L. Xin, Trust Beyond Reputation: Novel Trust Mechanisms for Distributed Environments, Nanyang Technological University, 2011.
[2] F. G. Mármol and G. M. Pérez, “Towards pre-standardization of trust and reputation models for distributed and heterogeneous systems,” Computer Standards & Interfaces, vol. 32, pp. 185-196, 2010.
[3] T. Bhuiyan, A. Josang, and Y. Xu, “Trust and reputation management in web-based social network,” Web Intelligence and Intelligent Agents, pp. 207-232, 2010.
[4] S. P. Marsh, “Formalising trust as a computational concept,” Ph.D. dissertation, Dept. of Computing Science and Mathematics, University of Stirling, United Kingdom, 1994.
[5] G. Zacharia and P. Maes, “Trust management through reputation mechanisms,” Applied Artificial Intelligence, vol. 14, pp. 881-907, 2000.
[6] J. Sabater and C. Sierra, “REGRET: reputation in gregarious societies,” in Proc. the Fifth International Conference on Autonomous Agents, Montreal, Quebec, Canada, 2001.
[7] S. Nusrat and J. Vassileva, “Recommending services in a trust-based decentralized user modeling system,” Advances in User Modeling, vol. 7138, pp. 230-242, 2012.
[8] D. Rosaci, G. M. L. Sarné, and S. Garruzzo, “Integrating trust measures in multiagent systems,” International Journal of Intelligent Systems, vol. 27, pp. 1-15, 2012.
[9] D. Rosaci, “Trust measures for competitive agents,” Knowledge-Based Systems, vol. 28, pp. 38-46, 2011.
[10] M. Indiramma and K. Anandakumar, “TCM: A Trust Computation Model for collaborative decision making in multiagent system,” International Journal of Computer Science and Network Security, vol. 8, pp. 69-75, 2008.
[11] G. Zacharia, A. Moukas, P. Boufounos, and P. Maes, “Dynamic pricing in a reputation brokered agent mediated knowledge marketplace,” in Proc. the 33rd Hawaii International Conference on System Sciences, 2000, pp. 1-10.
[12] J. Sabater and C. Sierra, “Review on computational trust and reputation models,” Artificial Intelligence Review, vol. 24, pp. 33-60, 2005.
[13] A. Medić, “Survey of computer trust and reputation models—The literature overview,” International Journal of Information and Communication Technology Research, vol. 2, 2012.
[14] V. Cahill et al., “Using trust for secure collaboration in uncertain environments,” IEEE Pervasive Computing, vol. 2, pp. 52-61, 2003.
[15] G. Lu, J. Lu, S. Yao, and Y. J. Yip, “A review on computational trust models for multi-agent systems,” The Open Information Science Journal, vol. 2, pp. 18-25, 2009.
[16] Y. Jung, M. Kim, A. Masoumzadeh, and J. B. D. Joshi, “A survey of security issue in multi-agent systems,” Artificial Intelligence Review, vol. 37, pp. 239-260, 2012.
[17] Z. Noorian and M. Ulieru, “The state of the art in trust and reputation systems: a framework for comparison,” Journal of Theoretical and Applied Electronic Commerce Research, vol. 5, pp. 97-117, 2010.

V. Balakrishnan received her Ph.D. in the field of Ergonomics in 2009 from Multimedia University, Malaysia. Both her Master's and Bachelor's degrees were from University of Science, Malaysia. She is currently affiliated with the Faculty of Computer Science and Information Technology, University of Malaya, as a Senior Lecturer. Most of her research work is in the fields of data engineering, opinion mining, information retrieval and health informatics.

Dr. Balakrishnan is also a member of the Medical Research Support (Medicres) group and the Global Science and Technology Forum.
E. Majd received her Master's and Bachelor's degrees in 2007 and 2009, respectively, from Tehran, Iran. She is currently pursuing her Ph.D. in the field of mobile agents and e-commerce at the Faculty of Computer Science and Information Technology, University of Malaya, Malaysia. She worked as a lecturer at Islamic Azad University, Tehran, Iran from 2009 to 2011.

Ms. Majd is also one of the reviewers of the journal Education and Information Technologies.