Agents with Personality: Human Operator Assistants

Dr. Robert S. Woodley1, Michael Gosnell1, Dr. Jennie J. Gallimore2, and Dr. Sasanka Prabhala3

1 21st Century Systems, Inc., 199 E 4th St, Suite B, Ft. Leonard Wood, MO 65473
  {robert.woodley, mike.gosnell}@21csi.com
2 Wright State University, 207 Russ Engineering Center, Dayton, OH 45435
  [email protected]
3 Intel Corporation, 20270 NW Amberglen Ct, Beaverton, OR 97006
  [email protected]
Keywords: Software Agents, Personality Traits, Human-Agent Interface
ABSTRACT
The Future Combat Systems (FCS) concept for the Department of
Defense (DoD) has made unmanned systems a prime focus.
Currently, effectively operating a single Unmanned Aerial Vehicle
(UAV) requires a team of several individuals. The FCS concept
will change this so that a single operator controls multiple UAVs.
This increase in operator responsibility brings an increase in
cognitive load, leading to the need for
automated assistants. This paper introduces a novel concept for
human operator assistance based on adding personality traits to
software agents. It has been shown that operator interfaces that
provide positive feedback and decision support in a personable
fashion lead to better performance than systems that provide only
factual information. This work shows how personality at the UAV
level and at the human interface level can be combined to produce
an interactive system to control the actions of multiple UAVs,
managing them without the need for intricate plans in place for the
UAVs to follow. This paper shows the origins of the work, the
architecture and concept, and mock-up depictions of the control
interface with descriptions of how the operator will be able to
interact with the UAVs.
1. INTRODUCTION
Human Computer Interface (HCI) research has been primarily
focused on making it easier for the human to use the power of the
computer. One tool used for HCI is that of the intelligent software
agent (Decker and Sycara, 1997; Giampapa et al, 2000). A
software agent can be seen as a software program that responds
to specific stimuli in a ‘rational’ manner. Agents are often
autonomous and can be as simple as sensor monitors that send an
alert when some threshold is exceeded, or as complex as being a
proxy for human intelligence or subject matter expertise (Weiss,
1998). 21CSI’s pioneering agent framework called the Agent
Enhanced Decision Guide Environment (AEDGE®) provides a
platform to build intelligent agent systems (Petrov and Stoyen,
2000).
The use of Unmanned Aerial Vehicles (UAVs) and Unmanned
Combat Aerial Vehicles (UCAVs) for military operations
continues to increase and these unmanned systems are now
considered an integral part of the armed forces (Defense Science
Board, 2004). Other Unmanned Vehicles (UMVs) include Space
Maneuverable Vehicles (SMVs), Unmanned Emergency Vehicles
(UEVs), Remotely Operated Underwater Vehicles (ROUV), and
Unmanned Ground Vehicles (UGV). These systems are considered
to be semi-autonomous, requiring human operators to supervise
and provide input. The design of these systems includes not only
the avionics for the vehicle itself, but the control stations that allow
interaction between the vehicle and the operator. The design of the
interaction between the UMV and the human is of critical
importance. Research has shown that introducing automation into
systems can cause adverse effects on overall system performance
including such problems as vigilance decrements (Heilman, 1995),
out-of-the-loop performance problems (Endsley and Garland,
1999), trust biases (Parasuraman et al. 1993), complacency
(Mosier and Skitka, 1996), reliability (Mosier and Skitka, 1996),
skill degradation (Mooij and Corker; 2002), and attention biases
(Sarter and Woods, 1995; Mosier and Skitka, 1996).
In the design and development of semi-autonomous systems,
researchers are focusing on the development of intelligent software
agents, for example agents that can sense information and select
best routes (Karim and Heinze, 2005), or UCAV swarms that
track and kill targets (Price, 2006). The development of multi-coordinated intelligent agents allows UCAVs to communicate with
each other. UCAV systems with computer agents can be
considered collaborative in nature and require the dispersion of
system knowledge and awareness among all collaborating agents
including humans.
The concept that we are presenting in this paper is the combination
of social agent swarm intelligence, autonomous agents with
personality, and a human-agent interface with personality. The
resulting system would control multiple UAVs with a common
mission goal in such a way that the UAVs are able to adapt their
flight paths according to a stochastic construct yet still maintain
information feedback to the operator. The stochastic nature of the
UAV action is controlled via the personality parameters the
operator sets. Thus, to an uninformed observer of the system, the
UAV appears to behave in an unpredictable manner, making it
much harder to elude or shoot down, while the operator retains full
knowledge of the UAV’s goals. The paper is organized as follows:
Section 2 contains the background research that has led to the
development of the combined system concept; two independent
research lines from Wright State University and 21st Century
Systems, Inc. are detailed. Section 3 presents the concept and
architecture. Section 4 presents a mock-up interface with
descriptions of how the operator would interact with the UAVs;
full simulations were not yet ready for this publication. Section 5
draws conclusions about the system and its benefits.
2. BACKGROUND RESEARCH
2.1. Swarming Agents with Personality Traits
In Woodley (2006), a concept known as Ants on the AEDGE
(AoA) presented a swarming ant algorithm incorporating
biologically inspired intelligence within each ant, such that the
system could predict the most likely location of a
non-deterministically moving target. The target movement was based
on six human-like traits (see Table 1). The basis for using swarm
intelligence to predict human actions (targets) is that it can use
probability and human-like traits to give a better prediction than
random chance. Biologically inspired intelligence was used in the
form of ant colony intelligence (Dorigo et al, 1996, 1997, 1999;
Gordon, 2004). For this initial study, a narrowed domain was used
in that the target was only moving in two dimensions with known
terrain. A set of six parameters representing human personality
(in the defined domain space) was used; these parameters can be
adjusted to give distinct personalities to the agents (Montgomery and Randall,
2002). The agents with differing personalities move in slightly
different patterns from each other. As new information is
discovered (such as a spotter report giving a new location of the
target), the agents that are closest to the target will have their traits
strengthened as new agents are spawned while those farthest away
will have their traits reduced. This quasi-genetic algorithm then
converges toward the real traits of the target, thereby allowing
a prediction of the human operator’s actions (Lin et al, 1993).
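The trait-strengthening and respawning step described above can be sketched as follows. This is a minimal illustration under assumed data structures (ant records carrying a position and a trait dictionary), not the actual AoA implementation:

```python
import random

TRAITS = ["braveness", "alertness", "stealthiness",
          "aggressiveness", "cunning", "leadership"]

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def respawn_generation(ants, report_pos, n_spawn=50, noise=0.05):
    """One quasi-genetic update: ants nearest the newly reported target
    position seed the next generation; the farthest are discarded."""
    ants.sort(key=lambda ant: distance(ant["pos"], report_pos))
    parents = ants[: max(1, len(ants) // 4)]   # keep the closest quartile
    children = []
    for _ in range(n_spawn):
        p = random.choice(parents)
        # mutate parent traits slightly, clamped to [0, 1]
        traits = {t: min(1.0, max(0.0, p["traits"][t] + random.gauss(0, noise)))
                  for t in TRAITS}
        children.append({"pos": report_pos, "traits": traits})
    return children
```

Over repeated sensor reports, the surviving trait values drift toward those that best explain the target's observed movement.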
Several agent classes were created to perform the various tasks in a
structure called the Ant Agent Analysis (AAA) module. To handle
the task of data inferencing, a concept known as belief fusion
(Jøsang 2002) was used.
2.1.1. Functional Description
There are four types of agents in the AAA, which are activated in
sequence as needed: ant agents, analysis agents, probability agents,
and ant track analysis agents.

2.1.1.1. Ant agents
When an ant is called, the current state of the environment is given
to the ant, which initializes all the environment variables. At the
same time, the other agents in the system are activated to provide
analysis for the ant agent. From this information, the ant forms a
goal of what part of the terrain it wants to investigate and plots a
path to get there (Schoonderwood et al, 1996). This path is then
sent back to the analysis agents, which provide feedback on the
feasibility and quality of the selected path (Bullnheimer et al,
1999). When a feasible path is found, the ant moves along the path,
leaving a pheromone trail.

2.1.1.2. Analysis agents
The analysis agent helps to evaluate and investigate local
terrain/topological information along a circular area of coverage
(Reimann et al, 2002; Maniezzo and Colorni, 1998). This agent
gets all the local, raw data it needs from the Environment Model
(EM) via the ant agents when it is triggered. The information is
processed by considering several factors, such as Observation,
Cover and concealment, Obstacles, Key terrain, and Avenue of
approach (OCOKA) and Cross Country Movement (CCM).

The analysis agent is the main link between the probabilistic
parameters and the environmental parameters. The environmental
model contains the OCOKA and CCM information in the form of
rankings for each factor at a particular grid location. In the current
prototype, this information is hard-coded, but it would be relatively
easy to develop an automated rank assignment tool. Each of the
probabilistic parameters sets a range of possible conditions that the
agent is allowed to investigate. For example, suppose an agent is
assigned a high braveness factor, allowing movement into risky
areas. However, also suppose it has a high alertness parameter,
such that it is likely to be very aware of the location of the sensor
that may detect it. The resulting fusion of the two parameters would
drive the agent to move toward areas of low observability. The
combination of all six parameters produces a non-trivial potential
map from which the Ant Agent must generate a path. Furthermore,
by using a quasi-genetic algorithm, certain paths can be trimmed
from the set of possible paths, allowing the algorithm to better
locate the target.

2.1.1.3. Probability agents
The probability agent injects randomness into the ant motion. The
randomness creates variation in the movements of different ants. In
this way, a large area may be covered by the ant swarm, allowing
for the consideration of multiple movements that the target may
take (Doerner et al, 2004). It also emulates the changes a real target
may use to confuse a tracking algorithm. In this case, human-like
traits using probabilistic parameters are used, as shown in Table 1.
Each probability agent is assigned to a particular ant; however, the
group of probability agents all communicate with each other. This
allows the distribution of the parameters to be monitored, such that
the ants are not biased in their motion (Di Caro and Dorigo, 1998).
The probability parameters are specific values within [0, 1] which
indicate the degree to which each trait in Table 1 is exhibited.

Table 1. Ant agent probabilistic parameters

• Braveness – to determine the risk level of the target
• Alertness – to determine the awareness of the target (e.g., whether
  it recognized the presence of the reconnaissance vehicle)
• Stealthiness – to determine how likely the target is to seek cover
• Aggressiveness – to determine how directly the target wants to
  achieve its goal
• Cunning – to determine how likely the target is to change its
  parameters
• Leadership – to determine how likely the target is to follow
  previous pheromone trails

2.1.1.4. Ant track analysis agents
The main purpose of the Ant Track Analysis Agents (ATAA) is to
determine whether the ant swarm has adequately covered the
terrain for possible motion of the target. The ATAA receive the
plans from all the ants in the swarm indicating what areas are
intended to be covered and what areas are actually covered. The
ATAA will determine if an ant is needed to cover a particular zone
that falls within the ability of the target. Adequacy is determined
by means of a density coverage estimate (Stützle and Hoos, 2000).
The ATAA attempts to keep uniform coverage of the possible area
into which the target may move. The result of the uniform coverage
is that the ants will congregate in the areas that are most likely to
contain the target. When an area is determined to be under-covered,
the ATAA will initiate the Ant Agent. The agent takes the
information about where the ant is needed and passes the
information to all the needed agents (Babaoglu et al, 2002;
Dasgupta, 2003).
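A density coverage check of the kind the ATAA performs might look like the following sketch. The grid-cell representation and the density floor are assumptions for illustration; the paper does not specify the actual estimator:

```python
from collections import Counter

def under_covered_cells(planned_paths, reachable_cells, min_density=2):
    """Count how many planned ant paths touch each reachable grid cell
    and flag cells whose coverage falls below a density floor."""
    density = Counter()
    for path in planned_paths:
        for cell in set(path):   # a path visiting a cell twice counts once
            density[cell] += 1
    return [c for c in reachable_cells if density[c] < min_density]
```

Each flagged cell would then trigger the ATAA to initiate a new Ant Agent aimed at that zone.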
2.1.2. AoA Result Summary
Results are shown in prototype examples, illustrating that it is
possible to have a computer learn where the target is moving. The
direct military use is that of enemy intent. A set of probabilistic
personality parameters may be tuned via sensed information to
accurately predict the actions of the enemy target. So, given
situation reports and sensor information of an enemy mobile
missile launcher (MML) driver, for example, it may be possible to
predict where that MML will be in a given amount of time. The
concept is to create a swarm of intelligent agents tuned to slightly
different values of unknown parameters in the domain of the
agent’s knowledge. Given the large space of possible actions,
predicting those actions would be intractable for a human observer.
However, as new information becomes available, the parameters
may be tuned such that the likelihood of possible actions is
reduced, making the problem of predicting the adversary’s actions
tractable for human observers.
The agents in the AoA concept can be programmed to cover the
maximum range of the target vehicle to show the extent to which
the target may move. This, however, would be just a blind
estimate; the complexity of the algorithm would be wasted. The
real power of the ants is when there is additional information
available. The AoA technology is able to incorporate esoteric
information such as the aggressiveness of the driver of a MML in
determining where the target will move. Furthermore, the ants
learn new information with each sensor report to better profile the
target. The technology can incorporate asset data for the target
such as depots and cities. The technology can incorporate the
hunches of the operator by allowing him to add information in
places where he thinks the target will go; he can also adjust the
probability parameters to match how he thinks the target will
behave. The operator can view the likely locations in
time up to a point where the entropy has become too large for any
useful meaning. The AoA concept is a tool that works for and with
the operator to predict ground movements for multiple targets.
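The entropy cutoff mentioned above can be illustrated with a short sketch. Treating each ant's current cell as a sample from the predicted-location distribution is our assumption for illustration, not the paper's stated method:

```python
import math
from collections import Counter

def location_entropy(ant_cells):
    """Shannon entropy (bits) of the ants' predicted-location
    distribution; once this grows past a chosen bound, the forecast
    horizon is considered exhausted (hypothetical criterion)."""
    counts = Counter(ant_cells)
    n = len(ant_cells)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

When all ants agree on one cell the entropy is zero; as they spread uniformly over more cells it rises, signaling that the prediction has lost useful meaning.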
Figure 1. The ant pattern showing large group dispersal

Figure 2. Refined estimate of MML location after sensor update

Figure 3. Ant trails as the group heads for the stealth area

The screen captures shown are from the prototype software. Figure
1 shows the ant pattern indicating a large group moving toward the
most likely stealth location, with some ants dispersing in other
directions. The green arc represents the boundary of the stealth
area. The last detection occurred an extended amount of time ago,
indicated by the large dispersal of ants. Figure 2 shows that the
sensor has detected the MML and the system has removed ant
trails that are no longer valid. The ant algorithm has therefore
learned important information about the target and has refined the
parameters to provide a better estimate of the target’s track. Figure
3 shows the ant trails as the ants head into the stealth area. The red
diamond represents the last detected location of the MML. Notice
the ant that has changed direction; this could be due to a cunning
parameter change. The current simulation is limited by the number
of ants it allows. Future renditions will have many more ants,
providing larger coverage.

2.2. Collaborative Agents with Personality for Supervisory Control
Gallimore and Prabhala have focused on human-machine
collaboration during the supervisory control of UMVs (Gallimore
and Prabhala, 2006; Prabhala and Gallimore, 2005). They
postulated that developing computer agents with personality may
enhance collaboration. Important questions that need to be
answered are: (a) is collaboration more effective if the machine
agent is given personality, and (b) how do we make
human-machine communication more similar to human-human
communication? In other words, how do we model factors such as
personality, facial expression, posture, nonverbal cues, direction of
gaze, and vocal cues, which contribute to the degree of social
presence in face-to-face communication between humans, in
human-machine interaction?
Gallimore and Prabhala (Gallimore and Prabhala, 2006; Prabhala
and Gallimore, 2005) developed an approach to operationally
define personality characteristics that can be modeled into
computer agents such that they elicit the perception of personality
when humans interact with them during a collaborative task. The
research was conducted in three phases. The first phase was to
identify the actions, language, and behaviors that human subjects
indicate give rise to their perceptions of personality during
collaborative tasks. This requires models of
personality. There is significant research on the development of
personality trait models. The goal of a trait model is to find a
small number of independent dimensions (factors), or
characteristics also known as traits (e.g., extraversion vs.
introversion), that account for as much variation in personality as
possible. A well-established trait model is the Big Five Factor Model
(Goldberg, 1990).
The Big Five Factor Model uses five factors that are considered
central traits to personality. They are: I. Extraversion vs.
Introversion, II. Agreeableness, III. Conscientiousness, IV.
Emotional Stability vs. Neuroticism, and V. Intellect or Openness.
To define the central traits more accurately, each central trait is
subdivided into six sub-traits or facets (see Gallimore and Prabhala
(2006) for details). The Big Five Factor Model was utilized for this
research.
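As an illustration only (the model proper subdivides each central trait into six facets, not single scalars), a Big Five profile could be encoded and compared as follows:

```python
from dataclasses import dataclass, astuple

@dataclass
class BigFiveProfile:
    """One scalar per central trait, 0.0 (low pole, e.g. introversion)
    to 1.0 (high pole, e.g. extraversion). Illustrative encoding only."""
    extraversion: float
    agreeableness: float
    conscientiousness: float
    emotional_stability: float
    intellect: float

def profile_distance(a, b):
    """Euclidean distance between two personality profiles."""
    return sum((x - y) ** 2 for x, y in zip(astuple(a), astuple(b))) ** 0.5
```

A distance measure like this gives one crude way to say two modeled personalities sit at "opposite ends of the continuum."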
The second phase was to develop computer agents with personality
within the complex UCAV domain and determine if human
operators perceived personality in the agents. The third phase was
to determine if there would be a difference in performance when
users interacted with agents during a UCAV supervisory control
task.
2.2.1. Phase I: Identifying Actions, Language, and Behaviors
During Phase I, human subjects were asked to identify actions,
language, and behaviors which represented each personality
sub-trait in the Big Five Factor Model. Subjects provided their
impressions of either 1) computer characters in a computer game,
2) real team members they had worked with on a project for at least
three months (one academic quarter), or 3) what they would expect
in an ideal team member (Prabhala and Gallimore, 2005). Responses
were gathered from 72 people. Various actions, language, and
behaviors were presented to the subjects and they gave their
impressions for the 6 sub-traits related to the central trait
extroversion. (See Prabhala and Gallimore (2005) for additional
sub-trait data.)
2.2.2. Phase II: Developing Computer Agents with Personality
The actions, language, and behaviors identified in Phase I were
categorized into modeling attributes and communication types
(verbal vs. non-verbal). Because personality traits of the Big Five
Factor Model are measured between the two ends of a continuum
(e.g., extraversion vs. introversion, or friendly vs. unfriendly), there
are theoretically a large number of combinations of personality
types in humans. However, creating computer agents with enough
differences to create many distinct personalities becomes difficult
because behavior can be subtle.

As a starting point, two agents were developed on the extreme ends
of the continuum: Computer Agent Personality (CAP) A and
CAP-B. Refer to Gallimore and Prabhala (2006) for additional details on
how the computer agent personalities were modeled. As an
example of how each agent may differ, consider the trait
extroversion vs. introversion. If a computer agent wants to draw
the attention of the human operator, an extroverted computer agent
may provide very obvious visual indicators, use assertive verbal
phrases (e.g., telling the person to pay attention), or use physical
contact (tapping the person on the shoulder). On the opposite
extreme, an introverted computer agent would not use physical
contact or make assertive verbal phrases, but rather would provide
simple visual indicators or would make verbal alerts less often.

This example points out that a computer agent must have multiple
ways of interacting with human agents. The agents created in this
study communicated with humans via presentation of visual,
auditory, and tactile information, a multi-modal approach. To
provide tactile input, a tactile vest was designed using 8 pager
motors placed at different locations on the torso. The agents were
not given any visual facial or body characteristics, to avoid
impressions based on stereotypes. Future efforts will incorporate
facial features.

A discrete simulation was developed to allow human subjects to
interact with the two computer agents, CAP-A and CAP-B, in a
UCAV supervisory control task. There were many differences
between the two agents. For example, CAP-A greets the human
operator using the operator’s name in a friendly tone, whereas
CAP-B greets the human operator by just saying hello in a
monotone voice. The no-personality condition gives no verbal
greeting. It is important to note that the simulation events were
identical (i.e., targets to kill, etc.); the differences in personality
were based on how the computer agent interacted with the human
agent via visual, auditory, and tactile communication. CAP-A was
modeled to be high in extroversion, agreeableness,
conscientiousness, intellect, and emotional stability (i.e., low
neuroticism). CAP-B was modeled to be lower on each of these
dimensions.

Figure 4. Snapshot of the UCAV control station in a SEAD mission

The simulation allowed human agents (subjects), in collaboration
with computer agents, to supervise UCAVs in a Suppression of
Enemy Air Defenses (SEAD) mission. The SEAD mission required
detection, location, identification, and destruction of enemy air
defenses and consisted of 4 UCAVs traveling from the base
location along individual predetermined flightpaths. The flightpaths
were made up of waypoints connected by lines, as illustrated in
Figure 4. Waypoints are destinations on the map panel, and each
UCAV moves from one waypoint to the next along the lines
connecting the waypoints. Waypoints can be used to set the
UCAV’s airspeed and altitude as well as its heading. Airspeeds,
altitudes, and flightpaths were all pre-programmed before the
mission began, but could be changed by the user at any time during
the mission. The user interacted with computer agents in the
simulation and rated agent personalities following the mission
based on the Big Five Factor Model. Results of this phase
indicated that subjects perceived the agents to have personality and
that the personalities were different.
2.2.3. Phase III: Empirical Evaluation
Experimental Phase III focused on empirical evaluation of
human-machine collaborative performance. In this phase, the
human-machine collaborative performance of the two computer
agent personalities modeled in Phase II was compared to that of an
agent with no personality. The independent variable (Computer
Agent Type) therefore had three levels: CAP-A, CAP-B, and
Computer Agent with No Personality (CAP-NP). The objective
performance data were captured during the same simulation run
described in Phase II, so the method is the same. The dependent
variable was an operator’s simulation score, based on points
assigned for completing the specific tasks of identifying targets,
destroying targets, maintaining altitude and airspeed, and returning
to base on time.
2.2.4. Collaborative Agent Result Summary
Participants’ performance was analyzed using analysis of variance
(ANOVA). The alpha criterion was set to 0.05. There was a
significant effect of agent type on simulation score (F(2, 22) =
17.42, p < 0.0001). A post-hoc Tukey’s test indicated significantly
higher mission scores for CAP-A (X=2047.1) and CAP-B
(X=1308.3) compared to CAP-NP (X=-265.4). Performance was
also examined for individual subtasks: identifying targets,
destroying targets, maintaining altitude and airspeed, and returning
to base on time. A summary of these results is shown in Table 2,
which presents the differences in scores between the three agent
conditions. The general finding was that performance was always
significantly better for CAP-A than for CAP-NP. There were fewer
significant differences between CAP-B and CAP-NP. For number
of airspeed faults, performance was significantly better for CAP-A
than for both CAP-B and CAP-NP. Agents with personality did
impact operator performance.
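For reference, the F statistic reported above comes from a standard one-way ANOVA; a textbook-style sketch (not the authors' analysis code) is:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    over within-group mean square."""
    k = len(groups)                              # number of conditions
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

With three agent conditions and the reported df of (2, 22), the analysis corresponds to k = 3 groups over 25 total scores.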
A study conducted by Prabhala and Gallimore (2005a) suggests no
significant difference in the perception of modeled personality
based on the operator’s culture and gender. Results of Prabhala
and Gallimore (2005a) indicate that even though human operators
have a distinct personality of their own, their perceptions of
computer agent personalities were the same. Phase III subjects
indicated a clear preference for one of the personalities (CAP-A).
Table 2. Average differences in scores

Task                                               | CAP-A vs. CAP-NP   | CAP-B vs. CAP-NP   | CAP-A vs. CAP-B
Portion high priority targets correctly identified | 18% (Significant)  | 16% (Significant)  | 2%
Portion low priority targets correctly identified  | 29% (Significant)  | 17% (Significant)  | 13%
Portion high priority targets correctly destroyed  | 19% (Significant)  | 7%                 | 13%
Portion low priority targets correctly destroyed   | 21% (Significant)  | 7%                 | 14%
Number of airspeed faults                          | 2 (Significant)    | 1.09 (Significant) | 0.33 (Significant)
Number of altitude faults                          | 1.67 (Significant) | 0.67               | 0.42
Number returned to base early or late              | 6                  | 0.9                | 2.6

3. COMBINED AGENT PERSONALITY CONCEPT AND ARCHITECTURE

The concept behind the combined personality agent approach is
threefold. First, the autonomous behavior produced by the swarm
intelligence algorithm provides cognitive load reduction by
handling the tasks of route planning and task determination.
Second, the personality traits exhibited by individual UAVs make
their actions less predictable to enemy observers and allow the
potential for in-flight adjustments to individual UAV objective
decisions. Finally, the interface personality improves operator
performance in understanding the actions of individual UAVs,
adjusting for changing situations, and ultimately controlling
lower-level tasks as the mission tasks increase. Figure 5 depicts the
concept.

Figure 5. Combined agent personality concept

3.1. UAV Behavior Characteristics
The behavior characteristics are similar to those of Ants on the
AEDGE described earlier. However, unlike the AoA concept of
tracking a vehicle, the behaviors in this case are those of scout
UAVs searching an area. The UAV squadrons are supplied with an
overall mission of reaching and identifying a target in a known
location while also looking for additional targets of opportunity.
The personality algorithm is similarly fashioned after the AoA
concept. Based on the OCOKA factors, a rating scale (1–5) was
created for the terrain in the Region of Interest (ROI). Specifically,
the traits of observability, cover/concealment, obstacles, and key
terrain (danger) were assigned values based on the likelihood of
each trait within the ROI.
After specifying the OCOKA ratings, agents were assigned
behavioral traits such as braveness, stealthiness, aggression, and
adventurousness, which define how they move toward their
objective. These are defined as:

• Braveness – (How risky) A brave UAV moves into areas where
  the likelihood of observation is high, the likelihood of available
  cover is low, obstacles are plentiful, and danger levels are high. A
  less brave UAV does the opposite.
• Stealthiness – (Hiding) Stealth is based on the cover/concealment
  and observation values. High stealth means moving in areas of
  high cover and concealment and low observation; low stealth the
  opposite.
• Aggression – (How the UAV moves into final approach) Direct
  approach versus indirect or hidden. A direct (aggressive) UAV
  will use highly observable routes; an indirect one will use less
  observable routes.
• Adventurousness – (Likelihood of investigating) Adventurous
  agents will move into areas of high concealment and high
  obstacles.

The ant algorithm then uses these traits as inputs to plan the motion
of the agent. By simply changing the levels of the traits, a
completely new flight characteristic can be generated with
high-level (intuitive) control.
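A hypothetical sketch of how such trait levels could be fused with OCOKA cell ratings to steer a UAV follows; the weights and field names are illustrative guesses, not the paper's actual model:

```python
def cell_attraction(cell, traits):
    """Score a terrain cell (OCOKA-style ratings on a 1-5 scale) for
    one UAV: a brave UAV tolerates observable, dangerous cells,
    while a stealthy one prefers high cover and low observability."""
    score = 0.0
    score += traits["braveness"] * (cell["danger"] + cell["observation"])
    score -= (1.0 - traits["braveness"]) * cell["danger"]
    score += traits["stealthiness"] * (cell["cover"] - cell["observation"])
    score += traits["adventurousness"] * (cell["cover"] + cell["obstacles"])
    return score

def choose_cell(cells, traits):
    """Pick the most attractive cell for this personality."""
    return max(cells, key=lambda c: cell_attraction(c, traits))
```

Raising or lowering a single trait re-weights the whole potential map, which is what gives the operator high-level, intuitive control over flight character.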
3.2. Cooperative Assistant Personality
A key component to the overall system is the human-computer
interface. If we were to have each of the UAVs communicate to
the operator individually, the cacophony of information would
make the problem worse then ever. In order for the operator to
effectively use the UAV swarm, an assistant agent is needed to act
as the interpreter. The assistant agent cooperates between the
individual UAVs, monitoring their status, evaluating the UAV
plans with the mission goal, and reporting the data to the operator.
A specific research focus would be to determine how the operator
could tailor the system to allow the chance to personalize the UAV
team. The operator could select roles for each of the team members
(agents) such that they perform particular tasks. With the
personality adaptability on the UAVs, the operator could perhaps
fine tune the team to behave in an optimal manner.
4. CONCEPTUAL VIEW OF THE COMBINED
PERSONALITY APPROACH
This section presents a conceptual view of the combined
personality approach. The software for many of the components
already exists but requires some integration. The following screen
captures are from the base software containing the ant algorithm.
Figure 6 presents a UAV exhibiting potential stealthiness and
adventurousness in searching in a manner unique from the other
UAVs. In this instance, the terrain in the southwest could hide a
potential target, which the lone UAV is searching independent of
the potential targets toward the north, which have the attention of
the remaining UAVs.
Based on the supervisory control work presented earlier, it is seen
that the cooperative assistant would benefit from the inclusion of
personality in its communication with the operator. The main
difference between the previous work and the current work is that
the operator is acting in an even higher supervisory role in the
current scenario.
The cooperative assistant with personality communicates to the
user through multimodal feedback. The user should be able to
better understand the characteristics of the individual UAVs based
on the information provided by the assistant. For example, if a
UAV is becoming low on fuel because its level of stealthiness or
adventurousness has caused it to take too long to reach the
objective, the assistant will describe these issues to the user. The
user may then intervene and adjust the UAV’s parameters causing
it to be more direct in reaching the mission goal (or return to base).
3.3. Operator Interaction
An assistant agent’s personality will drive its interaction with the human operator and can vary depending on conditions. For example, an assistant agent could tweak UAV behaviors and then notify the operator when the operator is already busy with other operational details. The level of the assistant’s personality then dictates how the agent communicates with the user. For example, an assistant agent with high levels of extroversion, agreeableness, emotional stability, conscientiousness, and intellect might make UAV behavioral changes before notifying the human. An agent with lower levels of these traits might first prompt for and await human input before implementing any UAV alterations.
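The trait-to-autonomy mapping described above might be sketched as follows. Averaging the five trait levels against a single threshold is our simplifying assumption; the actual system may weigh the traits differently.

```python
# Illustrative sketch (our assumption): average the assistant's five Big
# Five trait levels (each on an assumed 0..1 scale) and select an
# interaction mode from the result.
def interaction_mode(extroversion, agreeableness, stability,
                     conscientiousness, intellect, threshold=0.6):
    mean = (extroversion + agreeableness + stability +
            conscientiousness + intellect) / 5.0
    # High-trait assistants act first and inform after; low-trait
    # assistants ask the operator before changing UAV behavior.
    return "act_then_notify" if mean >= threshold else "prompt_before_acting"
```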
ISBN # 1-56555-316-0
1144
Figure 6. One UAV exhibiting potential adventurousness by searching an area independent of the other UAVs
Figure 7 shows a single UAV heading directly into the threat zone, indicated by a pink circle. This UAV is exhibiting high braveness and aggression, approaching without regard to its own safety or to detection by a potential target. Most other UAVs have stayed well clear of the threat area, and those that may have the braveness necessary to search it are not proceeding with the same level of aggressiveness as the lone UAV.
SCSC 2007
Finally, we presented a mock-up example of the system and the
types of responses the user may see. The system, when fully
operational, will provide a testbed whereby we can experiment
with changing situations and behavioral adjustments to find the
optimal team organization for a given operator personality.
References
Babaoglu O., Meling H., and Montresor A. (2002), “Anthill: A
Framework for the Development of Agent-Based Peer-to-Peer
Systems,” Proceedings of the 22nd International Conference on
Distributed Computing Systems, Vienna, Austria.
Bullnheimer B., Hartl R.F., and Strauss C. (1999), An Improved
Ant System Algorithm for the Vehicle Routing Problem. Annals of
Operations Research, vol 89, pp 319–328.
Figure 7. One UAV heads directly into the threat zone, exhibiting high braveness and aggression
In each of the figures, the collaborative assistant agent would be monitoring the actions of the UAV swarm. For example, the assistant would likely inform the user that a UAV is moving in a potentially risky manner and would highlight that UAV. The operator can continue to work on high-level tasks and can either give the assistant the freedom to act on its observations or require that all actions be approved by the user. We are currently constructing a relevant scenario and integrating the components of the system.
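The two approval settings described above can be sketched as a small state holder. The class and method names are hypothetical, not taken from the actual system.

```python
# Hypothetical sketch of the assistant's approval settings: in autonomous
# mode it acts and then informs; otherwise it queues observations until
# the operator approves them.
class CooperativeAssistant:
    def __init__(self, autonomous=False):
        self.autonomous = autonomous
        self.pending = []   # observations awaiting operator approval
        self.log = []       # actions already taken

    def observe(self, uav_id, observation, action):
        if self.autonomous:
            self.log.append((uav_id, action))
            return f"Acted on {uav_id}: {action}"
        self.pending.append((uav_id, observation, action))
        return f"Approval requested for {uav_id}: {observation}"

    def approve_all(self):
        for uav_id, _obs, action in self.pending:
            self.log.append((uav_id, action))
        self.pending.clear()
```

In the Figure 7 scenario, for instance, a non-autonomous assistant would queue a "risky approach" observation and highlight the UAV, acting only once the operator approves.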
5. CONCLUSION
We have described the concept of a combined personality
approach to the task of unmanned system supervisory control. The
concept uses swarm intelligence as the basis for applying
behavioral traits to individual agents such that agents will have
added capabilities to perform particular aspects of the mission. The
composite team of agents under the supervisory control of the
operator attains the mission objective in the emergent manner
typical of swarm intelligence. This emergent behavior, although completely predictable by the operator, appears chaotic to the uninformed (potentially hostile) observer, thus giving the UAVs an advantage they did not previously have. The operator also has the ability to provide high-level commands and to adjust the UAVs’ personalities to optimally perform a task.
A cooperative assistant agent provides the interface to the operator.
The assistant interacts directly with the UAV agents but
collaborates with the human operator who in the end has final
authority. The operator may set the assistant to have a different personality and/or different levels of control. For example, the operator may set the assistant agent to always act and then inform, or to inform and wait for approval or instruction. As research in the area
of augmented computing improves to the point where we can
predict operator emotions and overload, adaptive computing
techniques can be used and the personality could react based on
these inputs with the goal of reducing operator workload. It has
been shown that by using an assistant with personality traits, the
operator may perform better during complicated tasks compared to
interaction without personality; however, further testing is
necessary. The combination of UAV behavioral traits and assistant
personality allows users to tailor the system to their comfort level.
Dasgupta P. (2003), “Improving Peer-to-Peer Resource Discovery
Using Mobile Agent Based Referrals,” Proceedings of the 2nd
AAMAS Workshop on Agent Enabled Peer-to-Peer Computing,
Melbourne, Australia, pp 41–54.
Decker K., and Sycara K. (1997), “Intelligent Adaptive
Information Agents,” Journal of Intelligent Information Systems, vol 9, pp 239–260.
Di Caro G. and Dorigo M. (1998), “AntNet: Distributed
Stigmergetic Control for Communications Networks,” Journal of
Artificial Intelligence Research, vol 9, pp 317–365.
Doerner K., Hartl R., and Reimann M. (2004), Are COMPETants More Competent for Problem Solving? – The Case of a Multiple Objective Transportation Problem; Department of Production and Operations Management, Institute for Management Science, University of Vienna. http://www.wu-wien.ac.at/am/Download/report50.pdf
Dorigo M. and Di Caro G. (1999), The Ant Colony Optimization
Meta-Heuristic. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization. McGraw-Hill. http://citeseer.ist.psu.edu/dorigo99ant.html
Dorigo M. and Gambardella L.M. (1997) Ant Colony System: A
Cooperative Learning Approach to the Traveling Salesman
Problem. IEEE Transactions on Evolutionary Computation, vol 1,
no 1, pp 53–66.
Dorigo M., Maniezzo V., and Colorni A. (1996), “The Ant System: Optimization by a Colony of Cooperating Agents,” IEEE Transactions on Systems, Man, and Cybernetics–Part B, vol 26, no 1, pp 29–41. http://citeseer.ist.psu.edu/dorigo96ant.html
Endsley, M.R., and Garland, D. J. (2000). Pilot Situational
Awareness in General Aviation. Proceedings of the 44th Annual
Meeting of the Human Factors and Ergonomics Society.
Gallimore, J.J. and Prabhala, S. (2006) “Creating Collaborative
Agents with Personality for Supervisory Control of Multiple
UCAVs,” Human Factors and Medicine Panel Symposium on
Human Factors of Uninhabited Military Vehicles as Force
Multipliers, Biarritz, France.
Giampapa J., Paolucci M., and Sycara K. (2000), “Agent
Interoperation Across Multi Agent System Boundaries,”
Proceedings of Agents 2000, Barcelona, Spain, June 3-7.
Goldberg, L.R. (1990). An Alternative “Description of
Personality”: The Big Five Factor Structure. Journal of Personality
and Social Psychology, vol 59, no 6, pp 1216–1229.
Gordon D. (2004), “Collective Intelligence in Social Insects,” from
http://ai-depot.com/Essay/SocialInsects.html.
Heilman, K. M. (1995). Attention Asymmetries. In R. J. Davidson and K. Hugdahl (Eds.), Brain Asymmetry, pp 217–234. Cambridge, MA: MIT Press.
Hoc, J. (2001). Towards a Cognitive Approach to Human-Machine
Cooperation in Dynamic Situations. International Journal of
Human-Computer Studies. vol 54, pp 509–540.
Jøsang A. (2002), “The Consensus Operator for Combining
Beliefs,” Artificial Intelligence Journal, vol 141(1-2), pp 157–170.
http://sky.fit.qut.edu.au/~josang/papers/Jos2002-AIJ.pdf.
Karim, S. and Heinze, C. (2005). Experiences with the design and
implementation of an agent-based autonomous UAV controller. In
Proceedings of The Fourth International Joint Conference on
Autonomous Agents And Multiagent Systems, July 25-29, 2005,
Utrecht, Netherlands, pp 19–26.
Price, I. (2005). Evolving Self-Organized Behavior for Homogeneous and Heterogeneous UAV or UCAV Swarms. Master’s Thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, AFIT/GCS/ENG/06-11.
Reimann M., Doerner K., Hartl R. (2002); Insertion based Ants for
Vehicle Routing Problems with Backhauls and Time Windows;
Department of Production and Operations Management, Institute
for Management Science, University of Vienna; Report No. 68.
Sarter, N.B., and Woods, D.D. (1995). How in the World Did We
Ever Get into That Mode? Mode Error and Awareness in
Supervisory Control. Human Factors, vol 37, no 1, pp 5–19.
Schoonderwoerd R., Holland O., Bruten J., Rothkrantz L. (1996), “Ant-based Load Balancing in Telecommunication Networks,” Adaptive Behavior, vol 5, pp 169–207.
Stützle T. and Hoos H.H. (2000), MAX–MIN Ant System. Future Generation Computer Systems, vol 16, pp 889–914. http://citeseer.ist.psu.edu/312713.html
Woodley, R. (2006). Ants on the AEDGE: Personality in software
agents. Proceedings of the 25th Army Science Conference (ASC),
November 27–30, 2006, Orlando, Florida, USA.
Lee, K.W., and Nass, C. (2003). Designing Social Presence of
Social Actors in Human Computer Interaction. Proceedings of the
CHI Conference on Human Factors in Computing Systems. pp
289–296.
Weiss G. (ed.) (1998), Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press.
Lin, F., Kao C., and Hsu C. (1993), “Applying Genetic Approach to Simulated Annealing in Solving Some NP-Hard Problems,” IEEE Transactions on Systems, Man, and Cybernetics, vol 23.
Maniezzo V. and Colorni A. (1998), “The Ant System Applied To
The Quadratic Assignment Problem,” IEEE Transactions on
Knowledge and Data Engineering, vol 11, no 5, pp 769–778.
Montgomery J. and Randall M. (2002), “Anti-Pheromone as a Tool for Better Exploration of Search Space,” Proceedings of ANTS 2002.
Mooij, M., and Corker, K. (2002). Supervisory Control Paradigm:
Limitations in Applicability to Advanced Air Traffic Management
Systems. Proceedings of IEEE Digital Avionics System
Conference. vol 1, pp IC3.1–IC3.8
Mosier, K.L. and Skitka, L.J. (1996). Human Decision Makers and
Automated Decision Aids: Made for Each Other? In M. Mouloua
and R. Parasuraman (Eds.), Automation and Human Performance:
Theory and Applications. pp 201–220, Mahwah, NJ: Lawrence
Erlbaum Associates.
Petrov P. and Stoyen A. (2000), “An Intelligent-Agent Based
Decision Support System for a Complex Command and Control
Application,” Proceedings of the 6th IEEE International
Conference on Engineering of Complex Computer Systems,
Tokyo, Japan.
Prabhala, S., and Gallimore J.J. (2005a). Perceptions of Personality
in Computer Agents: Effects of Culture and Gender. Proceedings
of the Human Factors and Ergonomics Society 49th Annual
Meeting. Orlando, FL, pp 716–720.
Prabhala, S., and Gallimore, J.J. (2005). Developing Computer Agents with Personalities. Proceedings of the 11th International Conference in Human-Computer Interaction, Las Vegas, NV, CD-ROM Publication.