International Journal of Automation and Computing
7(4), November 2010, 550-556
DOI: 10.1007/s11633-010-0539-z

Target Tracking and Obstacle Avoidance for Multi-agent Systems

Jing Yan 1    Xin-Ping Guan 1,2    Fu-Xiao Tan 3

1 Institute of Electrical Engineering, Yanshan University, Qinhuangdao 066004, PRC
2 School of Electronic and Electric Engineering, Shanghai Jiao Tong University, Shanghai 200240, PRC
3 School of Computer and Information, Fuyang Teachers College, Fuyang 236041, PRC
Abstract: This paper considers the problems of target tracking and obstacle avoidance for multi-agent systems. To solve the problem that multiple agents cannot effectively track the target while avoiding obstacles in a dynamic environment, a novel control algorithm based on potential functions and behavior rules is proposed. The interactions among agents are also considered. According to whether an agent is within the area of its neighbors' influence, two kinds of potential functions are presented. The distributed control input of each agent is determined by the relative velocities as well as the relative positions among the agents, the target, and the obstacle. The maximum linear speed of the agents is also discussed. Finally, simulation studies are given to demonstrate the performance of the proposed algorithm.
Keywords: Target tracking, obstacle avoidance, potential function, multi-agent systems.

1 Introduction
In recent years, there has been an increasing amount of research on the subject of unmanned system networks (UMSN)[1−4].
These networks can potentially consist of a large number
of agents, such as unmanned underwater vehicles (UUV),
unmanned aerial vehicles (UAV), and unmanned ground
vehicles (UGV). UMSN provide numerous applications in
various research fields, such as building automation, mail
delivery, surveillance, and underwater exploration[5] . The
advantages of UMSN over a single agent system include
reducing cost and increasing robustness.
Various military and civil applications often require agents to move autonomously in a dynamic environment where the target and obstacle are moving, rather than simply follow a pre-planned path designated by an offline mission-level planning algorithm. Because the potential function method is simple and intuitive, it has been widely used in target tracking and obstacle avoidance for multi-agent systems[6−9]. This method is based on a simple and powerful principle, first proposed by Khatib in [10].
The agents are considered as particles that move in a potential field generated by the target and obstacle present in the
environment. The target generates an attractive potential
field while any obstacle generates a repulsive potential field.
The agents immersed in the potential field are subject to
the action of a force, which drives them to the target and
keeps them away from the obstacle.
In [4], a theoretical framework for the design and analysis of distributed flocking algorithms for multi-agent dynamic systems was provided. The cases of flocking in free space and in the presence of an obstacle were both considered. According to behavior rules, three algorithms were proposed. However, the obstacle is assumed to be stationary, so the proposed algorithms are not suitable for path planning of multi-agent systems in a dynamic environment where the obstacle is moving. Meanwhile, the conventional potential function method also exhibits the shortcomings of local minima and unreachable goals when an obstacle is nearby. To overcome these shortcomings, several improvements were proposed[7, 8, 11]. In [7], the velocity of the target was included in the definition of the potential function, but the obstacle was still assumed to be stationary.

Manuscript received August 3, 2009; revised November 30, 2009. This work was supported by National Basic Research Program of China (973 Program) (No. 2010CB731800), Key Program of National Natural Science Foundation of China (No. 60934003), and Key Project for Natural Science Research of Hebei Education Department (No. ZD200908).
In [8], the relative positions and velocities of the agents with respect to the target and obstacle were included in the definition of the potential function. With the new potential function, agents were planned to track a moving target while avoiding the moving obstacle. However, this method is difficult to apply in practice. First, the measurement of an agent's velocity is very prone to noise, which directly affects the performance of the path planning. Second, position and velocity are different physical quantities with different units, so it is hard to choose the weighting between the position and velocity terms. Huang[9] proposed a velocity planning method for a single agent to track a moving target while avoiding collision with a moving obstacle. Although this method is effective, it is only suitable for a single agent. Some applications, such as surveillance, rescue, and underwater exploration, often require multiple agents to perform a cooperative task. Therefore, the applicability of this method is severely restricted.
Based on the above considerations, a novel control scheme for target tracking and obstacle avoidance of multi-agent systems is proposed in this paper. The objective is to make the agents track a moving target while avoiding collision with a moving obstacle and with other agents. According to whether an agent is within the area of its neighbors' influence, two kinds of potential functions are presented: an extern-potential function and an inter-potential function. Based on the information of the target and obstacle, the extern-potential function is designed to deal with the target tracking and obstacle avoidance problems. Then, we implement the velocity planning strategy based on the result in [9]. Meanwhile, the inter-potential function, which is used to deal with the interactions among agents, is determined by the relative positions between agents. Through the behavior rules, we link the extern-potential function and the inter-potential function together. From the total potential function, a distributed control algorithm can be obtained that also takes the maximum linear speeds of the agents into account.

An outline of the paper is as follows. In Section 2, the preliminaries and problem statement for multiple agents are established. In Section 3, the distributed control algorithm is presented. Simulation examples are provided in Section 4 to verify the effectiveness of the proposed approach. The conclusion is drawn in Section 5.
2 Preliminaries and problem formulation

A graph G[11] is a pair that consists of a set of vertices v = {1, 2, ···, n} and a set of edges ε ⊆ {(i, j) : i, j ∈ v, j ≠ i} (i.e., the graph is, in general, directed and has no self-loops). The graph is said to be undirected if (i, j) ∈ ε ⇔ (j, i) ∈ ε. The adjacency matrix[11] A = [a_ij] of a graph is a matrix with nonzero elements satisfying the property a_ij ≠ 0 ⇔ (i, j) ∈ ε. For an undirected graph, the adjacency matrix A is symmetric (A^T = A).

The set of neighbors of node i is defined by

$$N_i = \{j \in v : a_{ij} \neq 0\} = \{j \in v : (i, j) \in \varepsilon\}. \tag{1}$$

In multi-agent systems, the set of spatial neighbors of agent i is defined as

$$N_i = \{j \in v : \|q_j - q_i\| < r, \; r > 0\} \tag{2}$$

where r is the so-called distance of the neighbors' influence.

In this paper, we consider the problem of moving a group of agents toward a moving target while avoiding collision with a moving obstacle and with other agents. Therefore, there are two objectives. The first objective is to move the agents to their final destination and to avoid collision with the obstacle in the dynamic environment. The second objective is to avoid collision among the agents. It is assumed that the agents and target are mass points. Specifically, the obstacle is treated as a sphere, as shown in Fig. 1. Suppose that the positions and velocities of the agents, obstacle, and target are all measurable. The following notations are used to describe the system:

q_tar ∈ R^2: Position of the target.
v_tar ∈ R^2: Velocity of the target.
q_obs ∈ R^2: Position of the obstacle.
v_obs ∈ R^2: Velocity of the obstacle.
q_(i) ∈ R^2: Position of agent i.
v_(i) ∈ R^2: Velocity of agent i.
θ_(i): Angle of v_(i).
q_at(i) = q_tar − q_i: Relative position vector from agent i to the target.
ψ_(i): Angle of q_at(i).
q_ao(i): Relative position vector from agent i to the obstacle.
θ_ao(i): Angle of q_ao(i).
q_ot = q_tar − q_obs: Relative position vector from the obstacle to the target.
θ_ot: Angle of q_ot.
q_ij = q_j − q_i: Relative position vector from agent i to agent j.
Fig. 1 The position and velocity vectors of the target, obstacle,
and agents
Consider a group of dynamic agents (or particles) moving in 2D Euclidean space. The motion of agent i is described as

$$\dot q_i = u_i, \quad i = 1, 2, \cdots, n \tag{3}$$

where q_i ∈ R^2 denotes the position vector of agent i, and u_i ∈ R^2 is its control input vector.
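As a concrete illustration of the kinematic model (3) and the spatial neighbor set (2), the following Python sketch advances all agents with a forward-Euler step and computes N_i for a given influence radius r. The function names, the time step dt, and the Euler discretisation are illustrative assumptions, not part of the paper.

```python
import numpy as np

def spatial_neighbors(q, i, r):
    """Spatial neighbor set N_i from (2): agents j != i with ||q_j - q_i|| < r.
    q is an (n, 2) array of agent positions."""
    dists = np.linalg.norm(q - q[i], axis=1)
    return [j for j in range(len(q)) if j != i and dists[j] < r]

def euler_step(q, u, dt):
    """Forward-Euler discretisation of the single-integrator model (3):
    q_i(t + dt) = q_i(t) + u_i(t) * dt, applied to all agents at once."""
    return q + dt * u

# example: three agents, influence radius r = 0.2 (illustrative values)
q = np.array([[0.0, 0.0], [0.15, 0.05], [1.0, 1.0]])
u = np.zeros_like(q)
print(spatial_neighbors(q, 0, r=0.2))   # agent 1 lies within 0.2 of agent 0 -> [1]
q = euler_step(q, u, dt=0.01)
```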
3 Distributed control algorithm
In this section, a novel distributed control algorithm is proposed for solving the problem described in Section 2. Based on whether an agent is within the area of its neighbors' influence, two kinds of potential functions are presented, denoted the extern-potential function and the inter-potential function, respectively. The motion planning is first studied for agent i outside the area of its neighbors' influence (‖q_j − q_i‖ > r, j ∈ N_i). Second, the other case, when agent i is within the area of its neighbors' influence (‖q_j − q_i‖ < r, j ∈ N_i), is investigated. Finally, the total control inputs are obtained by summarizing the discussions above.
3.1 Distributed control algorithm when agent i is outside the area of its neighbors' influence (‖q_j − q_i‖ > r, j ∈ N_i)
When agent i is outside the area of its neighbors' influence, there is no interaction between agent i and its neighbors. In other words, agent i is only under the influence of the target and the obstacle. The target generates an attractive potential field, while the obstacle generates a repulsive potential field. Then, the potential functions can be defined as[12−14]

$$U_{att(i)} \triangleq \frac{1}{2}\,\xi_1\, q_{at(i)}^{\rm T}\, q_{at(i)} \tag{4}$$

$$U_{rep(i)} \triangleq \begin{cases} \dfrac{1}{2}\,\xi_2\,\big(\rho_{(i)}^{-1} - \rho_0^{-1}\big)^2, & \text{if } \rho_{(i)} \le \rho_0 \\ 0, & \text{else} \end{cases} \tag{5}$$
where U_att(i) is the attractive potential function and U_rep(i) is the repulsive potential function. ρ_(i) denotes the minimum distance from agent i to the obstacle. The obstacle can be assumed to be a circle, and ρ_obs denotes the radius of the circle; then we get ρ_(i) = ‖q_ao(i)‖ − ρ_obs. ρ_0 > 0 is the obstacle influence threshold. ξ_1 > 0 and ξ_2 > 0 are scaling factors for the attractive and repulsive potentials, respectively.

Then, the total potential function can be defined as

$$U_{(i)} \triangleq U_{att(i)} + U_{rep(i)}. \tag{6}$$
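For reference, a minimal Python sketch of the attractive, repulsive, and total potentials (4)-(6) is given below. The function and argument names are illustrative assumptions, and ρ_(i) is computed as ‖q_ao(i)‖ − ρ_obs as defined above.

```python
import numpy as np

def potentials(q_i, q_tar, q_obs, rho_obs, rho_0, xi1, xi2):
    """Attractive, repulsive, and total potentials of agent i, following (4)-(6)."""
    q_at = q_tar - q_i                      # relative position to the target
    q_ao = q_obs - q_i                      # relative position to the obstacle
    u_att = 0.5 * xi1 * q_at @ q_at         # (4)
    rho = np.linalg.norm(q_ao) - rho_obs    # clearance to the obstacle surface
    if rho <= rho_0:                        # inside the obstacle influence region
        u_rep = 0.5 * xi2 * (1.0 / rho - 1.0 / rho_0) ** 2   # (5)
    else:
        u_rep = 0.0
    return u_att, u_rep, u_att + u_rep      # (6)

# example with the scaling factors used later in Section 4
vals = potentials(np.array([0.0, 0.0]), np.array([2.0, 2.0]),
                  np.array([0.8, 0.0]), rho_obs=0.2, rho_0=0.3, xi1=2.0, xi2=1.0)
```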
Let q_at(i) = [x_at(i), y_at(i)]^T and q_ao(i) = [x_ao(i), y_ao(i)]^T. The relative motion between agent i and the target is described by q̇_at(i) = [ẋ_at(i), ẏ_at(i)]^T, and the relative motion between agent i and the obstacle can be described by q̇_ao(i) = [ẋ_ao(i), ẏ_ao(i)]^T with

$$\begin{aligned}
\dot x_{at(i)} &= \|v_{tar}\|\cos\theta_{tar} - \|v_{(i)}\|\cos\theta_{(i)} \\
\dot y_{at(i)} &= \|v_{tar}\|\sin\theta_{tar} - \|v_{(i)}\|\sin\theta_{(i)} \\
\dot x_{ao(i)} &= \|v_{obs}\|\cos\theta_{obs} - \|v_{(i)}\|\cos\theta_{(i)} \\
\dot y_{ao(i)} &= \|v_{obs}\|\sin\theta_{obs} - \|v_{(i)}\|\sin\theta_{(i)}.
\end{aligned} \tag{7}$$

Theorem 1. To make agent i track a moving target while avoiding the moving obstacle as quickly as possible, the control input for agent i should be planned such that q̇_at(i) points in the direction of the negative gradient of U_(i) with respect to q_at(i).

Proof. From [3], it is known that if an agent is required to track a static target as quickly as possible, q̇_(i) should point in the direction of the negative gradient of U_(i) with respect to q_(i).

For a moving target, we can assume that the target is static relative to agent i. Then, the dynamic environment can be translated into a quasi-static environment, and the new state information can be expressed as follows: q*_(i) is the new position of agent i, q̇*_(i) is its velocity, and U*_(i) denotes the new potential function of agent i. Under the condition of a static environment, the control input of agent i should be planned so that q̇*_(i) points in the direction of the negative gradient of U*_(i) with respect to q*_(i), if agent i is required to track the quasi-static target as quickly as possible.

Finally, translating the quasi-static environment back into the dynamic environment, we have q*_(i) = q_at(i), q̇*_(i) = q̇_at(i), and U*_(i) = U_(i). □

Based on Theorem 1, it is necessary to make

$$\frac{\partial U_{(i)}}{\partial x_{at(i)}}\,\dot y_{at(i)} - \frac{\partial U_{(i)}}{\partial y_{at(i)}}\,\dot x_{at(i)} = 0.$$

From (6) and (7), we have

$$\big(\xi_1 x_{at(i)} - \mu x_{ao(i)}\big)\big(\|v_{tar}\|\sin\theta_{tar} - \|v_{(i)}\|\sin\theta_{(i)}\big) = \big(\xi_1 y_{at(i)} - \mu y_{ao(i)}\big)\big(\|v_{tar}\|\cos\theta_{tar} - \|v_{(i)}\|\cos\theta_{(i)}\big) \tag{8}$$

where

$$\mu = \xi_2\,\rho_{(i)}^{-1}\,\|q_{ao(i)}\|^{-1}\big(\rho_{(i)}^{-1} - \rho_0^{-1}\big)$$

$$x_{at(i)} = \|q_{at(i)}\|\cos\psi_{(i)}, \quad y_{at(i)} = \|q_{at(i)}\|\sin\psi_{(i)}$$

$$x_{ao(i)} = \|q_{ao(i)}\|\cos\theta_{ao(i)}, \quad y_{ao(i)} = \|q_{ao(i)}\|\sin\theta_{ao(i)}.$$

Rearranging (8) and assuming ‖v_(i)‖ ≠ 0, we have

$$\|v_{tar}\|\big(\sin(\theta_{tar} - \psi_{(i)}) - \lambda\sin(\theta_{tar} - \theta_{ao(i)})\big) = \|v_{(i)}\|\big(\sin(\theta_{(i)} - \psi_{(i)}) - \lambda\sin(\theta_{(i)} - \theta_{ao(i)})\big) \tag{9}$$

where

$$\lambda = \frac{\mu\,\|q_{ao(i)}\|}{\xi_1\,\|q_{at(i)}\|}.$$

Define

$$\bar\psi_{(i)} = \arctan\frac{\sin\psi_{(i)} - \lambda\sin\theta_{ao(i)}}{\cos\psi_{(i)} - \lambda\cos\theta_{ao(i)}}$$

then (9) can be rearranged as

$$\theta_{(i)} = \bar\psi_{(i)} + \arcsin\frac{\|v_{tar}\|\sin(\theta_{tar} - \bar\psi_{(i)})}{\|v_{(i)}\|}. \tag{10}$$

Meanwhile, we should also consider the other case, ‖v_(i)‖ = 0. Thus, we assume θ_(i) = θ_tar if ‖v_(i)‖ = 0.
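The heading rule (10) can be sketched in Python as follows. The speed ‖v_(i)‖ is taken as a given argument here (its planning is the subject of (13) below); arctan is implemented with the quadrant-aware atan2, and the arcsin argument is clipped to [−1, 1] as a numerical safeguard. These choices, and all names, are assumptions of this sketch.

```python
import numpy as np

def heading_eq10(q_i, q_tar, q_obs, v_tar, speed_i, rho_obs, rho_0, xi1, xi2):
    """Planned heading theta_(i) of agent i following (10)."""
    q_at, q_ao = q_tar - q_i, q_obs - q_i
    psi = np.arctan2(q_at[1], q_at[0])          # angle of q_at(i)
    th_ao = np.arctan2(q_ao[1], q_ao[0])        # angle of q_ao(i)
    th_tar = np.arctan2(v_tar[1], v_tar[0])     # angle of v_tar
    rho = np.linalg.norm(q_ao) - rho_obs        # clearance to the obstacle
    mu = (xi2 / (rho * np.linalg.norm(q_ao)) * (1.0 / rho - 1.0 / rho_0)
          if rho <= rho_0 else 0.0)
    lam = mu * np.linalg.norm(q_ao) / (xi1 * np.linalg.norm(q_at))
    psi_bar = np.arctan2(np.sin(psi) - lam * np.sin(th_ao),
                         np.cos(psi) - lam * np.cos(th_ao))
    if speed_i == 0.0:                          # the ||v_(i)|| = 0 case
        return th_tar
    s = np.clip(np.linalg.norm(v_tar) * np.sin(th_tar - psi_bar) / speed_i, -1.0, 1.0)
    return psi_bar + np.arcsin(s)
```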
From (10), the direction of agent i can be obtained. Then, we can plan the velocity of the i-th agent. With the planned velocity, the potential function U_(i) will be large enough, but not infinite, when agent i is within the area of the obstacle's influence. Meanwhile, the potential function will also be very small when q_at(i) → 0, and it reaches 0 when q_at(i) = 0. Differentiating U_(i) in (6) with respect to time t, we have

$$\dot U_{(i)} = \dot U_{att(i)} + \dot U_{rep(i)} = \xi_1\, q_{at(i)}^{\rm T}\,\dot q_{at(i)} - \xi_2\, q_{ao(i)}^{\rm T}\,\dot q_{ao(i)}. \tag{11}$$
With q̇_at(i), q̇_ao(i), q_at(i), and q_ao(i) defined above, U̇_(i) can be rewritten as

$$\begin{aligned}
\dot U_{(i)} &= \xi_1\|q_{at(i)}\|\big(\|v_{tar}\|\cos(\theta_{tar} - \psi_{(i)}) - \|v_{(i)}\|\cos(\theta_{(i)} - \psi_{(i)}) - \lambda\|v_{obs}\|\cos(\theta_{obs} - \theta_{ao(i)})\big) \\
&= \xi_1\|q_{at(i)}\|\Big(\|v_{tar}\|\cos(\theta_{tar} - \psi_{(i)}) - \big(\|v_{(i)}\|^2 - \|v_{tar}\|^2\sin^2(\theta_{tar} - \bar\psi_{(i)})\big)^{\frac{1}{2}} - \lambda\|v_{obs}\|\cos(\theta_{obs} - \theta_{ao(i)})\Big).
\end{aligned} \tag{12}$$
To make U̇_(i) < 0, we define

$$\|v_{(i)}\| = \Big[\big(\|v_{tar}\|\cos(\theta_{tar} - \psi_{(i)}) - \lambda\|v_{obs}\|\cos(\theta_{obs} - \theta_{ao(i)}) + \xi_1\|q_{at(i)}\|\big)^2 + \|v_{tar}\|^2\sin^2(\theta_{tar} - \bar\psi_{(i)})\Big]^{\frac{1}{2}}. \tag{13}$$

Substituting ‖v_(i)‖ into U̇_(i), we have U̇_(i) = −ξ_1^2 ‖q_at(i)‖^2 = −2ξ_1 U_att(i). Then, U_(i) > 0 and U̇_(i) < 0, so the system is stable. This means the agent can track the moving target in the dynamic environment.
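A corresponding sketch of the speed rule (13) is given below; ψ_(i), ψ̄_(i), θ_ao(i), and λ are the quantities defined in this subsection (for example as computed in the previous sketch), and the names are illustrative. With this choice, U̇_(i) = −2ξ_1 U_att(i) ≤ 0 as shown above.

```python
import numpy as np

def speed_eq13(v_tar, v_obs, q_at, lam, psi, psi_bar, th_ao, xi1):
    """Planned speed ||v_(i)|| following (13)."""
    th_tar = np.arctan2(v_tar[1], v_tar[0])     # heading of the target
    th_obs = np.arctan2(v_obs[1], v_obs[0])     # heading of the obstacle
    a = (np.linalg.norm(v_tar) * np.cos(th_tar - psi)
         - lam * np.linalg.norm(v_obs) * np.cos(th_obs - th_ao)
         + xi1 * np.linalg.norm(q_at))
    b = np.linalg.norm(v_tar) * np.sin(th_tar - psi_bar)
    return np.sqrt(a * a + b * b)
```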
Based on the above discussion, the following corollary can be obtained.

Corollary 1. For multiple agents to track a moving target while avoiding collision with the obstacle in the situation where there is no interaction between agents, the distributed control input for agent i can be planned as

$$u_{(i)} = \big(\|v_{(i)}\|\cos\theta_{(i)},\; \|v_{(i)}\|\sin\theta_{(i)}\big)^{\rm T} \tag{14}$$
where

$$\|v_{(i)}\| = \Big[\big(\|v_{tar}\|\cos(\theta_{tar} - \psi_{(i)}) - \lambda\|v_{obs}\|\cos(\theta_{obs} - \theta_{ao(i)}) + \xi_1\|q_{at(i)}\|\big)^2 + \|v_{tar}\|^2\sin^2(\theta_{tar} - \bar\psi_{(i)})\Big]^{\frac{1}{2}}$$

$$\theta_{(i)} = \begin{cases} \bar\psi_{(i)} + \arcsin\dfrac{\|v_{tar}\|\sin(\theta_{tar} - \bar\psi_{(i)})}{\|v_{(i)}\|}, & \text{if } \|v_{(i)}\| \neq 0 \\ \theta_{tar}, & \text{if } \|v_{(i)}\| = 0. \end{cases}$$

3.2 Distributed control algorithm when agent i is within the area of its neighbors' influence (‖q_j − q_i‖ < r, j ∈ N_i)

The distributed control algorithm in Corollary 1 does not take the neighbors' influence into consideration. Once the distance between agent i and its neighbors is smaller than r, there are interactions between agents. Therefore, the neighbors' influence must be taken into consideration.

Based on the above thought, another potential function Ǔ_(i) for the interactions between agents is defined. In this potential field, agent i should be repelled from its neighbors if they are closely spaced. Meanwhile, agent i should also avoid the obstacle during this process. By this "virtual force", agent i can escape from its neighbors' influence quickly. After escaping from its neighbors' influence, agent i will autonomously track the moving target under the algorithm in Corollary 1. In view of the transitory nature of the repulsive process, the target's influence is not considered during this process. The potential function can be defined as

$$\check U_{(i)} = \sum_{j \in N_i} U_{ret(ij)} + U_{rep(i)} \tag{15}$$

$$U_{ret(ij)} = \ln(q_{ij}) + \frac{q_{dij}}{q_{ij}} \tag{16}$$

$$U_{rep(i)} \triangleq \begin{cases} \dfrac{1}{2}\,\xi_2\,\big(\rho_{(i)}^{-1} - \rho_0^{-1}\big)^2, & \text{if } \rho_{(i)} \le \rho_0 \\ 0, & \text{else} \end{cases} \tag{17}$$

where q_ij = ‖q_i − q_j‖ and q_dij > 0 are, respectively, the actual and desired distances between agent i and agent j (j ∈ N_i). The parameters in (17) are the same as those in (5).

Remark 1. Although q_dij denotes the desired distance between agent i and agent j, the distance does not, in fact, need to converge strictly to q_dij. The reason is that the total potential function is a combination of several different potential functions, which are defined in Sections 3.1 and 3.2. Even so, this does not affect the final aim of this paper, because the objective is to accomplish the collision avoidance task rather than to solve a consensus problem, and q_ij > 0 for all agents means that the objective has already been achieved.

The potential function U_ret(ij) between agent i and agent j is illustrated in Fig. 2, and the repulsive and attractive forces ∇U_ret(ij) are illustrated in Fig. 3. When q_ij > q_dij, ∇U_ret(ij) > 0 means that agent i will attract agent j; when q_ij = q_dij, ∇U_ret(ij) = 0 means that agent i and agent j are balanced; when q_ij < q_dij, ∇U_ret(ij) < 0 means that agent i will repel agent j.
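A small Python sketch of the inter-agent potential (16) and its derivative makes the three regimes described above explicit; the function names and the numeric check are illustrative.

```python
import numpy as np

def u_ret(q_ij, q_dij):
    """Inter-agent potential (16): ln(q_ij) + q_dij / q_ij."""
    return np.log(q_ij) + q_dij / q_ij

def du_ret(q_ij, q_dij):
    """Derivative of (16) with respect to q_ij: 1/q_ij - q_dij/q_ij**2.
    Negative (repulsive) for q_ij < q_dij, zero at q_ij = q_dij,
    positive (attractive) for q_ij > q_dij, matching Figs. 2 and 3."""
    return 1.0 / q_ij - q_dij / q_ij ** 2

# quick check of the three regimes around a desired distance q_dij = 0.5
for d in (0.3, 0.5, 0.8):
    print(d, np.sign(du_ret(d, 0.5)))   # -> -1.0, 0.0, 1.0
```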
Fig. 2 Potential function between agent i and agent j

Fig. 3 The forces produced by U_ret(ij)
Remark 2. In the conventional definition, the desired distance satisfies q_dij < r. In this paper, however, we define q_dij > r. There are several advantages associated with this definition. First, the objective is to make agent i escape from its neighbors' influence so that it can then move autonomously under the algorithm in Corollary 1; in other words, attractive forces are not what we want. Second, with a large constant q_dij, agent i can move toward the desired position under the repulsive "force" and inertia. After converging to the desired position, agent i has also escaped from the neighbors' influence (q_dij > r).
Agent i's control input is along the negative gradient of Ǔ_(i) with respect to q_(i), and

$$u_{(i)} = -\nabla_{q_{(i)}} \check U_{(i)} = -\sum_{j \in N_i}\Big(\frac{1}{q_{ij}} - \frac{q_{dij}}{q_{ij}^2}\Big)\frac{\partial q_{ij}}{\partial q_i} - \mu\, q_{ao(i)} \tag{18}$$

where

$$\mu = \begin{cases} \xi_2\,\rho_{(i)}^{-1}\,\|q_{ao(i)}\|^{-1}\big(\rho_{(i)}^{-1} - \rho_0^{-1}\big)^2, & \text{if } \rho_{(i)} \le \rho_0 \\ 0, & \text{else.} \end{cases}$$
The obstacle can be assumed to be a circle, and ρ_obs denotes the radius of the circle; then we get ρ_(i) = ‖q_ao(i)‖ − ρ_obs. ρ_0 > 0 is the obstacle influence threshold.
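A sketch of the repulsive control law (18) in Python follows. The gradient ∂q_ij/∂q_i is expanded explicitly as (q_i − q_j)/q_ij for the Euclidean distance, and a common desired distance q_d for all pairs is assumed; all names are illustrative.

```python
import numpy as np

def control_eq18(q, i, nbrs, q_obs, q_d, rho_obs, rho_0, xi2):
    """Control input (18) for agent i when it is within its neighbors' influence.
    q is an (n, 2) array of agent positions, nbrs the index list N_i."""
    u = np.zeros(2)
    for j in nbrs:
        q_ij = np.linalg.norm(q[i] - q[j])
        grad_qij = (q[i] - q[j]) / q_ij          # d q_ij / d q_i
        u -= (1.0 / q_ij - q_d / q_ij ** 2) * grad_qij
    q_ao = q_obs - q[i]                          # relative position to the obstacle
    rho = np.linalg.norm(q_ao) - rho_obs
    mu = (xi2 / (rho * np.linalg.norm(q_ao)) * (1.0 / rho - 1.0 / rho_0) ** 2
          if rho <= rho_0 else 0.0)
    return u - mu * q_ao
```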
Theorem 2. With the control input in (18), agent i can escape from its neighbors' influence when ρ_(i) > ρ_0.

Proof. The potential function U_ret(ij) is a sub-function. ρ_(i) > ρ_0 denotes that agent i has already escaped from the obstacle's influence. Thus, we can choose the Lyapunov function U = Σ_{j∈N_i} Ǔ_(i). Differentiating U with respect to time t, we have

$$\dot U = \sum_{i=1}^{n} \dot{\check U}_{(i)} = \sum_{i=1}^{n}\sum_{j \in N_i}\Big(\frac{1}{q_{ij}} - \frac{q_{dij}}{q_{ij}^2}\Big)\frac{\partial q_{ij}}{\partial q_i}\, u_{(i)}. \tag{19}$$
Substituting (18) into (19) when ρ_(i) > ρ_0, we have

$$\dot U = -\sum_{i=1}^{n}\sum_{j \in N_i}\Big[\Big(\frac{1}{q_{ij}} - \frac{q_{dij}}{q_{ij}^2}\Big)\frac{\partial q_{ij}}{\partial q_i}\Big]^2 < 0.$$

Based on Lyapunov stability theory, we can draw the following conclusion: if the desired position is known, agent i will be repelled by its neighbors and reach the desired position. □

When ρ_(i) ≤ ρ_0, agent i should avoid collision with both its neighbors and the obstacle. By contradiction, we can also show that there is no collision. The proof is similar to the result in [2] and is hence omitted.
Remark 3. The maximum linear speed of the agents should also be considered. First, consider the situation in Section 3.1. Similar to the analysis in [9], convergence is still guaranteed if the agent's maximum linear speed is v_max. Then, consider the situation in Section 3.2. Because the target's influence is ignored in this case, the convergence of q_ao(i) to zero is still guaranteed regardless of the limitation of the maximum linear speed.
The results are summarized by the following theorem.
Theorem 3. For multiple agents to track a target while avoiding collision with a moving obstacle and with other agents, the distributed control input for agent i can be planned as

$$u_{(i)} = \begin{cases} \|v_{(i)}\|\big(\cos\theta_{(i)},\; \sin\theta_{(i)}\big)^{\rm T}, & \text{if } \|q_j - q_i\| > r \;\cap\; \|u_{(i)}\| \le v_{max} \\[4pt] -\displaystyle\sum_{j \in N_i}\Big(\frac{1}{q_{ij}} - \frac{q_{dij}}{q_{ij}^2}\Big)\frac{\partial q_{ij}}{\partial q_i} - \mu\, q_{ao(i)}, & \text{if } \|q_j - q_i\| < r \;\cap\; \|u_{(i)}\| \le v_{max} \\[4pt] v_{max}\big(\cos\theta_{(i)},\; \sin\theta_{(i)}\big)^{\rm T}, & \text{if } \|u_{(i)}\| > v_{max} \end{cases}$$

with

$$\|v_{(i)}\| = \Big[\big(\|v_{tar}\|\cos(\theta_{tar} - \psi_{(i)}) - \lambda\|v_{obs}\|\cos(\theta_{obs} - \theta_{ao(i)}) + \xi_1\|q_{at(i)}\|\big)^2 + \|v_{tar}\|^2\sin^2(\theta_{tar} - \bar\psi_{(i)})\Big]^{0.5}$$

$$\theta_{(i)} = \begin{cases} \bar\psi_{(i)} + \arcsin\dfrac{\|v_{tar}\|\sin(\theta_{tar} - \bar\psi_{(i)})}{\|v_{(i)}\|}, & \text{if } \|v_{(i)}\| \neq 0 \\ \theta_{tar}, & \text{if } \|v_{(i)}\| = 0 \end{cases}$$

$$\mu = \begin{cases} \xi_2\,\rho_{(i)}^{-1}\,\|q_{ao(i)}\|^{-1}\big(\rho_{(i)}^{-1} - \rho_0^{-1}\big), & \text{if } \rho_{(i)} \le \rho_0 \\ 0, & \text{else.} \end{cases}$$
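The switching structure of Theorem 3 can be summarised by the following Python sketch, which selects between the tracking input of Corollary 1 and the repulsive input (18), and then saturates the command at the maximum linear speed v_max. The two candidate inputs are assumed to be computed as in the earlier sketches; the names are illustrative.

```python
import numpy as np

def control_theorem3(u_track, u_repel, nearest_neighbor_dist, r, v_max):
    """Switching control law of Theorem 3.
    u_track: tracking input of Corollary 1 (speed (13) along heading (10));
    u_repel: inter-agent repulsive input (18)."""
    u = u_track if nearest_neighbor_dist > r else u_repel
    speed = np.linalg.norm(u)
    if speed > v_max:                 # saturate while keeping the heading theta_(i)
        u = (v_max / speed) * u
    return u
```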
4 Simulation results

To test the performance of the proposed approach, numerical simulations are performed. The simulations are done for three agents to track a moving target. The route of the target is specified by

$$q_{tar} = \begin{cases} (3.2 - 1.2\cos t,\; 2 + 1.2\sin t)^{\rm T}, & 0 \le t \le \dfrac{\pi}{2} \\[4pt] (3.2 - 1.2\cos t,\; 3.2)^{\rm T}, & \dfrac{\pi}{2} < t \le 3.14 \end{cases}$$

$$v_{tar} = \begin{cases} 1.2\Big(\cos\Big(\dfrac{\pi}{2} - t\Big),\; \sin\Big(\dfrac{\pi}{2} - t\Big)\Big)^{\rm T}, & 0 \le t \le \dfrac{\pi}{2} \\[4pt] (1.2,\; 0)^{\rm T}, & \dfrac{\pi}{2} < t \le 3.3. \end{cases}$$

The initial states of the three agents are given as follows (at t = 0):

$$q_1 = (0, 0)^{\rm T}, \quad q_2 = (0.2792, 1.7232)^{\rm T}, \quad q_3 = (0.2666, -0.2870)^{\rm T}, \quad v_1 = v_2 = v_3 = (0, 0)^{\rm T}.$$
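For reproducibility, the target route and the initial agent states above can be encoded directly, for example as in the following Python sketch (the function names are illustrative):

```python
import numpy as np

def q_tar(t):
    """Target position along the piecewise route specified above."""
    if t <= np.pi / 2:
        return np.array([3.2 - 1.2 * np.cos(t), 2.0 + 1.2 * np.sin(t)])
    return np.array([3.2 - 1.2 * np.cos(t), 3.2])

def v_tar(t):
    """Target velocity along the piecewise route specified above."""
    if t <= np.pi / 2:
        return 1.2 * np.array([np.cos(np.pi / 2 - t), np.sin(np.pi / 2 - t)])
    return np.array([1.2, 0.0])

# initial agent states at t = 0
q0 = np.array([[0.0, 0.0], [0.2792, 1.7232], [0.2666, -0.2870]])
v0 = np.zeros((3, 2))
```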
The simulations are carried out for two cases. First, we study the situation where there is no obstacle in the dynamic environment. Then, we consider the situation where there is an obstacle in the dynamic environment.
4.1 Three agents track a moving target

In this subsection, the agents, which are affected by the target and their neighbors, track a moving target. Some parameters of the simulation are given as follows: ξ_1 = 2, ξ_2 = 1, ρ_obs = 0.2, ρ_0 = 0.3, q_dij = 0.5, and r = 0.2. The position trajectories of the three agents and the target are given in Fig. 4.

Fig. 4 Trajectories of agents 1–3 and the target
As shown in Fig. 4, the agents track the target from point A to point C. The second agent catches up with the target at point C. The other agents also track the target while keeping a small distance from it; the reason for this is that each agent must avoid collision with the other agents.
4.2 Three agents track a moving target while avoiding a moving obstacle

We now consider the situation where the agents track a moving target while avoiding an obstacle in a dynamic environment. The obstacle is assumed to be circular with radius ρ_obs = 0.2 and influence threshold ρ_0 = 0.3. On the way to the target, if the sensors installed on the agents detect the obstacle, the agents regenerate a local trajectory to avoid it by using the algorithm in Section 3. First, an obstacle that moves horizontally in the plane is chosen, and the initial states of the obstacle are given as follows: q_obs = (2.5, 1.3)^T, v_obs = (−1.1, 0)^T.
As shown in Fig. 5, the agents move from point A to point D. At point B, an agent meets the moving obstacle, and the agent then moves quickly to escape the obstacle's influence. Finally, the agents catch up with the target at point D. The distances between agents are shown in Fig. 6 (a); there is no collision between agents (d_ij > 0). Because the objective of this paper is to accomplish the collision avoidance task rather than consensus, q_ij > 0 for any agent i means that the objective has already been achieved. Fig. 6 (b) shows the distances between the agents and the obstacle. It is obvious that there is no collision between the agents and the obstacle because the relative distances are all greater than zero.
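The collision checks behind Fig. 6 amount to monitoring pairwise distances along the simulated trajectories; a minimal Python sketch is given below (the names are illustrative, and the obstacle distance is measured to its centre).

```python
import numpy as np

def agent_pair_distances(q):
    """d_ij for every agent pair, as plotted in Fig. 6 (a); q is (n, 2)."""
    n = len(q)
    return {(i, j): float(np.linalg.norm(q[i] - q[j]))
            for i in range(n) for j in range(i + 1, n)}

def agent_obstacle_distances(q, q_obs):
    """Distance from each agent to the obstacle centre, as in Fig. 6 (b);
    subtracting rho_obs would give the clearance to the obstacle surface."""
    return [float(np.linalg.norm(qi - q_obs)) for qi in q]
```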
To further test the feasibility of the proposed approach, another obstacle is chosen, which moves vertically in the plane; the initial states of the obstacle are given as follows: q_obs = (2.5, 4.25)^T, v_obs = (0, −0.363)^T. Meanwhile, the target is virtual in this paper, so it does not need to avoid the obstacle. Fig. 7 shows the trajectories of the agents and the target. The distances between the agents are shown in Fig. 8 (a), and Fig. 8 (b) shows the distances between the agents and the obstacle. Similar to the results in Figs. 6 and 7, there is no collision between the agents and the obstacle.
Fig. 5 Trajectories of the three agents and the target when there is an obstacle that moves horizontally in the plane

Fig. 6 The distances. (a) The distances between agents (d12 denotes the distance between agent 1 and agent 2, d13 is for agent 1 and agent 3, and d23 is for agent 2 and agent 3); (b) The distances between the agents and the obstacle (d1 denotes the distance between agent 1 and the obstacle, d2 is for agent 2, and d3 is for agent 3)

5 Conclusion
Based on the potential function and behavior rules, a novel control approach is proposed for target tracking and obstacle avoidance of multiple agents in a dynamic environment. Proper potential functions concerning the target, obstacle, and agents are chosen to design the new distributed control algorithm. To avoid collision between the agents, the interactions between agents are also considered. Simulation results are provided to verify the effectiveness of the approach. Under the proposed method, multiple agents can track a moving target while avoiding collision with a moving obstacle and with one another.

The proposed approach is based on a cooperative strategy. In real life, however, some information cannot be acquired, for example in a competitive game. In the future, we will therefore use non-cooperative game theory to analyze the target tracking and obstacle avoidance problems for multi-agent systems.
Fig. 7 Trajectories of the three agents and the target when there is an obstacle that moves vertically in the plane
Fig. 8 The distances. (a) The distances between agents (d12 denotes the distance between agent 1 and agent 2, d13 is for agent 1 and agent 3, and d23 is for agent 2 and agent 3); (b) The distances between the agents and the obstacle (d1 denotes the distance between agent 1 and the obstacle, d2 is for agent 2, and d3 is for agent 3)

Acknowledgement

The authors thank the anonymous reviewers for their constructive comments and suggestions that improved the quality of this paper.

References

[1] H. S. Su, X. F. Wang, Z. L. Lin. Flocking of multi-agents with a virtual leader. IEEE Transactions on Automatic Control, vol. 54, no. 2, pp. 293–307, 2009.

[2] W. Li. Stability analysis of swarms with general topology. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, vol. 38, no. 4, pp. 1084–1097, 2008.

[3] W. H. Huang, B. R. Fajen, J. R. Fink, W. H. Warren. Visual navigation and obstacle avoidance using a steering potential function. Robotics and Autonomous Systems, vol. 54, no. 4, pp. 288–299, 2006.

[4] R. Olfati-Saber. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Transactions on Automatic Control, vol. 51, no. 3, pp. 401–420, 2006.

[5] L. Guzzella. Automobiles of the future and the role of automatic control in those systems. Annual Reviews in Control, vol. 33, no. 1, pp. 1–10, 2009.

[6] W. Kowalczyk, K. Kozlowski. Artificial potential based control for a large scale formation of mobile robots. In Proceedings of the 4th International Workshop on Robot Motion and Control, IEEE, pp. 285–291, 2004.

[7] G. V. Raffo, G. K. Gomes, J. E. Normey-Rico. A predictive controller for autonomous vehicle path tracking. IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 1, pp. 92–102, 2009.

[8] T. T. Yang, Z. Y. Liu, H. Chen, R. Pei. Formation control and obstacle avoidance for multiple mobile robots. Acta Automatica Sinica, vol. 34, no. 5, pp. 588–593, 2008.

[9] L. Huang. Velocity planning for a mobile agent to track a moving target – A potential field approach. Robotics and Autonomous Systems, vol. 57, no. 1, pp. 55–63, 2009.

[10] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, vol. 5, no. 1, pp. 90–98, 1986.

[11] C. Godsil, G. Royle. Algebraic Graph Theory, Graduate Texts in Mathematics, vol. 207, New York, USA: Springer-Verlag, 2001.

[12] D. Kolokotsa, A. Pouliezos, G. Stavrakakis, C. Lazos. Predictive control techniques for energy and indoor environmental quality management in buildings. Building and Environment, vol. 44, no. 9, pp. 1850–1863, 2009.

[13] Y. Yoon, J. Shin, H. J. Kim, Y. Park, S. Sastry. Model-predictive active steering and obstacle avoidance for autonomous ground vehicles. Control Engineering Practice, vol. 17, no. 7, pp. 741–750, 2009.

[14] P. Ögren. Formations and Obstacle Avoidance in Mobile Robot Control, Ph. D. dissertation, Royal Institute of Technology, Sweden, 2003.
Jing Yan received the B. Eng. degree in
automation from Henan University, PRC in
2008. He is currently a Ph. D. candidate in
control theory and control engineering at
Yanshan University, PRC.
His research interests include cooperative control of multi-agent systems and
wireless networks.
E-mail: [email protected] (Corresponding author)
Xin-Ping Guan received the M. Sc. degree in applied mathematics in 1991, and
the Ph. D. degree in electrical engineering in
1999, both from Harbin Institute of Technology, PRC. Since 1986, he has been at
Yanshan University, PRC, where he is currently a professor of control theory and control engineering. In 2007, he also joined
Shanghai Jiao Tong University, PRC.
His research interests include robust congestion control in communication networks, cooperative control
of multi-agent systems, and networked control systems.
E-mail: [email protected]; [email protected]
Fu-Xiao Tan received the B. Eng. degree in automation from Hefei University of
Technology, PRC in 1997, and the Ph. D.
degree in control theory and control engineering from Yanshan University, PRC in
2009. He is currently an associate professor
in Fuyang Teachers College, PRC.
His research interests include robust congestion control in communication networks,
cooperative control of multi-agent systems,
and networked control systems.
E-mail: [email protected]