Vision-based Leader-Follower Formations with Limited Information
Hyeun Jeong Min, Andrew Drenner, and Nikolaos Papanikolopoulos
Department of Computer Science and Engineering
University of Minnesota
Minneapolis, MN 55455
Email: {hjmin|drenner|npapas}@cs.umn.edu
Abstract— This paper presents a new vision-based leader-follower formation algorithm in which the leader's trajectory is unknown to the following robots. Formation schemes for straight-line and diagonal formations are introduced that are both stable and observable in the presence of limited views. The algorithms are novel in that they use only local image measurements from a pinhole camera to estimate the leader's position. This approach requires neither specialized markings nor extensive robot communication. The algorithms are also decentralized. We apply input-output feedback linearization for system stability and utilize an Extended Kalman Filter (EKF) for estimation. Simulations illustrate how the proposed formation controls work. Real experiments utilizing multiple miniature robots are also presented and illustrate the challenges associated with noisy images in real-world applications.
I. INTRODUCTION
In order to increase the autonomy of robots, improved sensing and algorithms for processing the sensory data are required. In our work, visual data is used to generate formations based upon identifying robots. Leader-follower formations are created and maintained through multi-robot tracking, in which each robot follows another robot, considered its leader, along a similar trajectory. The visual data also assists in estimating the spatial relationship between the leader and the follower. Ideally, all of the available visual data and estimated positions would be shared in a centralized fashion so that every member of the formation has as much information as possible. However, this is not always feasible for a number of reasons (bandwidth, processing, etc.). Thus, each member of the formation must be able to act autonomously in a distributed fashion. Lawton et al. introduced a decentralized formation control in [1]. The work in [2] applies the decentralized control approach to formations of unmanned aerial vehicles.
Typical research into leader-follower formations assumes that the kinematic model is fully known for all the robots in the formation. In this work the formation algorithm assumes no known linear and angular velocities in the leader's kinematic model. Instead, we consider only the estimated relative position of the leader with respect to the follower. This choice was made because it is not feasible to estimate the velocity of the leader from the forward-facing camera mounted on the follower without specialized markings on the leader or implicit communication of the leader's intention. Assuming fully known or estimated kinematic models for formation control results in reliable formations. However, it may be very expensive, or simply impossible, to acquire the required models in real-world applications.
Most research relies on an omnidirectional camera, as in [3], to supply the controller with an estimate of the leader's velocity. Decentralized visual servoing by feedback linearization was described in [4], but the authors used a panoramic camera and relied on optical flow. A leader-follower linear controller was described in [5]; it included both the leader's linear and angular velocities, which are generally unknown, although the authors were able to estimate them through communication. A cooperative control framework was built from a simple controller and estimator in [6], and its central control law was also based on input-output feedback linearization. In [7], a method was proposed to observe the centroid of the robot by feedback control via dynamic extension. However, the system developed assumed that the whole kinematic model was known for the leader-follower formation, including the heading orientation of the leader.
While mid-sized robots like the Pioneer are utilized in pertinent work, we focus on small robots. With the current state of the art, a panoramic camera may not fit on miniature robots because of its size. However, miniature robots are very useful for various missions in real-world applications, and a miniature camera, such as a monocular one, is required for these robots. Using a monocular camera without proper assumptions does not allow an estimation of the leader's velocity, but such a camera fits all kinds of robots, especially smaller ones, and its integration requires minimal effort. In [8], a navigation function for stable formation is proposed and shown to converge globally, but the work was restricted to a point-world simulation. Decentralized stabilization of formations was presented in [9]. The authors used pan-controlled cameras like ours; however, they used fiducial marks to recognize the orientation of the leader and included the leader's velocity in their control formulation. Lemay et al. described a method for position assignment in a formation for a group of robots using Breadth First Search (BFS) in [10]. Other formation control schemes based on creating spatially self-aggregating shapes were introduced in [11] and validated with simulations. Leader-to-formation stability was investigated by Tanner et al. in [12].
This paper presents a new formation algorithm that maintains a stable formation without relying on unrealistic assumptions. Our assumptions are: i) a single monocular camera is mounted on each robot, ii) the robots do not communicate with each other, and iii) there are no pre-specified marks for recognizing the leader. In the
line formation the follower tries to keep a specified distance and bearing from the leader. In the diagonal formation the robot moves diagonally, keeping a specified angle from the leader. The diagonal formation in our work is defined as a follower moving diagonally at a certain, variable angle from the measured relative bearing to the leader, as shown in Fig. 1. Changing the follower's position diagonally with respect to the leader allows the follower to estimate the leader's velocity, unlike in the line formation case.
II. LEADER-FOLLOWER FORMATIONS
While most vision-based formations assume that the leader's velocities are known through some methodology, we assume that the leader's linear and angular velocities are unknown. In this section, we introduce the line formation, which does not require knowing the leader's velocities. We also present the diagonal formation, which provides more information about the leader than the line formation does.
A. Line Formation Control
[Fig. 1: Description of the movement of the follower in the diagonal formation. The blue leader moves randomly and the red follower keeps the specified distance ($l_d$) and angle ($\gamma$) at each time step. The distance $l$ and the bearing $\alpha$ are computed through the image measurements.]
The follower maintains the desired distance and bearing by applying input-output feedback linearization, a well-known methodology in control theory. We also discuss the visibility issues for a monocular camera and the estimation of the leader's velocity. Without knowing or estimating the linear and angular velocities of the leader, the formation controls may result in missing the leader at the next time step. Visibility is related to the velocity of the leader and the available frame rate of the follower's camera. In order to localize the moving leader, we utilize a bounding box extracted from consecutive frames acquired by a camera mounted on the follower. The specific algorithms for the image measurement of the leader are presented in [13]. The relative distance ($l$) and the bearing ($\alpha$) are computed from the x-projection, the relative depth, and the centroid of the leader in each image. The algorithms have several novel aspects which make them advantageous over other similar approaches:
• Control with a monocular camera - The main issue in formation control is how to know or estimate the leader's velocity. Using a monocular camera, without depending on special markings, is very challenging when the objective is to estimate the leader's state.
• No Special Marks or Communication - We only use the image measurements from each follower to estimate the leader's position.
• Decentralized Control - Rather than relying on static organizations of formations, each team member may at times be a leader, a follower, or both. Each robot is controlled using its own local information.
Let us suppose that $L$ is the leader and $F$ is the follower, as shown in Fig. 2. Through the image measurements, the follower can measure the relative distance ($l$) and angle ($\alpha$) between the two robots. The follower's kinematics are $\dot{x}_F = \nu_F \cos\theta_F$, $\dot{y}_F = \nu_F \sin\theta_F$, and $\dot{\theta}_F = \omega_F$, where $x_F$, $y_F$, and $\theta_F$ represent the pose of the follower, and $\nu_F$ and $\omega_F$ are the linear and angular velocities of the follower robot, respectively. Using the image measurements ($l$ and $\alpha$), the follower can estimate the leader's position as in Eq. (1). Since we assume that there is no communication and that the formation is decentralized, each follower is controlled using its own data. Thus, we only deal with formations in a pair-wise fashion:
$$\begin{bmatrix} x_L \\ y_L \end{bmatrix} = \begin{bmatrix} \cos\theta_F & -\sin\theta_F \\ \sin\theta_F & \cos\theta_F \end{bmatrix} \begin{bmatrix} X_r \\ Y_r \end{bmatrix} + \begin{bmatrix} x_F \\ y_F \end{bmatrix}. \qquad (1)$$
Here $X_r$ and $Y_r$ denote the position of the leader in the coordinate frame of the follower. The leader's position in the follower's frame is represented by $X_r = l\cos\alpha$ and $Y_r = l\sin\alpha$, where $\alpha = \frac{n}{2} - x_i$, $n$ is the number of pixel columns in the image, $l$ is the depth, and $x_i$ is the center x-position of the leader in the image. We then compute the derivatives of the leader's position as follows:
$$\dot{x}_L \simeq \frac{1}{\Delta t}\left(x_L(t) - x_L(t-1)\right) \qquad (2)$$
$$\dot{y}_L \simeq \frac{1}{\Delta t}\left(y_L(t) - y_L(t-1)\right). \qquad (3)$$
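For concreteness, the measurement-to-state computation in Eqs. (1)-(3) can be sketched in Python with NumPy as follows. This is a minimal illustration rather than the original implementation; the function names and argument conventions are ours.

```python
import numpy as np

def leader_position(l, alpha, follower_pose):
    """Eq. (1): rotate the camera-frame offset (X_r, Y_r) into the
    world frame and translate by the follower position."""
    x_F, y_F, theta_F = follower_pose
    X_r, Y_r = l * np.cos(alpha), l * np.sin(alpha)
    c, s = np.cos(theta_F), np.sin(theta_F)
    return (c * X_r - s * Y_r + x_F,
            s * X_r + c * Y_r + y_F)

def leader_velocity(p_now, p_prev, dt):
    """Eqs. (2)-(3): finite-difference estimate of the leader velocity."""
    return ((p_now[0] - p_prev[0]) / dt,
            (p_now[1] - p_prev[1]) / dt)
```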
For the line formation each follower controls its linear
and angular velocities to achieve the desired distance and
the desired bearing with respect to the leader. The output of the closed-loop control system is defined as $y_l = [l \;\; \alpha]^T$, where $l = \sqrt{(x_L - x_F)^2 + (y_L - y_F)^2}$ and $\alpha = \arctan(y_L - y_F,\; x_L - x_F) - \theta_F$. We now derive $\dot{y}_l$ from the equations for $l$ and $\alpha$ as follows:
$$\dot{y}_l = G_1 u_l + r_l, \qquad (4)$$
where
$$G_1 = \begin{bmatrix} -\cos\alpha & 0 \\ \frac{\sin\alpha}{l} & -1 \end{bmatrix}, \qquad u_l = \begin{bmatrix} \nu_F \\ \omega_F \end{bmatrix},$$
$$r_l = \begin{bmatrix} \dot{x}_L\cos(\theta_F + \alpha) + \dot{y}_L\sin(\theta_F + \alpha) \\ \frac{1}{l}\left(\dot{y}_L\cos(\theta_F + \alpha) - \dot{x}_L\sin(\theta_F + \alpha)\right) \end{bmatrix}.$$
By applying input-output feedback linearization, the control velocities for the follower are given by
$$u_l = G_1^{-1}(y_a - r_l), \qquad (5)$$
where $y_a$ is an auxiliary control input given by
$$y_a = \begin{bmatrix} k_1(l_d - l) \\ k_2(\alpha_d - \alpha) \end{bmatrix}. \qquad (6)$$

[Fig. 2: The changing information for the distance and angle as time changes. (a) Leader-follower at time t. (b) Leader-follower at time t + 1.]
Here, $k_1, k_2 > 0$ are the user-selected controller gains, and $l_d$ and $\alpha_d$ are the desired constant distance and angle between the leader and follower. The applied input-output feedback linearization guarantees that the output $y_l$ converges to the desired value $[l_d \;\; \alpha_d]^T$. The closed-loop system is stable under the assumption $|\alpha| < \sigma$, where $\sigma$ is the angle of view of the camera mounted on the robot.
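A minimal sketch of the control law in Eqs. (4)-(6), again in Python/NumPy and under our own naming conventions, is given below. Note that $G_1$ becomes singular when $\cos\alpha = 0$, consistent with the visibility assumption $|\alpha| < \sigma$.

```python
import numpy as np

def line_formation_control(l, alpha, theta_F, leader_vel,
                           l_d, alpha_d, k1=1.0, k2=1.0):
    """Eqs. (4)-(6): input-output feedback linearization for the line
    formation; leader_vel = (xdot_L, ydot_L) from Eqs. (2)-(3)."""
    xdot_L, ydot_L = leader_vel
    G1 = np.array([[-np.cos(alpha), 0.0],
                   [np.sin(alpha) / l, -1.0]])
    r_l = np.array([
        xdot_L * np.cos(theta_F + alpha) + ydot_L * np.sin(theta_F + alpha),
        (ydot_L * np.cos(theta_F + alpha)
         - xdot_L * np.sin(theta_F + alpha)) / l,
    ])
    y_a = np.array([k1 * (l_d - l), k2 * (alpha_d - alpha)])   # Eq. (6)
    nu_F, omega_F = np.linalg.solve(G1, y_a - r_l)             # Eq. (5)
    return nu_F, omega_F
```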
B. Diagonal Formation Control
In this section we consider the diagonal formation control, in which a follower robot attempts to maintain a certain angle relative to the bearing to the leader robot. To do this, the follower is controlled by varying two angular velocities at each time step. The angle, represented by $\gamma$ in Fig. 3, can be chosen initially; the further the follower is to be located diagonally from the leader, the larger the angle that needs to be selected. As described in Subsection II-A, the vision-based line formation is stable, but the follower may not correctly estimate the leader's velocity. In the leader-follower diagonal formation, the closed-loop system is also stable, as long as the follower robot can estimate the leader's velocity. The estimation of the leader's velocity is accomplished by changing the robot's position in a diagonal manner and aligning it with respect to the leader. The estimation of the leader's velocity and heading orientation is described in detail in Section III.
Instead of keeping a leader centered in the view of the
follower, followers stay at a pre-selected angle (γ) from a
leader as shown in Fig. 3. The image measurements from the
follower are the relative distance l and the relative bearing α.
For the diagonal formation case, we utilize the pre-specified information: the desired distance ($l_d$) of the follower from the diagonally chosen position, and the user-selected angle ($\gamma$) for placing the follower diagonally to the leader. From the image measurements ($l$ and $\alpha$) and the pre-specified information ($l_d$ and $\gamma$), the new position $(x_{F'}, y_{F'})$ of $F'$ can be computed. The follower's new global orientation to be lined up with is represented by $\theta_{F'}$. Since we know the gradient, $\tan\beta$, of the line passing through the points $L$ and $F'$, we are able to find the line equation. Also, we know the desired distance $l_d$. Using the aforementioned information, we can place a follower at the desired position $F'$ as defined in Eq. (7) and Eq. (8), and at the desired orientation $\theta_{F'}$ as in Eq. (9):
$$x_{F'}(t) = x_L(t) \pm l_d\,|\cos\beta(t)| \qquad (7)$$
$$y_{F'}(t) = y_L(t) \pm l_d\tan\beta(t)\,|\cos\beta(t)| \qquad (8)$$
$$\theta_{F'}(t) = \arctan\left(y_L(t) - y_F(t),\; x_L(t) - x_F(t)\right), \qquad (9)$$
where $\beta(t) = \theta_F(t) + \alpha(t) + \gamma$. In Eqs. (7) and (8), $\pm$ is used to address the sign issues of these equations.

[Fig. 3: Description of the diagonal formation of two robots.]
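The placement of the virtual goal $F'$ in Eqs. (7)-(9) can be sketched as follows; the choice of signs in Eqs. (7) and (8) is resolved here by a fixed branch, whereas the paper selects it per configuration, so that branch is an illustrative assumption.

```python
import numpy as np

def diagonal_goal_pose(leader_xy, follower_pose, alpha, l_d, gamma):
    """Eqs. (7)-(9): virtual goal F' at distance l_d along the line
    through L with slope tan(beta), beta = theta_F + alpha + gamma."""
    x_L, y_L = leader_xy
    x_F, y_F, theta_F = follower_pose
    beta = theta_F + alpha + gamma
    # Fixed '-' branch of the +/- in Eqs. (7)-(8); the paper resolves
    # the sign per configuration, so this branch is an assumption.
    x_Fp = x_L - l_d * abs(np.cos(beta))                       # Eq. (7)
    y_Fp = y_L - l_d * np.tan(beta) * abs(np.cos(beta))        # Eq. (8)
    theta_Fp = np.arctan2(y_L - y_F, x_L - x_F)                # Eq. (9)
    return x_Fp, y_Fp, theta_Fp
```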
Now we discuss a control law for the diagonal formation. In this case the follower needs to move to the newly computed pose $[x_{F'} \;\; y_{F'} \;\; \theta_{F'}]^T$. We use the same kinematic model for the follower as in the line formation case. The output of this control system is defined as $y_d = [d \;\; \eta]^T$, where $d = \sqrt{(x_{F'} - x_F)^2 + (y_{F'} - y_F)^2}$ and $\eta = \arctan(y_{F'} - y_F,\; x_{F'} - x_F)$. The geometric view of $d$ and $\eta$ is also shown in Fig. 3. The derivative of $y_d$ is then
$$\dot{y}_d = G_2 u_d, \qquad (10)$$
where
$$G_2 = \begin{bmatrix} -\cos(\eta - \theta_F) & 0 \\ \frac{\sin(\eta - \theta_F)}{d} & -1 \end{bmatrix}, \qquad u_d = \begin{bmatrix} \nu_F^{(1)} \\ \omega_F^{(1)} \end{bmatrix}.$$
Since $F'$ is ideally defined and does not move, we only have an input vector $u_d$ for the follower, differing from Eq. (4).
In Eq. (10), $|\eta - \theta_F| < \sigma - \delta < \sigma$, since $\alpha = \eta + \delta - \theta_F$ and $0 < \delta < \frac{\pi}{2}$. Hence, the matrix $G_2$ is also nonsingular. By applying input-output feedback linearization, the control velocities for the follower are given by
$$u_d = G_2^{-1} y_b, \qquad (11)$$
where $y_b$ is an auxiliary control input given by
$$y_b = \begin{bmatrix} k_3(0 - d) \\ k_4(\theta_F - \eta) \end{bmatrix}. \qquad (12)$$
As in Eq. (6), $k_3$ and $k_4$ are the user-selected controller gains. This also guarantees that the output $y_d$ converges to the desired position $[x_{F'} \;\; y_{F'}]^T$. To be lined up with the desired heading direction $\theta_{F'}$, we need a third control input for the diagonal formation, defined as
$$\omega_F^{(2)} = \frac{1}{\Delta t}\left(\alpha - \omega_F^{(1)}\Delta t\right).$$
The closed-loop diagonal formation system is still stable, since the matrix $G_2$ is invertible. The new relative bearing ($\alpha'$) between $L(t+1)$ and $F'$ can be observed within the angle of view of the follower's camera.
C. EKF Estimator
To estimate the global state, we use the EKF, one of the most well-known algorithms for stochastic estimation from noisy sensor measurements. The method assumes accurate measurement models and Gaussian noise for the noisy sensors. The state for the estimation consists of the position and velocities of the leader and the pose of the follower, and is represented by $s = [x_F \;\; y_F \;\; \theta_F \;\; x_L \;\; y_L \;\; \dot{x}_L \;\; \dot{y}_L]^T$.
The EKF propagation is shown in Eqs. (13) and (14), where $\tilde{s}$ is the error of the state and $P$ is the covariance matrix:
$$\tilde{s}_{k+1|k} = \Phi_k \tilde{s}_{k|k} + G_k e \qquad (13)$$
$$P_{k+1|k} = \Phi_k P_{k|k} \Phi_k^T + G_k Q_{k|k} G_k^T. \qquad (14)$$
In the propagation, $e \sim N(0, Q_k)$ and $\Phi_k$ is represented as follows:
$$\Phi_k = \begin{bmatrix}
1 & 0 & \nu_F\,\delta t_2 \sin\theta_F & 0 & 0 & 0 & 0 \\
0 & 1 & -\nu_F\,\delta t_2 \cos\theta_F & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & \delta t_1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & \delta t_1 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}.$$
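A sketch of the propagation step of Eqs. (13) and (14) is shown below. The state ordering and the entries of $\Phi_k$ follow the reconstruction above, which carries some uncertainty from the source layout; the noise Jacobian $G$ and covariance $Q$ are taken as given inputs.

```python
import numpy as np

def ekf_propagate(s, P, nu_F, G, Q, dt1, dt2):
    """Eqs. (13)-(14): propagate the error state and covariance.
    Assumed ordering: s = [x_F, y_F, theta_F, x_L, y_L, xdot_L, ydot_L];
    the Phi entries follow the reconstruction in the text."""
    theta_F = s[2]
    Phi = np.eye(7)
    Phi[0, 2] = nu_F * dt2 * np.sin(theta_F)    # heading-error coupling
    Phi[1, 2] = -nu_F * dt2 * np.cos(theta_F)   # into the position error
    Phi[3, 5] = dt1                             # x_L integrates xdot_L
    Phi[4, 6] = dt1                             # y_L integrates ydot_L
    s_pred = Phi @ s                            # Eq. (13), zero-mean noise
    P_pred = Phi @ P @ Phi.T + G @ Q @ G.T      # Eq. (14)
    return s_pred, P_pred
```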
[Fig. 4: The red robot and the blue robot show the follower and the leader at times t and t + 1, respectively, for (a) ω_L = 0, (b) ω_L > 0, and (c) ω_L < 0. The square box below each panel represents the image plane grabbed from the camera mounted on the follower robot in each case.]
The EKF update requires the measurement model, which includes the distance and the relative bearing between the two robots, $l$ and $\alpha$, as described in Subsection II-A.
III. VISIBILITY AND LEADER ESTIMATION
Since the follower has no information about the heading direction or the velocities of the leader, the leader's position in the following images may not be predictable. Fig. 4 illustrates the different poses of the leader with respect to the follower as the leader moves in the diagonal formation. In this figure the follower is controlled by the control laws described by Eq. (5) and Eq. (11). After the leader moves, the follower detects the leader's movement at time t + 1. As long as the time difference ∆t is small enough to guarantee the detection of the leader's motion (i.e., the visibility inequality below holds), the leader remains in the viewing angle in all cases.
Obviously $G_1$ and $G_2$, defined in Subsections II-A and II-B, are invertible, which implies that these two formations are stable and observable. Visibility from a pinhole camera requires the following condition:
$$|\alpha'| = \left|-\theta_{F'} + \arctan(y_{F'} - y_L,\; x_{F'} - x_L)\right| < \frac{\sigma}{2}, \;\text{or} \qquad (15)$$
$$|\tan\alpha'| = \left|\frac{\nu_L \Delta t \,\sin(\theta_{F'} - \omega_L \Delta t)}{l_d + \nu_L \Delta t \,\cos(\theta_{F'} - \omega_L \Delta t)}\right| < \left|\tan\frac{\sigma}{2}\right|, \qquad (16)$$
where $\alpha'$ is the relative bearing between $L(t+1)$ and $F(t+1)$, $\sigma$ is the maximum available angle of view of the camera involved (in our case, and for most monocular cameras, $\sigma \approx \frac{\pi}{2}$), and $L'$ is the leader's position at time $t+1$. As we can see in Eq. (16), the desired distance $l_d$ is inversely proportional to the angle of view.
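The visibility condition of Eq. (16) reduces to a simple predicate; a minimal sketch follows (our function name, with $\sigma$ defaulting to $\pi/2$ as in the text).

```python
import numpy as np

def leader_visible(nu_L, omega_L, theta_Fp, l_d, dt, sigma=np.pi / 2):
    """Eq. (16): does the leader's predicted bearing stay within the
    camera's half angle of view after one step of duration dt?"""
    num = nu_L * dt * np.sin(theta_Fp - omega_L * dt)
    den = l_d + nu_L * dt * np.cos(theta_Fp - omega_L * dt)
    return abs(num / den) < abs(np.tan(sigma / 2.0))
```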
[Fig. 5: Description of the image measurements from the follower for (a) the line formation and (b) the diagonal formation. These show the leader's heading at each pertinent step.]
The proposed formation controls are closely tied to the velocities of the leader because of the visibility issues. Nevertheless, these issues rarely impede the application of our methods in real-world environments compared to other relevant formation control schemes. The line formation in this paper allows fast control of the follower, while the diagonal formation provides more reliable estimation of the leader. In the diagonal formation the third control input guarantees that the follower lines up with the leader, while in the line formation its movement compensates for the relative bearing of the leader. As shown in Fig. 5, the diagonal formation can estimate the heading direction of the leader by measuring the relative bearing $\alpha$. The leader's heading direction is simply categorized in three ways: left, center, and right. The heading
direction is predicted as follows:
$$H(t) = \begin{cases} L & \text{if } \alpha_t < -T \\ C & \text{if } -T < \alpha_t < T \\ R & \text{if } \alpha_t > T, \end{cases}$$
where L, C, R, and T are left, center, right, and the threshold
angle, respectively. We can also estimate the linear and
angular velocity of the leader as follows:
$$\nu_L = \frac{\sqrt{l_d^2 + l^2 - 2 l_d\, l \cos\alpha}}{\Delta t} \qquad (17)$$
$$\omega_L = \frac{1}{\Delta t}\left\{\theta_{F'} - \arctan(l\sin\alpha,\; l\cos\alpha - l_d)\right\}. \qquad (18)$$
IV. EXPERIMENTAL RESULTS
We present simulation results to show how the line and diagonal formation controls work. For the real-world experiments, we used multiple Explorers, each equipped with a camera, an analog video transmitter, Bluetooth hardware for communicating commands and sensor readings, and differentially driven wheels. Fig. 6 shows the three Explorer robots used in the experiments, which were developed in the Center for Distributed Robotics at the University of Minnesota.
[Fig. 7: Simulation results for the line and diagonal formation controls for two robots: (a) trajectories for the line formation, (b) velocities for the follower, (c) trajectories for the diagonal formation, (d) velocities for the follower. In the trajectories the leader and the follower are shown as red circles and blue squares, respectively. The cyan dashed triangles and squares show the visibility of the follower and the expected F′ position.]
[Fig. 6: Three Explorer robots used for the leader-follower formations.]
A. Simulation Results
The simulations were executed in MATLAB, with the leader having constant linear and angular velocities. Fig. 7 shows the results for the case of one leader and one follower, and Fig. 8 shows the results for the case of two followers. The first red circle represents the leader, and each follower recognizes the robot in front of it as its leader. In the simulation the error rate for the image detection was on average 10%, and the maximum velocity for the follower was 0.08 m/s. The linear velocity of the robots is limited in order to reduce noise associated with the image capture process.
As we can see in the simulation results for the line formation in Figs. 7 and 8, the followers could keep the same path as the leader. The diagonal paths in the simulation results in Figs. 7 and 8 vary depending upon the relative bearing between the leader and the follower. As in Fig. 3, the angle between the leader and the generated position for the diagonal formation was determined by Eq. (9), where the angle is the sum of the follower's angle, the relative bearing between the leader and the follower, and the constant angle for the formation. The varying positions of the followers were caused by the different frame rates of the cameras. In the diagonal formation, the constant angle for the formation, γ, was set to π/4 in Figs. 7(c) and 8(b). The desired distance from the new follower's position was l_d = 0.14 m.
B. Results with Real Robots

In real-world experiments we need accurate image measurements to estimate the position of the leader. The estimation of the leader's state is performed by the EKF as described in Subsection II-C. Line formation control was run on three Explorers,
and Fig. 9 shows the resulting trajectory estimated by the followers, together with the linear and angular velocities selected by the proposed control law. In the experiments the leader had constant linear and angular velocities of 0.013 m/s and −0.02 rad/s, but the 3rd Explorer had to follow the leader with nonlinear motions. The second Explorer, the follower of the first Explorer, kept moving and stopping continuously, since its motion is governed by the controller and it stops to estimate the leader's next state. The frame rate of the followers' cameras was about 0.5 frames/s, as all three Explorers were required to share a single frequency for video transmission, resulting in delays from turning the transmitters on and off.
For the diagonal formation control, the leader had constant linear and angular velocities of 0.011 m/s and 0.02 rad/s. The estimated trajectories of the robots and the selected linear and two angular velocities of the 2nd follower are shown in Fig. 10. The followers could keep the leader in the camera view, which is a challenging problem from an observability perspective. As we can see from the simulation results, the followers can follow the leader along the exact same trajectory if we assume that the follower can acquire accurate image
measurements. The error rate for the formation paths was based on the leader extraction errors as computed on the image plane, which is very challenging especially without special marks.

[Fig. 8: Simulation results showing the trajectories for the three-robot formations: (a) trajectory for a line formation, (b) trajectory for a diagonal formation. The leader is shown by red circles while the followers are denoted by blue squares and green hexagons, respectively. The green and yellow dashed triangles represent the camera's angle of view of each follower.]

[Fig. 9: Results for the line formation control using three Explorers: (a) trajectory, (b) velocity. These show the estimated global position of the robots from each follower. The leader, the 1st follower, and the 2nd follower are represented by the blue circles, the red squares, and the cyan diamonds, respectively. Panel (b) shows the chosen linear and angular velocities for the 2nd follower. The distance and the linear and angular velocities are given in meters, m/s, and rad/s, respectively.]

[Fig. 10: Results for the diagonal formation control using three Explorers: (a) trajectory, (b) velocity. These show the global positions of the leader (blue circles), the first follower (red squares), and the second follower (cyan diamonds), as estimated by the followers. Panel (b) shows the chosen velocities for the second follower.]
V. CONCLUSIONS AND FUTURE WORK

In this paper, we presented vision-based line and diagonal formation strategies for multi-robot systems. We introduced new formulations which do not rely on estimated linear and angular velocities of the leader, and showed that the proposed formation controllers are stable and provide good target observability. Differing from other related research on leader-follower formations, the proposed controllers utilize the followers' cameras, which point at the back side of the leader robot, to provide the otherwise missing leader heading information. The formations in our work do not require communication between robots. Future work will extend and improve the pose estimation for the leader.
VI. ACKNOWLEDGEMENTS

This material is based upon work supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract number #911NF-08-1-0463 (Proposal 55111-CI), and by the National Science Foundation through grants #IIS-0219863, #CNS-0224363, #CNS-0324864, #CNS-0420836, #IIP-0443945, #IIP-0726109, and #CNS-0708344.
REFERENCES
[1] J. R. T. Lawton, R. W. Beard, and B. J. Young, “A decentralized
approach to formation maneuvers,” IEEE Transactions on Robotics
and Automation, pp. 933–941, 2003.
[2] D. M. Stipanović, G. Inalhan, R. Teo, and C. J. Tomlin, "Decentralized overlapping control of a formation of unmanned aerial vehicles," in IEEE Conference on Decision and Control, 2002, pp. 2829–2835.
[3] L. Consolini, F. Morbidi, D. Prattichizzo, and M. Tosques, "Leader-follower formation control of nonholonomic mobile robots with input constraints," Automatica, pp. 1343–1349, 2008.
[4] R. Vidal, O. Shakernia, and S. Sastry, "Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation," in IEEE Int. Conf. on Robotics and Automation, 2003, pp. 584–589.
[5] N. Cowan, O. Shakernia, R. Vidal, and S. Sastry, "Vision-based follow-the-leader," in IEEE/RSJ IROS, 2003, pp. 27–31.
[6] A. K. Das, R. Fierro, V. Kumar, J. P. Ostrowski, J. Spletzer, and
C. J. Taylor, “A vision-based formation control framework,” IEEE
Transactions on Robotics and Automation, pp. 813–825, 2002.
[7] G. L. Mariottini, F. Morbidi, D. Prattichizzo, G. J. Pappas, and
K. Daniilidis, “Leader-follower formations: Uncalibrated vision-based
localization and control,” in IEEE ICRA, 2007, pp. 2403–2408.
[8] H. G. Tanner and A. Kumar, “Towards decentralization of multi-robot
navigation functions,” in IEEE Int. Conf. on Robotics and Automation,
2005, pp. 4132–4137.
[9] O. A. Orqueda and R. Fierro, “Robust vision-based nonlinear formation control,” in American Control Conference, 2006, pp. 6–11.
[10] M. Lemay, F. Michaud, D. Létourneau, and J.-M. Valin, "Autonomous initialization of robot formations," in IEEE Int. Conf. on Robotics and Automation, 2004, pp. 3018–3023.
[11] J. Cheng, W. Cheng, and R. Nagpal, “Robust and self-repairing
formation control for swarms of mobile agents,” in The Twentieth
National Conf. on Artificial Intelligence, 2005, pp. 59–64.
[12] H. G. Tanner, G. J. Pappas, and V. Kumar, “Leader-to-formation
stability,” IEEE Trans. on Robotics and Automation, pp. 443–455,
2004.
[13] H. J. Min, A. Drenner, and N. Papanikolopoulos, “Visual tracking
for teams of miniature robots,” in 11th International Symposium on
Experimental Robotics, 2008.