Task Specific Motion Control of
Omnidirectional Robots
João V. Messias and Pedro U. Lima
Instituto de Sistemas e Robótica
Instituto Superior Técnico
Av. Rovisco Pais 1, 1049-001 Lisbon, Portugal
{jmessias, pal}@isr.ist.utl.pt
Abstract—This work presents a framework for motion control of an omnidirectional robot that can be applied, for example, in the robotic soccer environment. The basic problems of posture stabilization and tracking of holonomic robots are considered, in order to address the most common tasks of these robots. More specific tasks require the interaction
between the robot and a moveable object in its environment:
in a robotic soccer setting, two such tasks are those of
intercepting a freely rolling ball and transporting the soccer
ball. Solutions are presented to each of these problems,
designed specifically for holonomic robots. Ball interception is
achieved through a solution that combines the concepts of
trajectory-tracking and proportional navigation. Moving the
ball is accomplished through a control scheme that determines
at each instant the necessary force that must be applied to the
ball, and then uses it as a reference for a hybrid force/position
control law that drives the robot. Finally, an obstacle
avoidance solution is also presented that can be readily
adapted for each task. These solutions are applied to the
ISocRob robotic soccer team.
Index Terms—Motion control; Moving object
interception; Nonlinear systems; Object manipulation;
Obstacle avoidance; Robotic soccer.
I. INTRODUCTION
For the correct performance of its tasks, some form of guidance must be applied to the motion of a mobile robot. Motion control concerns the operations that must be performed in order to obtain appropriate control actions for the robot's required movements; these actions are then converted into suitable signals for the robot's actuators. The control techniques that are applicable to a
specific robotic system are largely dependent on the robot’s
type of locomotion and actuator configuration. Though the
most common tasks that require the robot to reach a specific
location, or follow a path (or trajectory) have been studied
intensively and are well documented in the literature (e.g.
[1],[2],[3]) for both holonomic and non-holonomic robots,
more specific tasks, such as those that involve the
interaction of a mobile robot and a moveable object, have
only been addressed, for the most part, in very particular
environments. One such environment where these tasks are
especially significant is the robotic soccer environment, in
which the ability of a soccer robot to interact efficiently
with the soccer ball is critical to the success of its team.
Several approaches to the control of non-holonomic
(differential-drive) robots in this environment were
developed by the ISocRob team (the IST/ISR robotic soccer
team), addressing tasks such as achieving a specific position
in the field, avoiding any obstacles along the way [4], and
dribbling the ball whilst taking into account the physical
restrictions inherent to that process [5]. However, most
Middle-Sized League robotic soccer teams, the ISocRob
team among them, now use omnidirectional robots, which
have greater mobility, and therefore allow for more efficient
solutions to tasks such as intercepting a freely rolling ball or
transporting the ball to a specific location. When applied to
holonomic robots, the previously developed motion control
techniques lose their efficiency, since they fail to make use
of the omnidirectional capabilities of the robots.
The objective of this work is then to obtain new motion
control methods, with which to achieve the tasks that an
omnidirectional robot may be faced with, while making full use of its holonomic capabilities. This includes not only
the basic tasks of moving the robot in an unrestricted
manner across its environment, but also the more specific
tasks that involve object interaction. This is achieved by
using well known results of control theory to describe
rigorous solutions to the most common tasks, the
performance of which can be readily analyzed, and
expanding existing works within the domain of robotics to
allow holonomic mobile robots to solve more particular
tasks that involve object interaction. The domain of application of these control techniques includes, but is not limited to, the robotic soccer environment.
II. BASIC MOTION CONTROL
The most common tasks that an omnidirectional mobile
robot is required to perform, that do not involve explicit
interaction with any objects in the robot's environment,
fall under the classification of either posture stabilization or
posture tracking problems. In the former, the robot is
required to reach and maintain a target posture in its
environment, and in the latter, the robot is required to continuously follow a reference trajectory.
It is well known that, in the particular case of
omnidirectional robots, the problem of controlling the
robot’s posture can be solved independently for its position
and its orientation [3],[1]. By doing so, the nonlinearities
present in the robot’s kinematic and dynamic models are
more easily tractable since they only relate to its position
components. The control for orientation is then relatively
straightforward through the normal tools for linear systems
analysis. The above problems can then be further divided
into those of point stabilization, point tracking, and
orientation control for both static and moving references.
Both the point stabilization and point tracking problems
may be solved, for omnidirectional robots, by static state
feedback linearizing control. The advantage of this method
over other possible approaches such as Lyapunov-based
methods (e.g. [3]), is that the behavior of the linearized
system can be modified at will and readily analyzed. The
main drawback to this method is that the kinematic and
dynamic models of the robot are assumed to be exact,
which may introduce systematic errors into the closed-loop
system. In the case of simple and well-known systems, such
as holonomic robots, this is admissible.
Consider the position kinematic model of the robot:
$$\dot{\mathbf{p}}_r = B(\theta_r)\,\mathbf{v} \qquad (1)$$

where $\mathbf{p}_r = [x \;\; y]^T$ is the position of the robot in the inertial world frame, $\theta_r$ is its orientation, $\mathbf{v} = [v_x \;\; v_y]^T$ is its velocity in a reference frame centered on the robot's chassis (hereafter referred to as the robot frame), and $B(\theta_r)$ acts as a rotation matrix between these two frames. The goal of point stabilization is then to find a control law $\mathbf{v}$ such that, for a given goal position $\mathbf{p}_g$, the closed-loop system is asymptotically stable, i.e. $\lim_{t\to\infty}\left(\mathbf{p}_r(t) - \mathbf{p}_g\right) = 0$. It can be readily shown [1] that a feedback linearizing control law for this model that solves the point stabilization problem is:

$$\mathbf{v} = B^{-1}(\theta_r)\,A\,(\mathbf{p}_r - \mathbf{p}_g) \qquad (2)$$

where $A$ is a Hurwitz matrix that describes the location of the linearized system's poles.
For point tracking, the problem is also to guarantee the asymptotic stability of the closed-loop system, but in this case the reference position $\mathbf{p}_g(t)$ is time-varying. Considering also that the robot may be controlled through its acceleration $\mathbf{a} = \dot{\mathbf{v}}$, a feedback linearizing control law for the point tracking problem is:

$$\mathbf{a} = B^{-1}(\theta_r)\left(-\dot{B}(\theta_r)\,\mathbf{v} + \ddot{\mathbf{p}}_g - (\Lambda_1 + \Lambda_2)\,\dot{\mathbf{p}}_e - \Lambda_1\Lambda_2\,\mathbf{p}_e\right) \qquad (3)$$

where $\mathbf{p}_e = \mathbf{p}_r - \mathbf{p}_g$ is the position error, and $\Lambda_1 = \lambda_1 I_{2\times 2}$ and $\Lambda_2 = \lambda_2 I_{2\times 2}$, with $\lambda_{1,2} > 0$, define the location of the closed-loop system's poles. Again, the proof of this result may be found in [1].
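As an illustration, the following is a minimal sketch of how (2) and (3) could be implemented, assuming that $B(\theta_r)$ is the standard planar rotation matrix and that position, orientation and velocity estimates are available; all variable names are illustrative.

```python
import numpy as np

def B(theta):
    """Assumed form of B(theta_r): rotation from the robot frame to the world frame."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def B_dot(theta, theta_dot):
    """Time derivative of B(theta_r) along the robot's angular motion."""
    return theta_dot * np.array([[-np.sin(theta), -np.cos(theta)],
                                 [ np.cos(theta), -np.sin(theta)]])

def point_stabilization(p_r, theta_r, p_g, A):
    """Velocity command of (2): v = B^{-1}(theta_r) A (p_r - p_g), with A Hurwitz."""
    return np.linalg.inv(B(theta_r)) @ A @ (p_r - p_g)

def point_tracking(p_r, v, theta_r, theta_dot, p_g, pd_g, pdd_g, lam1, lam2):
    """Acceleration command of (3) for a time-varying reference p_g(t).
    v is the robot-frame velocity; lam1, lam2 > 0 place the closed-loop poles."""
    p_e = p_r - p_g                       # position error in the world frame
    pd_e = B(theta_r) @ v - pd_g          # velocity error in the world frame
    return np.linalg.inv(B(theta_r)) @ (
        -B_dot(theta_r, theta_dot) @ v + pdd_g
        - (lam1 + lam2) * pd_e - lam1 * lam2 * p_e)
```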
All that is left to solve the posture stabilization and
posture tracking problems is then to find appropriate
orientation control laws for each case, which is trivial
through the normal tools for linear systems analysis.
When implementing the control laws described above,
particular concern must be given to the effects of
discretization, particularly in cases where the frequency at which these control algorithms run is too small to allow accurate finite-difference approximations of (2) and (3).
In these cases, it is also possible to design control laws for
point stabilization and tracking directly in discrete-time,
which may provide better results (in [6], this process is
described, along with discrete-time control laws for
orientation). This is important in the particular case of the
ISocRob robots, since the task-execution architecture of
these robots does not support hard real time constraints.
III. MOVING OBJECT INTERCEPTION
Consider now the set of problems where a robot must
intercept a moving object, in such a way that the interaction
between the robot and the object when contact occurs must
respect some dynamic constraints. The interception task is
then to match the position and velocity (and acceleration, if
applicable) of the target object, in the shortest possible time.
In the robotic soccer environment, such a problem of
intercepting a moving object occurs whenever a robot is
required to gain possession of the soccer ball, whether it is
freely rolling on the field of play, or being controlled by a
robot of the opposite team. Without loss of generality, this
section will hereafter address the task of intercepting a
freely rolling soccer ball.
The most common approach to ball interception in the
robotic soccer environment is through the application of
neural networks and learning algorithms [7],[8], [9]. These
techniques require a reduction in the dimensionality of the
ball interception problem, and a large number of training
episodes, some of which have to be performed in the
physical robotic system (as opposed to simulation
procedures), to achieve acceptable ball interception under
most conditions. The main advantage of these techniques is
that the overall system does not require modelling, and so
the complexity of the problem is reduced. However, the
resulting behavior of the system, after the training process,
is hard to analyze. Also, for long-term robotic projects that
are subject to frequent changes, this implies that the training
process would often have to be repeated.
Other solutions rely on techniques that predict the optimal interception point [10], which do not react well to unexpected changes in the ball's motion due to collisions.
The proposed solution to this problem is based on the
work done by Mehrandezh et al. in [11],[12], which was
applied in the context of fast-moving object interception for
industrial manipulators, and relies on a composition of
trajectory-tracking and proportional navigation techniques
to achieve near-optimal interception. This particular
implementation also makes use of obstacle avoidance to
ensure that the robot does not collide with the ball before it
is prepared to capture it.
A. Obtaining the Desired Interception Trajectory
For a successful interception of the ball, a set of
restrictions imposed by the physical dimensions of both the
robot and the ball must be verified at the moment of
interception. The trajectory $\mathbf{p}_r(t)$ described by the robot during the process of interception must then take these restrictions into account. Let $\mathbf{p}_b$ represent the position of the ball in the world frame. Given the particular configuration of Middle-Sized League soccer robots, an example of such trajectories are those in which the robot is positioned along the ball's immediate direction of motion at the moment of interception (in order to facilitate ball capture, since its linear momentum is opposed by the robot), and the velocity and acceleration of the robot at this point are such that the robot is able to maintain the ball under possession after it contacts the ball. An example of such a trajectory is depicted in Figure 1, along which three points of interest are defined as S, I and F. These delimit segments of the robot's trajectory that have distinct motion control requirements:
(S-I): The robot must intercept point I along the ball's path, which is assumed to be linear in that interval, in the shortest possible time. It is also convenient (although not strictly necessary) to impose that the robot should match the ball's velocity and acceleration at this point. The robot should then satisfy, at some instant $t_I$:
• $\|\mathbf{p}_r(t_I) - \mathbf{p}_b(t_I)\| = d_I$, where $d_I$ is the distance between the center of the ball and point I;
• $\|\dot{\mathbf{p}}_r(t_I) - \dot{\mathbf{p}}_b(t_I)\| \cong 0$;
• $\|\ddot{\mathbf{p}}_r(t_I) - \ddot{\mathbf{p}}_b(t_I)\| \cong 0$.
Since the robot is omnidirectional and the control of its orientation can be accomplished independently from its position, the problem of orientation control can be solved in a straightforward manner using linear systems analysis (refer to [6]) to keep the robot oriented towards the ball throughout its motion. Successful interception then reduces to a problem of matching a desired trajectory for position, $\mathbf{p}_I(t)$, defined by the motion of point I. From the above, this trajectory, designated as the interception trajectory, can be related to $\mathbf{p}_b(t)$ as:

$$\mathbf{p}_I(t) = \mathbf{p}_b(t) + d_I \begin{bmatrix} \cos\left(\mathrm{atan2}(\dot{y}_b(t),\, \dot{x}_b(t))\right) \\ \sin\left(\mathrm{atan2}(\dot{y}_b(t),\, \dot{x}_b(t))\right) \end{bmatrix} \qquad (4)$$

The distinction should be made clear between the trajectory described by the robot (which is a result of the type of control used to achieve interception) and the so-defined interception trajectory, which may act as a reference for trajectory tracking laws such as (3), and can be thought of as a simple "moving" reference position, I, uniquely defined by the position, velocity and acceleration of the ball. This information is assumed to be provided by other components of the mobile robot.
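For illustration, a minimal sketch of how the moving reference of (4) could be computed is given below, assuming the ball's position and velocity estimates are available as 2D arrays (names are illustrative only).

```python
import numpy as np

def interception_point(p_b, v_b, d_I):
    """Moving reference position p_I(t) of (4): the point at distance d_I from the
    ball center along the ball's instantaneous direction of motion."""
    heading = np.arctan2(v_b[1], v_b[0])   # atan2(y_dot_b, x_dot_b)
    return p_b + d_I * np.array([np.cos(heading), np.sin(heading)])
```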
Figure 1: The trajectory described by the robot during ball interception. The various types of motion control are shown: Point Tracking (PT), IPNG and constant deceleration (Braking), as well as the switching instants $t_{S1}$, $t_{S2}$ and $t_{S3}$.
Figure 2: Interception geometry through IPNG.
(I-F): The robot then approaches point F in such a way that it is able to maintain the ball under possession after the ensuing collision. If point I is sufficiently close to the ball, this can be achieved, for example, through constant negative acceleration.
Since the control of the robot is trivial during the final stage (I-F) of interception, the subsequent sections address the motion control techniques used during the first stage, (S-I).
B. Combining Trajectory Tracking and Ideal Proportional
Navigation Guidance
The task of matching and following a given moving reference position $\mathbf{p}_I(t)$ could be solved by the application of the feedback linearizing laws discussed in Section II, since it is essentially a point-tracking problem. However, these laws are sub-optimal with respect to the required time for interception, especially if the target experiences sudden changes in its motion. In these cases, the application of guidance laws proves convenient [13],[14]. These laws are typically based on the premise that a pursuer and its target are on a collision course if the relative angle between them does not change and the distance between them is decreasing. One such guidance law, Ideal Proportional Navigation Guidance (IPNG), is used here to aid the interception process. It can be shown [11] that by applying IPNG throughout most of the motion of the robot, the overall required time for interception is reduced when compared to a pure trajectory-tracking solution. However, IPNG is not capable of matching the desired interception trajectory, since its application only ensures that the robot will eventually reach point I, but has no means to slow down the robot in order to satisfy the velocity and acceleration requirements at that point. Also, the definition of guidance laws such as IPNG usually assumes that the speed of the pursuer (in this case the robot) is greater than that of the target (the ball). Although this does not prevent the system from converging towards the target if this condition is not met, common trajectory tracking methods generate better results under these conditions. Both of these approaches, IPNG and trajectory tracking, must then be combined in order to reach point I as fast as possible. The control of the robot will then alternate between these two methods: at the initial moments of its motion, and when the robot is in the vicinity of point I, control is performed through trajectory tracking; otherwise, it is performed by IPNG. In the following sections, the operation of both of these methods is described. The process of selecting the appropriate controls in order to optimize the interception time is described in Section E.
C. Ideal Proportional Navigation Guidance
Ideal Proportional Navigation Guidance (IPNG) was
introduced in [15], and also used in robotic interception in
[12]. IPNG is superior to other existing guidance laws in
terms of robustness to initial conditions and required time
for interception [13]. For omnidirectional robots, the
resulting control acts solely on position, and no restrictions
on orientation are imposed.
Consider the interception geometry presented in Figure 2. In the frame of reference defined by $\{e_r, e_\theta, e_k\}$ centered on point I, the acceleration command obtained through IPNG is, by definition:

$$\mathbf{a}_{IPNG} = N\,\dot{\mathbf{r}} \times \dot{\boldsymbol{\theta}}_{LOS} \qquad (5)$$

where $N$ is the Navigation Constant, $\mathbf{r}$ denotes the displacement between pursuer and target, and $\theta_{LOS}$ is the angle between the world frame and the reference line that unites the robot and the target, the Line-of-Sight (LOS), so that $\dot{\boldsymbol{\theta}}_{LOS} = \dot{\theta}_{LOS}\,e_k$. It can be shown that with $N > 2$ interception is always achieved [15].
As seen by equation (5), IPNG generates an acceleration
which is orthogonal to the relative velocity between the
target and pursuer, which implies that the norm of the latter
will be preserved. In LOS-referenced coordinates, this
means that there is an acceleration component along the
normal to the LOS which tries to nullify the LOS rate, ࣂ̇ ௅ைௌ,
and a component along the direction tangent to the LOS,
which acts to keep the norm of the relative velocity
constant. One important property that stems from this fact is that, as the LOS rate approaches zero, the closing velocity approaches a constant value. This also means that, in these conditions, the robot's velocity becomes constant, and the robot may then fail to make full use of the capabilities of its actuators, which would consequently increase the required time for interception. To overcome this limitation, an acceleration "boost" is added to the control signals generated by (5), so that the robot's actuators are always operating at their maximum achievable accelerations [6].
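A minimal sketch of the IPNG command (5) in the plane is given below, taking $\mathbf{r}$ as the pursuer-to-target displacement and the LOS rate about the axis normal to the plane of motion; these sign conventions, and the omission of the acceleration "boost", are assumptions of this sketch.

```python
import numpy as np

def ipng_acceleration(p_r, v_r, p_t, v_t, N=3.0):
    """IPNG command of (5): a = N * (r_dot x theta_dot_LOS), with N > 2.
    r = p_t - p_r is the pursuer-to-target displacement (assumed convention)."""
    r = p_t - p_r
    r_dot = v_t - v_r                                              # relative velocity
    los_rate = (r[0] * r_dot[1] - r[1] * r_dot[0]) / np.dot(r, r)  # d/dt atan2(r_y, r_x)
    # planar expansion of r_dot x (los_rate * e_k): orthogonal to the relative velocity
    return N * los_rate * np.array([r_dot[1], -r_dot[0]])
```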
D. Matching the Interception Trajectory through Point
Tracking
Tracking a moving reference position may be achieved
by direct application of control law (3). However, it is
important to assure that, when the robot is under the control
of the point-tracking primitives, no overshoot occurs due to
poor selection of the controller’s poles, and that the limits
of the robot's actuators are respected. Let $\mathbf{p}_e(t) = \mathbf{p}_r(t) - \mathbf{p}_g(t)$. It is shown in [12] that, for no overshoot to occur, one must have, for each component $e(t)$ of $\mathbf{p}_e(t)$:

$$\begin{cases} \lambda \geq -\dfrac{\dot{e}(0)}{e(0)} & \text{if } e(0) > 0 \\[2mm] \lambda \leq -\dfrac{\dot{e}(0)}{e(0)} & \text{if } e(0) < 0 \end{cases} \qquad (6)$$

To achieve this, consider the application of (3) with $\mathbf{p}_g = \mathbf{p}_I$ and $\lambda_1 = \lambda_2 = \lambda$:

$$\mathbf{a}_{PT} = B^{-1}(\theta_r)\left(-\dot{B}(\theta_r)\,\mathbf{v} + \ddot{\mathbf{p}}_I - 2\Lambda\,(\dot{\mathbf{p}}_r - \dot{\mathbf{p}}_I) - \Lambda^2\,(\mathbf{p}_r - \mathbf{p}_I)\right) \qquad (7)$$
It can be easily verified, through a dynamic model of the omnidirectional robot [1], whether this control law produces values outside the range of the robot's actuators. If so, a value for $\lambda$ is numerically sought in the intervals defined by (6), such that the resulting signal $\mathbf{a}_{PT}$ is achievable by the robot. In the event that it is impossible to find a proper value for $\lambda$, overshoot is inevitable. This may happen if, for instance, the control switches over from IPNG to point tracking too close to the goal point, and the robot is not able to brake sufficiently fast. These situations must be avoided by properly determining the switching instants.
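The search for an admissible pole location described above could be sketched as follows, using condition (6) as stated and a caller-supplied routine that evaluates (7) for a candidate $\lambda$; the grid search and its bounds are assumptions of this sketch.

```python
import numpy as np

def select_lambda(p_e0, pd_e0, accel_for, a_max, n_grid=50):
    """Search for a pole lambda that satisfies the no-overshoot condition (6) for every
    component of the position error and keeps the command (7), returned by
    accel_for(lam), within the actuator limit a_max. Returns None if none is found."""
    lo, hi = 1e-3, np.inf
    for e, ed in zip(p_e0, pd_e0):
        if e > 0:
            lo = max(lo, -ed / e)      # lambda >= -e_dot(0)/e(0) when e(0) > 0
        elif e < 0:
            hi = min(hi, -ed / e)      # lambda <= -e_dot(0)/e(0) when e(0) < 0
    if hi < lo:
        return None                    # condition (6) cannot be met: overshoot is inevitable
    hi = min(hi, 10.0 * max(lo, 1.0))  # cap an unbounded interval for the grid search
    for lam in np.linspace(lo, hi, n_grid):
        if np.linalg.norm(accel_for(lam)) <= a_max:
            return lam
    return None                        # no feasible lambda within actuator limits
```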
E. Selection of the Proper Control Signal
Having described the application of IPNG and point
tracking to the ball interception problem, it is important to
analyze how these techniques can be used together in the
most efficient manner, i.e., in order to minimize the total
required time for interception. The total time is taken as the sum of the time during which the robot is under the control of IPNG, $t_{IPNG}$, point tracking, $t_{PT}$, and braking, $t_B$:

$$t_{total} = t_{IPNG} + t_{PT} + t_B \qquad (8)$$
As it was seen previously, the main premise behind the
usage of IPNG is that it is faster in normal conditions than
the point-tracking primitives, and should therefore be used
as long as possible in the (S-I) segment of the interception
process. As with most proportional navigation laws,
however, IPNG is not efficient when the speed of the robot
is lower than that of the ball, and so control should also be
assigned to point-tracking in this case. Once the robot
reaches point I, by verifying the conditions presented in
Section A, the robot then brakes until it comes to a full stop,
with constant negative acceleration. An example of the
different types of motion control applied during interception
is shown in Figure 1, where the instants where the control is
switched are denoted as ‫ݐ‬ௌଵ, ‫ݐ‬ௌଶ and ‫ݐ‬ௌଷ. Of these, ‫ݐ‬ௌଶ has
the greatest effect on the total interception time, since ‫ݐ‬ௌଵ
and ‫ݐ‬ௌଷ cannot be freely selected. A minimum of ‫ݐ‬௧௢௧௔௟ is
then achieved by selecting ‫ݐ‬ௌଶ as late as possible, without
inducing the point tracking controller into an overshoot
situation, since overshoot would necessarily increase ‫ݐ‬௉் ,
and consequently ‫ݐ‬௧௢௧௔௟ . In [12], the optimal switching
instant for ‫ݐ‬ௌଶ was determined by estimating the required
time to interception, which requires finding the solutions to
the differential equations that define the closed-loop system
5
under the application of the point-tracking
tracking control law in
Figure 3:: Representation of the dribbling process
for an omnidirectional robot.
real-time. In the current approach,
oach, it was instead opted to
directly check for the existence of overshoot,
overshoot since this
already follows from the selection of the point tracking
controller’s poles. To this end, the conditions defined by (6)
are tested for the position error ࢖௘ at each iteration.
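As an illustration, the selection logic described in this section could be sketched as follows; the Boolean inputs are assumed to be computed elsewhere, e.g. the overshoot feasibility from condition (6) and the arrival test from the conditions of Section A.

```python
def select_control_mode(speed_robot, speed_ball, overshoot_feasible, at_point_I):
    """Controller selection for the (S-I) segment: keep IPNG for as long as a later
    switch to point tracking (PT) can still avoid overshoot; fall back to PT when
    the robot is slower than the ball; brake once point I has been reached."""
    if at_point_I:
        return "BRAKE"        # constant negative acceleration until a full stop
    if speed_robot < speed_ball:
        return "PT"           # IPNG is inefficient in this regime
    return "IPNG" if overshoot_feasible else "PT"
```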
IV. OBJECT TRANSPORT
In some situations, a mobile robot may be required to
displace a moveable object in its environment to a given
location. The robot achieves this by pushing the object while it is in contact with some part of its chassis. In robotic soccer, an example of such a task is that of dribbling the ball across the field of play. The methods through which this dribbling process may be performed are dependent on the mobile robot's type of locomotion. For an omnidirectional robotic soccer player, the desired behavior is such that the robot is able to turn "around" the ball whilst the ball is moving, as depicted in Figure 3. In this way, the robot is able to maintain the ball under possession regardless of the required motion (assuming that the velocity of the ball is under a certain limit).
Many approaches to dribbling in robotic soccer are based on reinforcement learning and neural network techniques (e.g. [16] and references therein). These approaches are difficult to adapt to changes in the physical dimensions of the robot or its environment that are relevant to the dribbling process.
A more analytical approach to dribbling for holonomic robots was taken in [17], but it requires the predetermination of a path that does not infringe the physical restrictions of the dribbling process, which is usually undesirable in fast-changing environments such as robotic soccer.
The proposed solution utilizes the concept of Interface-Control, introduced in [18], which was also applied to the transport of free-flying objects in [19]. An "object controller" continuously provides the external force that should be applied to the ball for it to achieve a given reference. No assumptions are made on what type of reference, be it a trajectory or a position, is required in this step; only that the object controller, when applied to a dynamic model of the object, is able to provide an appropriate force for each case. The forces that this controller outputs then serve as a reference for the robot, which determines, through its own controller, the appropriate inputs for its actuators in order to apply the reference force to the ball, thereby displacing it. This means that the object controller acts only indirectly upon the dynamic model of the object, and the desired forces that it provides serve as an "interface" to the robot's controller. To this end, the robot's controller must be capable of following independent force and position references. A control technique particularly suited to these specifications is hybrid force/position control [20],[1], which makes use of the physical constraints placed upon the robot to reduce the dimension of the robot's state, simplifying the control problem.
A. Object Controller – PD Control
The highly dynamic characteristics of robotic soccer
usually imply that global motion planning, such as the
predetermination of desired trajectories or paths for the
motion of the ball, is inefficient. As such, the most common
solution is to require the ball to be driven to a specific
position of the field, without necessarily having to stabilize
the ball around that reference. The stabilization problem,
however, is straightforward, and may be easily extended to
the normal requirements of a robotic soccer match.
Consider the following dynamic model of the ball, in which the state, given by the position of the ball, $\mathbf{p}_b = [x_B \;\; y_B]^T$, and its velocity, $\mathbf{v}_b = [v_{Bx} \;\; v_{By}]^T$, is assumed to be fully accessible:

$$\begin{bmatrix} \dot{x}_B \\ \dot{y}_B \\ \dot{v}_{Bx} \\ \dot{v}_{By} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_B \\ y_B \\ v_{Bx} \\ v_{By} \end{bmatrix} + \frac{1}{m_B}\begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}\left( \begin{bmatrix} f_x \\ f_y \end{bmatrix} + \begin{bmatrix} f_{rx} \\ f_{ry} \end{bmatrix} \right)$$

$$\mathbf{p}_b = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_B \\ y_B \\ v_{Bx} \\ v_{By} \end{bmatrix} \qquad (9)$$
where $m_B$ is the mass of the ball, $[f_x \;\; f_y]^T$ is the applied external force (the input to the system) and $[f_{rx} \;\; f_{ry}]^T$ is the friction force between the ball and the field of play. It can be readily shown that the system is controllable (although that is an intuitive result). The problem is then to obtain a control law of the form:

$$\mathbf{f} = -K\mathbf{x} - \mathbf{f}_r \qquad (10)$$
Figure 4: Geometric details of the dribbling process.
where $K$ is a gain matrix such that the resulting closed-loop system has stable poles. By choosing $K$ as:

$$K = \begin{bmatrix} k_p m_b & 0 & k_d m_b & 0 \\ 0 & k_p m_b & 0 & k_d m_b \end{bmatrix} \qquad (11)$$

the closed-loop system can be decoupled into distinct position components, both of which are under PD control [6]. A well-known result is that, by selecting $k_d = 2\sqrt{k_p}$, the system becomes critically damped, with poles at $s = -\sqrt{k_p} = -\frac{k_d}{2}$. These poles must be slow enough for the robot to be able to accompany the required motion of the ball: at higher speeds, the ability of the robot to exert forces on the ball is reduced due to the saturation limits of the robot's actuators.
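A minimal sketch of the object controller of (10)-(11) is given below, written for stabilization of the ball around a reference position p_ref and with friction compensation; argument names are illustrative only.

```python
import numpy as np

def object_controller(p_b, v_b, p_ref, m_b, f_r, k_p):
    """PD object controller of (10)-(11): external force that drives the ball towards
    p_ref, with critically damped gains (k_d = 2*sqrt(k_p)) and friction f_r cancelled."""
    k_d = 2.0 * np.sqrt(k_p)                       # double pole at s = -sqrt(k_p)
    e_p = p_b - p_ref                              # ball position error
    return -m_b * (k_p * e_p + k_d * v_b) - f_r    # f = -K x - f_r, with K as in (11)
```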
B. Robot Controller – Hybrid Position/Force Control

Given an instantaneous required force that must be applied to the ball, supplied by the object controller, and assuming that the robot has the ball under its possession, the objective of the robot controller is to exert that force on the ball while satisfying a set of physical restrictions that allow the robot to dribble the ball continuously. Consider a situation where an omnidirectional soccer robot is dribbling the ball, as depicted in Figure 3. Let the posture of the robot, in a frame centered on the ball, be represented by $\mathbf{q} = [x \;\; y \;\; \theta_r]^T$. In this case, the robot is subject to the following restrictions:

$$x^2 + y^2 = R_f^2 \qquad (12)$$
$$\mathrm{atan2}(-y, -x) = \theta_r \qquad (13)$$

where $R_f$ is the nominal distance between the ball and the robot when they are in contact, and $\theta_f$ is the angle of the required force vector in a reference frame centered on the ball. Since the robot is only able to interact with the ball through a pushing motion (i.e. it can only apply forces to the ball along the x-axis of the robot's frame), the robot must align itself with the required force vector at all times. However, in order to do so, the only admissible displacements made by the robot are those that satisfy these constraints continuously. As seen in [21], a reference frame may be defined by the directions that are normal to each of the constraint surfaces defined by (12) and (13), and an auxiliary vector that is orthogonal to both of them. In other words, an orthonormal basis for this frame, denoted the constraint frame, is given by the row vectors of the $(3 \times 3)$ matrix $E$:

$$E = \begin{bmatrix} \alpha\beta & 0 & 0 \\ 0 & \alpha & 0 \\ 0 & 0 & \beta \end{bmatrix}^{-1}\begin{bmatrix} -y & x & 1 \\ x & y & 0 \\ \dfrac{y}{R_f^2} & -\dfrac{x}{R_f^2} & 1 \end{bmatrix} = \begin{bmatrix} E_{P\,(1\times 3)} \\ E_{F\,(2\times 3)} \end{bmatrix} \qquad (14)$$

where $\alpha$ and $\beta$ are chosen so that each row of $E$ has unitary norm. Matrix $E$ transforms vectors from a frame centered on the ball to the constraint frame. The last two rows of $E$ are normal to the constraint surfaces, and so represent the directions along which the robot is unable to move:

$$E_F\,\dot{\mathbf{q}} = 0 \qquad (15)$$

Likewise, the first row of $E$ defines a direction that is tangent to the constraint surfaces, along which motion is admissible:

$$E_P\,\dot{\mathbf{q}} = \frac{1}{\alpha\beta}\dot{\theta}_r \qquad (16)$$

This implies that any motion made by the robot which is consistent with its constraints can be specified solely in terms of its orientation $\theta_r$, which is intuitive since the distance to the ball may not change while dribbling. It can then be easily shown [6] that an acceleration command that allows the position of the robot around the ball to be controlled in a constraint-compliant manner is given by:

$$\mathbf{a}_P = R^{-1}\left( E^{-1}\begin{bmatrix} \frac{1}{\alpha\beta}\ddot{\theta}_r \\ -\frac{\dot{x}^2 + \dot{y}^2}{\alpha} \\ 0 \end{bmatrix} - \dot{R}\,(\dot{\mathbf{q}}_r^R - \dot{\mathbf{q}}_b^R) \right) + \ddot{\mathbf{q}}_b^R \qquad (17)$$
where $\dot{\mathbf{q}}_r^R$, $\dot{\mathbf{q}}_b^R$ and $\ddot{\mathbf{q}}_b^R$ are, respectively, the velocity of the robot, the velocity of the ball and the acceleration of the ball, all expressed in the robot's frame, and $R$ is a rotation matrix that transforms vectors from the robot frame to the world frame. The acceleration of the ball in the robot's frame, $\ddot{\mathbf{q}}_b^R$, can be readily obtained through an estimate of the friction acting on the ball (i.e. $\mathbf{f}_r$).
The problem of aligning the robot with the desired force vector (i.e. so that $\theta_r = \theta_f$) then reduces to that of controlling $\theta_r$ through $\ddot{\theta}_r$, which is straightforward through the normal tools of linear systems analysis.
In a similar manner, an acceleration control that allows
the robot to apply specified forces directed towards the
center of the ball is given by:
$$\mathbf{a}_F = \frac{1}{m_b} R^{-1} E_F^T \begin{bmatrix} -F \\ 0 \end{bmatrix} \qquad (18)$$
where $F$ is the magnitude of the required force vector $\mathbf{f}$. The resulting force that the robot exerts on the ball through the application of $\mathbf{a}_F$ is then independent of the robot's orientation. In order for the robot to effectively apply the required force $\mathbf{f}$, as returned by the object controller, to the ball, the robot must already be aligned with $\theta_f$ prior to the application of $\mathbf{a}_F$. This could have unexpected results in the initial instants of the robot's motion, and so $\mathbf{a}_F$ is redefined as:

$$\mathbf{a}'_F = \begin{cases} \dfrac{1}{m_b} R^{-1} E_F^T \begin{bmatrix} -F \\ 0 \end{bmatrix} & \text{if } |\theta_r - \theta_f| \leq T_f \\[2mm] 0 & \text{otherwise} \end{cases} \qquad (19)$$
where $T_f > 0$ is an appropriately selected threshold value.
While it was assumed that constraints (12) and (13) remain valid as long as the robot does not perform any movements that are inconsistent with them, in reality some amount of error may be introduced into the system, for example due to modelling errors. To account for this fact, a recovery term $\mathbf{a}_S$ is added, of the form:

$$\mathbf{a}_S = R^{-1} E^{-1}\begin{bmatrix} 0 \\ a_{rS} \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \alpha_{\theta S} \end{bmatrix} \qquad (20)$$
The restrictions are then addressed individually: in order to keep restriction (12) valid, the term $R^{-1}E^{-1}[0 \;\; a_{rS} \;\; 0]^T$ is included, which drives the robot towards the ball; to satisfy (13), a simple torque compensation $[0 \;\; 0 \;\; \alpha_{\theta S}]^T$ is included, which keeps the robot turned towards the ball. The values for $a_{rS}$ and $\alpha_{\theta S}$ are obtained through any applicable control laws (these are yet another instance of the double-integrator problem). The resulting control is then:

$$\mathbf{a}_C = \mathbf{a}_P + \mathbf{a}_F + \mathbf{a}_S \qquad (21)$$
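A minimal sketch of the force-directed component (18)-(19) and of the composition (21) follows, assuming that $E_F$, $R$ and the remaining acceleration terms are computed elsewhere from (14), (17) and (20); the angle wrapping is an implementation detail not specified in the text.

```python
import numpy as np

def force_accel(E_F, R, F, m_b, theta_r, theta_f, T_f):
    """Gated force acceleration a'_F of (19): apply the commanded force of magnitude F
    towards the ball only when the robot is aligned with theta_f within T_f."""
    err = np.arctan2(np.sin(theta_r - theta_f), np.cos(theta_r - theta_f))  # wrapped angle error
    if abs(err) > T_f:
        return np.zeros(3)
    return (1.0 / m_b) * np.linalg.inv(R) @ E_F.T @ np.array([-F, 0.0])     # (18)

def dribbling_command(a_P, a_F, a_S):
    """Total acceleration command of (21)."""
    return a_P + a_F + a_S
```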
The main limitation to the application of this method comes from the nonlinearities introduced into the system by the robot's actuators, so that some values of $\mathbf{a}_C$ may be outside of their capacities, particularly at higher speeds. From (17) it can be seen that, even if the robot is not required to move around the ball (i.e. $\dot{\theta}_r = 0$), nor apply any forces to it, there is a non-null acceleration involved in keeping the dribbling restrictions valid. At any moment in which the robot's actuators are unable to account for this acceleration alone, those restrictions will be infringed, and the ball may be lost. To account for this fact, as much of the available control effort of the robot's actuators (in the sense that the actuators have a limited capacity for acceleration) as necessary must be assigned to these constraint-maintaining accelerations. The accelerations that the robot must perform in order to apply the required force $\mathbf{f}$ to the ball may then be scaled down according to the remaining control effort, resulting in a slower response, but keeping the dribbling restrictions valid nonetheless. Also note that the actuators of the robot are limited not only in acceleration but also in speed. This implies that the velocity of the ball must be kept under a certain limit (which is determined experimentally) for the robot to be able to dribble efficiently.
V. OBSTACLE AVOIDANCE

For any of the previously presented motion control solutions to be applicable in the robotic soccer environment, some form of obstacle avoidance is necessary for each of them.
In the proposed solution, at each step, an angular deviation is applied to the current (linear) velocity of the robot, so that the direction followed by the robot is tangent to the boundaries of any obstacle that is currently blocking the robot's path. This is similar to the TangentBug algorithm [22], and to the implementation described in [23].
By taking this approach, obstacle avoidance may be achieved in any of the tasks considered above, and the respective control inputs to the robot are modified only in what is strictly necessary to achieve safety. Thus, the interference caused to the different control strategies used throughout navigation is kept to a minimum.
This obstacle avoidance algorithm is also completely reactive, and although it is not complete (in the sense that there may be some configurations of obstacles that prevent the robot from reaching its destination), such occurrences are unlikely in an environment sparsely populated with obstacles, as is the case in robotic soccer. In these environments, the paths taken by the robot approach the globally shortest paths around the obstacles.

Figure 5: Relevant geometric details for obstacle avoidance.

A. Avoiding Static Obstacles

Consider the situation depicted in Figure 5. The robot heads towards its goal $T$ with velocity $\mathbf{v}$, which is assumed constant between control cycles. An obstacle is detected at distance $d_{Obs}$, and at a certain angle $\alpha_{Obs}$ relative to the robot's velocity vector. In the particular case of the ISocRob robots, these obstacles are optically detected, and no information is given about their shape. A common solution in these situations is to model these obstacles as circles in configuration space, the radius of which must be large enough to accommodate the dimensions of the robot and the expected dimensions of the obstacle.
Let $d_{Safe}$ represent the radius of each obstacle in configuration space. In these conditions, the minimum distance $d_{min}$ between a line aligned with the velocity vector and the centre of the obstacle (i.e. the minimum distance that the robot would observe to that obstacle if it continued to move along the current direction) is given by:

$$d_{min} = d_{Obs}\sin(\alpha_{Obs}) \qquad (22)$$
Consequently, if $d_{min} > d_{Safe}$, and $d_{Obs}$ is smaller than the distance to the robot's goal, the robot will not collide with the obstacle if its velocity remains constant. This constitutes a safety condition that must be verified for each detected obstacle. The angle $\beta$ that satisfies $d_{min} \geq d_{Safe}$ is then:

$$\beta = \begin{cases} \operatorname{asin}\left(\dfrac{d_{Safe}}{d_{Obs}}\right) & \text{if } d_{Safe} \leq d_{Obs} \\[2mm] \dfrac{\pi}{2} & \text{otherwise} \end{cases} \qquad (23)$$
Figure 7: Avoiding a moving obstacle.
Figure 6: Endpoint calculation for an obstacle cluster. The two possible detour angles for each obstacle $O_i$ are shown as $\gamma_{i1,2}$. Note that for every obstacle except $O_1$ there is only one valid solution, which is either a positive (red) or negative (blue) detour.
The definition $\beta = \frac{\pi}{2}$ applies whenever the robot is already inside the radius of the obstacle in configuration space, which may happen if the dimensions of the obstacle are overestimated. By following this direction, the robot will eventually reach the boundary of the obstacle in configuration space.
Therefore, whenever a collision is bound to happen, the robot must apply a certain detour angle $\gamma$ to its velocity so that the safety condition is satisfied. The minimum detour angle is such that:

$$|\alpha_{Obs} + \gamma| = \beta \qquad (24)$$
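A minimal sketch of the static-obstacle test (22)-(24) is shown below; the exact treatment of the goal distance in the safety condition is an assumption of this sketch.

```python
import numpy as np

def static_detour(d_obs, alpha_obs, d_safe, d_goal):
    """Safety check and detour angles of (22)-(24). Returns None when the current
    direction of motion is already safe, otherwise the two candidate detour angles
    gamma (one positive, one negative) that satisfy |alpha_obs + gamma| = beta."""
    d_min = d_obs * np.sin(abs(alpha_obs))                # (22): lateral miss distance
    if d_min > d_safe or d_obs >= d_goal:                 # safety condition (assumed form)
        return None
    beta = np.arcsin(d_safe / d_obs) if d_safe <= d_obs else np.pi / 2.0   # (23)
    return (beta - alpha_obs, -beta - alpha_obs)          # gamma solutions from (24)
```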
Let $P_{1,k}$ and $P_{2,k}$ represent the points at which the directions taken through the application of the aforementioned detour angles are tangent to obstacle $k$, designated the endpoints of that obstacle; their determination is trivial from the values of $\gamma$ for each obstacle. Also, let $L_{sp}(P, T)$ represent the expected distance from that point to the goal point. In a single-obstacle case, selecting the direction of motion that leads the robot to the endpoint of that obstacle with the shortest expected distance to the target is equivalent to finding the minimal value of $\gamma$. The path taken by the robot under these circumstances would be equivalent to the globally shortest path. In situations with multiple obstacles, however, this may not be the case, and the robot may even be led into oscillatory motion by some configurations of obstacles that present multiple equivalent values of $\gamma$ and $L_{sp}(P, T)$. To account for these situations, the value of $L_{sp}(P, T)$ is weighted with the detour angle $\gamma$. For every obstacle $O_k$, the cost of one of its endpoints $P_{i,k}$ is then defined as:

$$C(P_{i,k}) = c_1 \frac{L_{sp}(P_{i,k}, T)}{\max\{L_{sp}(P_{1,k}, T),\, L_{sp}(P_{2,k}, T)\}} + c_2 \frac{|\gamma_i|}{\pi} \qquad (25)$$

with $c_1, c_2 > 0$ and $c_1 + c_2 = 1$. $C(P_{i,k})$ returns values in the $[0, 1]$ interval, and represents the robot's "preference" for the paths that demand less turning effort.
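For illustration, the endpoint cost (25) could be computed as below, with the expected path lengths and detour angles obtained as described above.

```python
import math

def endpoint_cost(L_i, L_1, L_2, gamma_i, c1=0.5, c2=0.5):
    """Endpoint cost of (25): convex combination (c1 + c2 = 1) of the normalized expected
    path length to the target and the normalized detour angle of endpoint P_{i,k}."""
    return c1 * L_i / max(L_1, L_2) + c2 * abs(gamma_i) / math.pi
```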
When multiple detected obstacles exist in the environment, situations may arise where obstacles are close enough that they are effectively "merged" in configuration space. This occurs whenever two obstacles are separated by a distance smaller than $2 d_{Safe}$. These obstacles are, however, still considered as two separate obstacles by the robot. The general procedure to detect the endpoints of an obstacle or a cluster of obstacles is represented in Figure 6. The following steps are performed in order to determine the locally optimal direction of movement:
1) Identify the original obstacle that violates the safety condition for the current velocity;
2) Calculate the two possible detour angles for that obstacle. This always results in one positive solution and one negative solution. These solutions are registered in the set $\Gamma = [\gamma_1 \;\; \gamma_2]$. The respective endpoints are calculated.
3) For the endpoint that minimizes $C(P_{i,k})$, the safety condition is re-checked:
• If the solution is safe, update the robot's velocity and return;
• Otherwise, the robot is in the presence of an obstacle cluster. Select the individual obstacle that currently violates the safety condition and continue to step 4;
4) Using equation (24) once again, two solutions are generated. One of these solutions is invalid, since it would necessarily violate the safety condition for the previously selected obstacle. Update $\Gamma$ by overwriting the element with the same algebraic sign as the detour angle of the valid solution. Update the respective endpoint and re-evaluate the minimal cost solution.
• If the optimal endpoint implies $|\gamma_i| > \pi$, the goal is unreachable. Return.
• Otherwise, return to step 3.
B. Avoiding Moving Obstacles
All considerations for the static obstacles case were
made taking into account that the instantaneous velocity of
the robot needed to be altered by a certain detour angle in
order to avoid collisions. These concepts are easily
adaptable for an environment with moving obstacles, by
addressing instead the relative velocity between these
obstacles and the robot. If the obstacles possess a certain
velocity $\mathbf{v}_O = [v_{Ox} \;\; v_{Oy}]^T$, it is necessary to obtain the detour angle that must be applied to the relative velocity, so that the robot avoids any incoming obstacles in its frame. In these conditions, let $\xi$ represent the angle of the relative velocity vector, in a situation where a collision with a moving obstacle is imminent, and $\alpha_R$ the angle of the robot's velocity. The problem is then to find an angle $\alpha'_R$ for the velocity of the robot such that the resulting relative velocity (with angle $\xi'$) is safe according to the conditions presented in Section A. It can be shown [6] that this angle is given by one of two possible solutions:

$$\alpha'_R = \pm \operatorname{asin}\left(\frac{-\tan(\xi')\,v_{Ox} + v_{Oy}}{V_R\sqrt{1 + \tan^2(\xi')}}\right) + \xi' \qquad (26)$$

where $V_R$ is the speed of the robot. The validity of each solution can be readily checked by testing the safety of the resulting relative velocity vector. Note that in the moving obstacles case, the expected path length from the moving obstacle's endpoints to the robot's goal is unpredictable, and may lead to wrong results, so the optimality criterion becomes the necessary deviation from the original direction at each control cycle. The extension of this method to multiple moving obstacles is then trivial, since it follows the same concepts as in a situation with static obstacles, minimizing the required detour angle instead of the cost $C(P_{i,k})$ of the obstacles' endpoints.
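A minimal sketch of (26) follows, returning both candidate headings; the valid one is selected by re-checking the safety of the resulting relative velocity, as stated above.

```python
import numpy as np

def robot_heading_candidates(xi_new, v_O, V_R):
    """Candidate robot velocity angles alpha'_R of (26) that make the robot-obstacle
    relative velocity point along the safe angle xi_new, for an obstacle with velocity
    v_O = (v_Ox, v_Oy) and a robot moving with speed V_R."""
    s = (-np.tan(xi_new) * v_O[0] + v_O[1]) / (V_R * np.sqrt(1.0 + np.tan(xi_new) ** 2))
    s = np.clip(s, -1.0, 1.0)                     # guard against numerical round-off
    return (np.arcsin(s) + xi_new, -np.arcsin(s) + xi_new)
```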
VI. RESULTS
Experiments were performed for each of the main tasks considered in this work. These include the tasks of posture stabilization and tracking, moving object interception and object transport, while considering the problem of obstacle avoidance in each case. Unfortunately, due to hardware limitations, it was only possible to test posture stabilization and static obstacle avoidance on the real ISocRob robots. The remaining tasks were tested in the Webots simulation environment, using a realistic model of these same robots. Here, the most important results are presented (refer to [6] for details).
For the specific task of stabilizing the robot around a reference posture while avoiding obstacles, the robot was placed inside a U-shaped formation of obstacles, which constitutes a typical "local minimum" situation.
Figure 9: Intercepting a freely rolling ball that changes
direction upon colliding with an obstacle.
The robot was set, initially stopped, at an initial posture with $x_0 = -2.5$ m and $y_0 = 0$ m in the world frame, and required to stabilize itself around the origin of the same frame. The results, represented in Figure 8, show that, in both the real and simulated experiments, the robot is able to successfully detect the endpoints of this cluster of obstacles, according to the algorithm described in Section V, and is thus able to escape the local minimum situation and stabilize itself around its goal (disregarding self-localization errors).
To test the moving object interception algorithm, the robot was tasked with intercepting a freely rolling ball, which eventually collides with an obstacle (already during the interception process). The initial velocity of the ball was such that $v_{bx} = v_{by} = 0.7$ m/s in the world frame, and its acceleration, due to friction, was $a_{Bx} = a_{By} = -0.025$ m/s² in that frame. The robot was initially stopped. The interception process, depicted in Figure 9, shows that the robot is able to achieve faster interception when using the combined IPNG and trajectory tracking approach (the required time for interception was $t_{total} = 6.87$ s in this case) than when using a pure trajectory-tracking solution (with a required time of $t_{total} = 10.3$ s), and that it deals efficiently with unexpected variations in the trajectory of the target.
Finally, to test the task of transporting a soccer ball across the field of play while avoiding obstacles, the robot was initially given possession of the ball, which was at a position with $x_0 = -4$ m, $y_0 = 0$ m with respect to the world frame, and was required to reach the origin (positions $\mathbf{p}_i$ and $\mathbf{p}_f$ in Figure 10).
Figure 8: Escaping a typical local minimum situation. The black filled circles represent the static obstacles in the environment, and the green circles around them represent the safety distance that the robot must keep.
Figure 10: Transporting the ball while avoiding obstacles.
The robot had an initial orientation of $\theta_0 = \pi$. In the robot's environment, a formation of obstacles was placed, such that the robot is sufficiently small to pass through the formation by itself (the safety distance in this case is represented in Figure 10 as a green circle around each obstacle), but must drive around the formation when handling the ball, since the safety distance in that case is greater (represented by a red circle around each obstacle).
The results presented in Figure 10 for this situation show
that the robot is able to avoid the formation of obstacles by
resorting to the proposed obstacle avoidance solution
(which is in this case applied to the controls provided by the
object controller), and drive the ball to its intended position.
VII. CONCLUSIONS
The control algorithms that were presented in this work
address the most common motion control problems for
holonomic robots, namely the problems of posture
stabilization, posture tracking, moving object interception
and transport of moveable objects. The problem of obstacle
avoidance was also considered.
For the problems of posture stabilization and tracking,
feedback linearization was used to obtain control laws that
may be readily analyzed with respect to their performance.
A solution to the problem of moving object interception
by a holonomic mobile robot was presented, based on work
previously developed for robotic manipulators, which
combines IPNG with trajectory tracking. Tests were
performed in the Webots simulation environment on models
of the ISocRob robots, where this technique was shown to
allow the successful interception of a freely rolling ball,
regardless of the initial conditions of the problem. This
solution was also shown to be robust to unexpected
variations in the trajectory of the object.
For the problem of transporting an object through the
pushing action of a mobile robot, a solution based on an
Interface-Control scheme was applied that relies on the
application of a PD controller to provide the required
accelerations of the object in question, and uses these
accelerations as a reference for a Hybrid Position/Force
controller acting upon the robot. Through simulation, this
was verified to allow a soccer robot to dribble a ball to a
specified position in its environment.
An obstacle avoidance algorithm was presented, which
deviates the desired control inputs of the robot so that it
passes tangent, in the configuration space, to each detected
obstacle, which are modelled as circles. The solution was
tested both in the real ISocRob robots and in the Webots
environment, where it proved successful. The paths
described by the robot while using this method approach the
globally shortest paths for the most common configurations
of obstacles that are encountered during a robotic soccer
match.
REFERENCES
[1] C. C. de Wit, B. Siciliano, and G. Bastin, Theory of Robot Control.
Springer, 1996.
[2] R. Siegwart and I. R. Nourbakhsh, Introduction to Autonomous
Mobile Robots. MIT press, 2004.
[3] P. G. Plöger, G. Indiveri, and J. Paulus, "Motion Control of Swedish
Wheeled Mobile Robots in the Presence of Actuator Saturation," in
RoboCup 2006 Symposium, Proceedings, Bremen, Germany, 2006.
[4] C. F. Marques and P. U. Lima, "Multi-sensor Navigation for Soccer
Robots," in Proceedings of the RoboCup 2001 Symposium, Seattle,
USA, 2001.
[5] B. D. Damas, P. U. Lima, and L. M. Custódio, "A Modified Potential
Fields Method for Robot Navigation Applied to Dribbling in Robotic
Soccer," Lecture Notes in Computer Science, pp. 65-77, 2003.
[6] J. Messias, "Task Specific Motion Control of Omnidirectional
Robots," MsC Thesis, IST, Lisbon, 2008.
[7] P. Stone and M. Veloso, " A layered approach to learning client
behaviors in the robocup soccer server," Applied Artificial
Intelligence, vol. 12, pp. 165-188, 1998.
[8] H. Müller, et al., "Making a Robot Learn to Play Soccer Using
Reward and Punishment," Lecture Notes in Computer Science, vol.
4667/2007, pp. 220-234, 2007.
[9] T. Gabel and M. Riedmiller, "Learning a Partial Behavior for a
Competitive Robotic Soccer Agent," KI- Künstliche Intelligenz, vol.
20, no. 2, pp. 18-23, May 2006.
[10] F. Stolzenburg, O. Obst, and J. Murray, "Qualitative Velocity and
Ball Interception," in KI 2002: Advances in Artificial Intelligence,
Twentyfifth Annual German Conference, Aachen, Germany, 2002, p.
283–298.
[11] J. M. Borg, M. Mehrandezh, R. G. Fenton, and B. Benhabib, "An
Ideal Proportional Navigation Guidance System for Moving Object
Interception - Robotic Experiments," in IEEE International
Conference on Systems, Man, and Cybernetics, 2000, pp. 3247-3252.
[12] M. Mehrandezh, M. N. Sela, R. G. Fenton, and B. Benhabib,
"Robotic interception of moving objects using ideal proportional
navigation guidance technique," Robotics and Autonomous Systems,
vol. 28, pp. 295-310, 1999.
[13] C.-D. Yang and C.-C. Yang, "A unified approach to proportional
navigation," IEEE Transactions on Aerospace and Electronic
Systems, vol. 33, no. 2, pp. 557-567, Apr. 1997.
[14] U. S. Shukla and P. R. Mahapatra, "The proportional navigation
dilemma-pure or true?," IEEE Transactions on Aerospace and
Electronic Systems, vol. 26, no. 2, pp. 382-392, Mar. 1990.
[15] P.-J. Yuan and J.-S. Chern, "Ideal proportional navigation," NASA
STI/Recon Technical Report A, vol. 95, pp. 501-512, 1993.
[16] M. Riedmiller and A. Merke, "Using Machine Learning Techniques
in Complex Multi-Agent Domains," in Adaptivity and Learning.
Springer, 2003.
[17] X. Li, M. Wang, and A. Zell, "Dribbling Control of Omnidirectional
Soccer Robots," in Proc. of the 2007 IEEE International Conference
on Robotics and Automation, Rome, Italy, 2007, pp. 2623-2628.
[18] Y. Nakamura, K. Nagai, and T. Yoshikawa, "Mechanics of
coordinative manipulation by multiple robotic mechanisms," in
Proceedings of the 1987 IEEE International Conference on Robotics
and Automation, 1987, pp. 991-998.
[19] W. C. Dickson and R. H. Cannon, Jr., "Experimental results of two
free-flying robots capturing and manipulating a free-flying object," in
Proceedings of the 1995 IEEE/RSJ International Conference on
Intelligent Robots and Systems. 'Human Robot Interaction and
Cooperative Robots' , Pittsburgh, PA, USA, 1995, pp. 51-58.
[20] T. Yoshikawa, "Dynamic hybrid position/force control of robot
manipulators--Description of hand constraints and calculation of joint
driving force," IEEE Journal of Robotics and Automation, vol. 3, no.
5, pp. 386-392, Oct. 1987.
[21] T. Yoshikawa, T. Sugie, and M. Tanaka, "Dynamic Hybrid
Position/Force Control of Robot Manipulators-Controller Design and
Experiment," in Proceedings of the 1987 IEEE International
Conference on Robotics and Automation, 1987, pp. 2005-2010.
[22] I. Kamon, E. Rivlin, and E. Rimon, "A new range-sensor based
globally convergent navigation algorithm for mobile robots," in
Proceedings of the 1996 IEEE International Conference on Robotics
and Automation, Minneapolis, Minnesota, USA, 1996, pp. 429-435.
[23] M. Bowling and M. Veloso, "Motion control in dynamic multi-robot
environments," in Proceedings of the 1999 IEEE International
Symposium on Computational Intelligence in Robotics and
Automation, Monterey, CA, USA, 1999, pp. 168-173.