Design and Implementation of a Leader-Follower Controller for a
Wheel-Legged Robot
Luc Xuan Tu Phung
Master of Engineering
Department of Mechanical Engineering
McGill University
Montreal, Quebec
August 2014
A thesis submitted to McGill University in partial fulfillment of the requirements of the
degree of Master of Engineering
© Luc Xuan Tu Phung, 2014
ACKNOWLEDGEMENTS
First of all, I would like to express my deepest gratitude to my supervisor, Dr. Inna
Sharf, for her patience, invaluable advice and guidance throughout my studies, as well as for
her financial support. The work presented in this thesis would not have been possible without
her help. I would also like to thank my labmates: Korhan Türker for introducing me to the
Micro-Hydraulic Toolkit project, and Christopher Wong for welcoming me to Medicine Hat
and for teaching me how to operate the MHT. I am also thankful to Bin Li for his assistance
in the completion of the project. Big thanks to Blake Beckman, Ron Anderson, Jared
Giesbrecht and Blaine Fairbrother from DRDC – Suffield Research Centre for their advice,
technical assistance and for providing me with the equipment needed to complete my
research. Special thanks to Jason O’Reilly from Amtech Aeronautical Inc. for his tremendous
help in the course of this work and for being a great friend during my stay in Medicine Hat. I
would also like to acknowledge DRDC, McGill University, Hydro-Quebec and the Natural
Sciences and Engineering Research Council of Canada for their financial aid. Last, but far
from least, I would like to thank my family for their encouragement and moral support
throughout my studies.
ABSTRACT
Leader-follower formation control of mobile robots has been investigated by many
researchers in the robotics community as a means of controlling multi-robot systems. This control method
can coordinate a team of unmanned vehicles by manoeuvring each robot to maintain its
desired position with respect to the leader of the formation. The leader-follower control
strategy can also be applied to implement human-follower behaviours on mobile robots. In
this thesis, a leader-follower controller was developed for the Micro-Hydraulic Toolkit
(MHT), a quadruped wheel-legged robot designed by Defence Research and Development
Canada (DRDC) at the Suffield Research Centre. Previously, a velocity-based inverse
kinematics controller was designed and implemented on the vehicle to control its posture.
However, since the MHT does not have a steering mechanism for its wheels, this controller is
unable to execute turning manoeuvres with the robot. Therefore, a separate controller was
developed in order to steer the MHT to achieve leader-follower formation control. The
purpose of the leader-follower controller is to compute the desired wheel velocities of the
robot to reach and maintain a desired range and bearing with respect to a designated leader.
To implement the controller on the physical robot, a vision algorithm was developed to
measure the range and bearing of the leader with respect to the MHT using a monocular
camera. A wide range of leader-follower scenarios was executed in simulation using a
high-fidelity physics-based model of the Toolkit and on the physical platform of the MHT to
assess the performance of the leader-follower controller. The results of these tests
demonstrated that the developed controller is capable of successfully executing leader-follower behaviours with the MHT.
ABRÉGÉ
L’approche de meneur-suiveur pour contrôler un groupe de robots a été le sujet de
plusieurs recherches dans le domaine de la robotique. Cette approche consiste à guider un
groupe de robots en commandant à chaque robot de garder une position prédéterminée par
rapport au meneur de la formation. Elle peut également être appliquée pour permettre à des
robots de suivre une personne. Dans ce mémoire, un mécanisme de contrôle pour accomplir
une fonction de meneur-suiveur est développé. Le contrôleur a été conçu pour le Micro-Hydraulic Toolkit (MHT), un robot quadrupède à locomotion hybride construit par
Recherche et développement pour la défense Canada au Centre de recherches de Suffield.
Précédemment, une stratégie de contrôle utilisant le modèle cinématique inverse du robot a
été développée et testée sur le MHT afin de permettre au véhicule de reconfigurer sa posture.
Toutefois, ce contrôleur est incapable de faire tourner le robot puisque le MHT ne peut pas
rediriger ses roues. Alors, un mécanisme de contrôle distinct a été créé pour permettre au
véhicule de suivre un meneur. L’objectif du contrôleur est de générer les commandes
cinématiques aux roues du MHT afin que le robot puisse maintenir une distance et un angle
prédéterminés par rapport au meneur. Pour implémenter le contrôleur sur le MHT, un
algorithme de vision par ordinateur a également été développé pour mesurer la distance et
l’angle du meneur par rapport au véhicule à l’aide d’une caméra. Le mécanisme de contrôle
présenté dans ce mémoire a été testé sous divers scénarios en simulation sur un modèle
détaillé du MHT ainsi que sur le prototype du robot. Les résultats des tests démontrent que le
MHT parvient à accomplir une fonction de meneur-suiveur avec succès à l’aide du contrôleur
conçu.
Table of Contents
ACKNOWLEDGEMENTS ....................................................................................................... i
ABSTRACT .............................................................................................................................. ii
ABRÉGÉ ................................................................................................................................. iii
Table of Contents ..................................................................................................................... iv
List of Tables .......................................................................................................................... vii
List of Figures ........................................................................................................................ viii
1 Introduction ........................................................................................................................... 1
1.1 Background .................................................................................................................... 1
1.1.1 Behaviour-Based Approach ................................................................................... 2
1.1.2 Virtual Structure Approach .................................................................................... 2
1.1.3 Leader-Follower Approach .................................................................................... 3
1.2 Problem Statement ......................................................................................................... 6
1.3 Thesis Structure ............................................................................................................. 8
2 Micro-Hydraulic Toolkit ....................................................................................................... 9
2.1 MHT Structural Components ......................................................................................... 9
2.2 Control Systems ........................................................................................................... 12
2.2.1 Leg Control System ............................................................................................. 12
2.2.2 Vision System ...................................................................................................... 13
2.2.3 Inertial Measurement Unit and Controller Board ................................................ 14
2.2.4 Power Supplies .................................................................................................... 15
2.3 Simulation Environment of the Micro-Hydraulic Toolkit ........................................... 15
2.4 Summary ...................................................................................................................... 16
3 Leader-Follower Controller ................................................................................................ 17
3.1 Controller Overview ..................................................................................................... 18
3.2 Leader-Follower Controller ......................................................................................... 21
3.2.1 Leader-Follower Controller Development ........................................................... 21
3.2.2 Leader-Follower Controller Adaptations ............................................................. 25
3.2.3 Leader-Follower Controller PID Tuning ............................................................. 25
3.3 Leader’s Trajectory Constraints ................................................................................... 26
3.4 State Machine ............................................................................................................... 30
3.5 Summary ...................................................................................................................... 33
4 Vision Algorithm ................................................................................................................ 35
4.1 Review of Object Tracking Methods ........................................................................... 35
4.1.1 Fiducial-Based Tracking ...................................................................................... 36
4.1.2 Model-Based Tracking ........................................................................................ 37
4.1.3 Appearance-Based Tracking ................................................................................ 37
4.2 Vision Algorithm Development ................................................................................... 40
4.2.1 Leader Tracking Algorithm ................................................................................. 41
4.2.2 Position Estimation Algorithm ............................................................................ 45
4.3 Vision Algorithm Results ............................................................................................ 47
4.4 Summary ...................................................................................................................... 52
5 Leader-Follower Manoeuvres ............................................................................................. 53
5.1 Turning Manoeuvres .................................................................................................... 54
5.2 Simulation of Leader’s Trajectory Constraints ............................................................ 57
5.3 Leader-Follower Manoeuvres on a Flat Terrain .......................................................... 62
5.3.1 Simulation Results ............................................................................................... 63
5.3.2 Experimental Results ........................................................................................... 70
5.4 Leader-Follower Manoeuvre on a Ramp ..................................................................... 77
5.5 Summary ...................................................................................................................... 81
6 Conclusions ......................................................................................................................... 83
6.1 Vision Algorithm ......................................................................................................... 83
6.2 Leader-Follower Controller ......................................................................................... 84
6.3 Future Work ................................................................................................................. 84
6.3.1 Model Validation ................................................................................................. 84
6.3.2 Turning Manoeuvres on Sloped Terrains ............................................................ 85
6.3.3 Vision System Improvements .............................................................................. 85
6.3.4 Leader-Follower Controller Improvement ........................................................... 85
6.3.5 High-Level Control Development ....................................................................... 86
References .............................................................................................................................. 87
List of Tables
Table 2.1: Dimensions and mass of the MHT's components .................................................. 11
Table 4.1: Vision algorithm range results ............................................................................... 48
Table 5.1: Turning manoeuvres results with 4V differential voltages.................................... 55
Table 5.2: Turning manoeuvres results with 10V differential voltages.................................. 56
Table 5.3: Leader-follower simulation scenarios for leader's trajectory constraints .............. 58
Table 5.4: Leader-follower simulation scenarios on a flat terrain .......................................... 63
Table 5.5: Leader-follower experimental scenarios on a flat surface ..................................... 71
List of Figures
Figure 1.1: Leader-follower controller research platforms ....................................................... 6
Figure 1.2: Micro-Hydraulic Toolkit [14] ................................................................................ 6
Figure 2.1: MHT sub-assemblies ............................................................................................ 10
Figure 2.2: MHT chassis components .................................................................................... 10
Figure 2.3: Home posture of the MHT ................................................................................... 12
Figure 2.4: MHT vision system .............................................................................................. 13
Figure 2.5: LMS model of the MHT ....................................................................................... 16
Figure 3.1: MHT controller architecture ................................................................................. 18
Figure 3.2: MHT posture parameters ...................................................................................... 19
Figure 3.3: Leader-follower formation framework ................................................................. 22
Figure 3.4: MHT steering kinematics model .......................................................................... 23
Figure 3.5: Game pad of the MHT controller ......................................................................... 31
Figure 3.6: State machine frameworks of the MHT ............................................................... 32
Figure 4.1: Fiducial-based tracking methods .......................................................................... 36
Figure 4.2: Example of model-based tracking [32] ................................................................ 37
Figure 4.3: Example of colour-based tracking [34] ................................................................ 38
Figure 4.4: Example of template and correspondence tracking [36] ...................................... 39
Figure 4.5: Example of SIFT tracking algorithm [39] ............................................................ 40
Figure 4.6: Leader tracking algorithm .................................................................................... 45
Figure 4.7: Position estimation algorithm ............................................................................... 46
Figure 4.8: Estimated range vs. actual range .......................................................................... 48
Figure 4.9: Vision algorithm – long range tracking error ....................................................... 49
Figure 4.10: Estimated bearing vs. actual bearing .................................................................. 50
Figure 4.11: CamShift algorithm - long-term tracking error .................................................. 51
Figure 5.1: MHT steering posture ........................................................................................... 54
Figure 5.2: Turning manoeuvre with 20V differential voltages – results ............................... 56
Figure 5.3: Leader-follower scenario 1 – setup ...................................................................... 58
Figure 5.4: Leader-follower scenario 2 – setup ...................................................................... 58
Figure 5.5: Leader-follower scenario 3 – setup ...................................................................... 58
Figure 5.6: Leader-follower scenario 1 – results .................................................................... 60
Figure 5.7: Leader-follower scenario 2 – results .................................................................... 61
Figure 5.8: Leader-follower scenario 3 – results .................................................................... 62
Figure 5.9: Leader-follower scenario 4 – setup ...................................................................... 63
Figure 5.10: Leader-follower scenario 5 – setup .................................................................... 64
Figure 5.11: Leader-follower scenario 6 – setup .................................................................... 64
Figure 5.12: Leader-follower scenario 7 – setup .................................................................... 65
Figure 5.13: Leader-follower scenario 8 – setup .................................................................... 65
Figure 5.14: Leader-follower scenario 4 – results .................................................................. 66
Figure 5.15: Leader-follower scenario 5 – results .................................................................. 66
Figure 5.16: Leader-follower scenario 6 – results .................................................................. 67
Figure 5.17: Leader-follower scenario 7 – results .................................................................. 67
Figure 5.18: Leader-follower scenario 8 – results .................................................................. 68
Figure 5.19: Leader-follower manoeuvres experimental setup .............................................. 71
Figure 5.20: Leader-follower scenario 9 – setup .................................................................... 71
Figure 5.21: Leader-follower scenario 10 – setup .................................................................. 72
Figure 5.22: Leader-follower scenario 11 – setup .................................................................. 72
Figure 5.23: Leader-follower scenario 12 – setup .................................................................. 73
Figure 5.24: Leader-follower scenario 13 – setup .................................................................. 73
Figure 5.25: Leader-follower scenario 9 – experimental results ............................................ 74
Figure 5.26: Leader-follower scenario 10 – experimental results .......................................... 74
Figure 5.27: Leader-follower scenario 11 – experimental results .......................................... 75
Figure 5.28: Leader-follower scenario 12 – experimental results .......................................... 75
Figure 5.29: Leader-follower scenario 13 – experimental results .......................................... 76
Figure 5.30: Leader-follower manoeuvre on a ramp - leader tracking results........................ 78
Figure 5.31: Leader-follower manoeuvre on a ramp – posture tracking results ..................... 79
Figure 5.32: Snapshots of leader-follower manoeuvre on a ramp .......................................... 80
Chapter 1
Introduction
In recent decades, technological advancements in the field of mobile robotics have
given rise to numerous uses of robots in the domestic, commercial and military sectors. Most
recently, researchers have been showing interest in the implementation of follower
behaviours on mobile robots. A robot capable of these behaviours can be tasked to track a
human, a lead vehicle or another robot. The potential applications of follower robots include:
carrying heavy objects for a human user [1], collaborating with other robots to explore urban
environments [2], and assisting in security patrols [3].
In this thesis, we investigate the leader-follower formation control problem for a
wheel-legged robot. The research presented in the literature for leader-follower control is
mainly done in the context of formation control for multi-robot systems. The formation
control problem consists of manoeuvring individual members of a multi-robot system to
maintain their desired relative positions within the team while the system moves as a whole.
While many methods have been studied to achieve formation control, the leader-follower
approach has emerged as the predominant strategy for controlling teams of mobile robots
due to its practicality [4].
1.1 Background
There are multiple strategies that have been developed in the literature to solve the
formation control problem for multi-robot systems. These solutions are mainly derived from
three different approaches: the behaviour-based approach, the virtual structure approach and the leader-follower approach.
1.1.1 Behaviour-Based Approach
The behaviour-based approach consists of imposing multiple reactive subtasks, or
behaviours, on each member of a team of mobile robots to achieve a global goal, such as
formation control. The controller of each behaviour issues a command to the robot in order to
complete a single objective, such as formation keeping and trajectory tracking. The resulting
action of the robot is obtained by combining the responses of the behaviours.
An example of behaviour-based approach was proposed by Balch and Arkin [5]. This
method was developed to achieve formation control with a team of mobile robots by
implementing the following behaviours on each robot: maintain the desired position in the
formation, follow the desired trajectory of the system, and avoid collision with other robots.
The final command of each robot is obtained by weighting the relative importance of each
behaviour and combining the behavioural responses.
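As an illustrative sketch only (not Balch and Arkin's actual implementation), the weighted combination of behavioural responses can be viewed as a weighted vector sum of per-behaviour velocity commands; the behaviour names, outputs and weights below are hypothetical:

```python
def blend_behaviours(responses, weights):
    """Combine per-behaviour velocity commands (vx, vy) into a single
    motor command by taking their weighted vector sum."""
    vx = sum(w * r[0] for w, r in zip(weights, responses))
    vy = sum(w * r[1] for w, r in zip(weights, responses))
    return (vx, vy)

# Hypothetical behaviour outputs for one robot, in m/s:
keep_formation = (0.0, 0.4)   # pull toward the assigned formation slot
track_path     = (0.5, 0.0)   # follow the team's desired trajectory
avoid_robots   = (-0.1, 0.0)  # push away from a nearby teammate

cmd = blend_behaviours([keep_formation, track_path, avoid_robots],
                       weights=[1.0, 1.0, 2.0])
# cmd is approximately (0.3, 0.4): collision avoidance, weighted twice
# as heavily, reduces the forward component of the command.
```

The weights encode the relative importance of each behaviour, which is how conflicting objectives such as formation keeping and collision avoidance are traded off.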
The main advantage of the behaviour-based approach is the ability to implement
multiple behaviours, such as formation keeping, obstacle avoidance and trajectory tracking,
on each agent of a team of mobile robots. However, the mathematical formulation of this
approach can be difficult. Moreover, the multiple behaviours implemented on a robot can
lead to contradictory commands which could prevent the robots from converging towards the
desired formation [6].
1.1.2 Virtual Structure Approach
The virtual structure approach models the formation of a team of robots as a rigid
structure. In this method, the desired trajectory of a multi-robot system is assigned to the
system as a whole. Each vehicle of the system is then forced to behave as a particle of the
structure to maintain the desired formation while the system moves along the prescribed path.
An example of the virtual structure approach is presented by Lewis and Tan [7]. In
this method, the controller first establishes the virtual structure of the formation using the
positions of the robots. The controller then determines the desired motion of the structure to
move along the prescribed path of the system. Subsequently, each member of the system
computes its desired trajectory to move the structure and to maintain the desired formation.
Finally, the mobile robots track their desired trajectories as the virtual structure follows its
path. Unfortunately, in the experimental demonstration of this virtual structure method, the
team of robots was unable to track the desired trajectory of the formation when one agent
was incapable of moving, as it prevented the virtual structure from advancing without
breaking the formation [7].
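The per-robot goal computation in a virtual structure scheme amounts to a rigid-body transform: each robot holds a fixed offset in the structure frame, and its desired world-frame position follows from the structure's pose. A minimal 2D sketch, with a hypothetical formation layout:

```python
import math

def structure_to_world(pose, offsets):
    """Map fixed robot offsets in the virtual-structure frame to desired
    world-frame positions, given the structure pose (x, y, theta)."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * ox - s * oy, y + s * ox + c * oy) for ox, oy in offsets]

# Hypothetical triangular formation, offsets fixed in the structure frame:
offsets = [(0.0, 0.0), (-1.0, 1.0), (-1.0, -1.0)]

# As the structure pose advances along the prescribed path, each robot
# receives an updated goal that preserves the rigid formation.
goals = structure_to_world((2.0, 0.0, math.pi / 2), offsets)
```

Because every goal is derived from a single structure pose, each robot must know that pose at every step, which is the source of the communication burden noted above.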
The main advantage of the virtual structure approach is that the members of the multi-robot system are guaranteed to achieve the desired formation. However, in this approach,
each vehicle must have the knowledge of the positions of the other robots in the formation.
This can lead to high inter-robot communication requirements, which can make this approach
impractical for physical multi-robot systems [7].
1.1.3 Leader-Follower Approach
The leader-follower approach employs a centralized strategy to achieve formation
control. In this approach, a member of the multi-robot system, defined as the leader, moves
along the predefined trajectory of the formation. The other robots, defined as followers, are
required to maintain their desired positions with respect to the leader to achieve formation
control. The main disadvantage of the leader-follower approach is the centralization of the
formation control problem on a single agent of the multi-robot system: if the leader fails to
track the prescribed path of the system, the follower robots will be unable to reach the goal as
well. Nonetheless, due to their simplicity, flexibility and low computational cost, leader-follower methods have been applied by many researchers to achieve formation control with
teams of mobile robots [8]. Furthermore, the leader-follower control strategies can be applied
to implement human-follower behaviours on mobile robots by defining the human user as the
leader. These advantages make the leader-follower approach an attractive solution to
implement on the research platform presented in this thesis (see Section 1.2).
In [9], Hogg et al. propose a leader-follower control architecture for a small tracked
robot (see Figure 1.1(a)). The objective of the controller is to maintain the follower vehicle at
a constant distance from a designated leader as it moves along its trajectory. To achieve
formation control, the leader periodically sends its position to the follower in order to
generate waypoints for the robot to pursue. The follower’s controller then directs the mobile
robot towards the closest waypoint to track the leader’s trajectory. The leader-follower
controller was implemented on the tracked mobile robot to follow a desired trajectory while
moving at a constant velocity¹ [9]. The results of the
experiments show that the robot successfully tracked the prescribed paths. However, this
controller was not tested in an experiment in which the robot needed to track an actual leader.
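The waypoint-pursuit step can be sketched with a generic unicycle steering law: drive at constant speed and turn in proportion to the heading error toward the current waypoint. This is not the controller of [9]; the speed and gain values below are hypothetical:

```python
import math

def pursue_waypoint(pose, waypoint, v=0.5, k_heading=1.5):
    """Return (velocity, turn rate) commands steering a unicycle-like
    robot at pose (x, y, theta) toward a waypoint (wx, wy)."""
    x, y, th = pose
    desired = math.atan2(waypoint[1] - y, waypoint[0] - x)
    # Wrap the heading error to [-pi, pi] so the robot takes the short turn.
    err = math.atan2(math.sin(desired - th), math.cos(desired - th))
    return v, k_heading * err

# Follower at the origin facing +x, waypoint ahead and to the left:
v_cmd, w_cmd = pursue_waypoint((0.0, 0.0, 0.0), (1.0, 1.0))
# w_cmd > 0: the robot turns left, toward the waypoint.
```

Chaining such a law over the leader's broadcast positions reproduces the leader's path, but it only tracks the trajectory; it does not regulate the follower's distance to a live leader.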
A virtual vehicle approach was developed by Ghommam et al. [10] to achieve leader-follower formation control. In this method, the purpose of the controller is to command a
group of mobile robots to maintain a relative range and bearing from a leader. The authors
assume that only the position and orientation of the leader are available to the controller.
Using this information, each follower first determines its desired configuration with respect
to the leader, and its desired velocity and turning rate to maintain the formation. Then, the
controller uses a trajectory tracking algorithm to move each follower robot towards its
desired state in order to achieve leader-follower formation control. To validate the virtual
vehicle approach, the authors implemented a test case in simulation where two mobile robots
were required to follow a leader moving along a prescribed trajectory [10]. The results of the
simulation demonstrate that the robots were able to achieve the desired formation. This
method was only tested in simulation.
In [11], Poonawala et al. developed a feedback leader-follower controller for a team
of differentially driven robots to maintain a relative distance between a follower and a leader.
The controller first uses a vision-based system to estimate the leader’s position and
orientation with respect to the follower. The algorithm then determines the desired position
of the follower, and uses the positioning and heading errors of the robot to calculate the
velocity and steering commands to move the vehicle towards its desired position. This
controller was tested on a multi-robot system composed of two iRobot Creates (see Figure
1.1(b)). In the experiments, the leader robot moved along circular and linear trajectories, and
the follower was required to achieve the desired leader-follower formation [11]. The results
of the tests show that the proposed controller was successfully able to manoeuvre the
follower robot to achieve leader-follower behaviours.
A leader-follower control strategy for a team of mobile robots with velocity
constraints was presented by Consolini et al. [6]. The objective of the controller is to move a
follower vehicle towards a desired range and bearing with respect to a leader. First, the
¹ In this thesis, the term velocity, when used on its own, refers to the translational velocity of a platform.
authors proposed an alternative approach to the leader-follower formation framework where
the desired range and bearing are measured in the follower’s reference frame instead of the
leader’s reference frame. Then, they developed a leader-follower controller that employs the
configuration of the leader, the velocity of the leader and the positioning errors of the
follower with respect to its desired position to move the follower robot towards the desired
leader-follower formation. The authors also derived constraints for the lead vehicle to ensure
that the follower can maintain the formation. The proposed leader-follower controller was
tested in simulation using a two-robot system [6]. In the simulation, the leader moved along a
circular trajectory and the follower was required to reach and maintain the desired range and
bearing. The simulation results demonstrate that the controller was able to achieve the
desired leader-follower formation. This leader-follower controller was only tested in
simulation.
In [12], Choi and Choi propose a leader-follower controller that uses PID control laws
to achieve formation control. The objective of the controller is to move a follower robot to
reach and maintain a desired range and bearing from the leader. The authors used the same
leader-follower formation framework as Consolini et al. [6]. The leader-follower controller
uses the velocity of the leader and PID control laws with the positioning errors of the
follower to calculate the desired velocity and turning rate of the vehicle to achieve leader-follower formation control. The controller was tested in an experiment where a follower
robot was required to track a lead vehicle moving along a circular trajectory at constant speed
[12]. The results of the experiment demonstrate that the proposed leader-follower controller
allowed the follower robot to reach and maintain the desired range and bearing with respect
to the leader.
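The structure of such a controller can be sketched as two PID loops acting on the range and bearing errors, with the leader's velocity fed forward. The gains below are illustrative, not the tuned values of [12] or of this thesis, and the sketch assumes a positive bearing means the leader lies to the follower's left:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

range_pid = PID(kp=0.8, ki=0.05)   # illustrative gains
bearing_pid = PID(kp=1.2)

def follower_command(rng, bearing, rng_des, bearing_des, leader_v, dt):
    """Compute (velocity, turn rate) for the follower: speed up when the
    measured range exceeds the desired range, turn to null the bearing
    error, and feedforward the leader's velocity."""
    v = leader_v + range_pid.step(rng - rng_des, dt)
    w = bearing_pid.step(bearing - bearing_des, dt)
    return v, w

# One control step: follower 1 m too far back, 0.2 rad off in bearing.
v_cmd, w_cmd = follower_command(3.0, 0.2, 2.0, 0.0, leader_v=0.5, dt=0.1)
```

Depending on the bearing sign convention, the turn-rate command may need a sign flip; the feedforward term lets the follower match the leader's speed once the range error vanishes.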
The main issue with the leader-follower control strategies detailed above is their
assumption that the full configuration of the leader, and possibly its velocity and turning rate,
are available to the follower robots to achieve leader-follower behaviours. In real-time
applications of multi-robot systems, practical sensors may not provide sufficiently accurate
measurements to satisfy this assumption, since the measured data are often subject to noise,
and therefore cannot be used to reliably estimate the full state of the leader. This hinders the
application of the aforementioned leader-follower controllers in real-time systems [12].
(a) Packbot [9]
(b) iRobot Create [13]
Figure 1.1: Leader-follower controller research platforms
1.2 Problem Statement
In this thesis, we present a leader-follower controller developed for the Micro-Hydraulic Toolkit (MHT) (see Figure 1.2). The MHT is a hybrid wheel-legged vehicle
designed and constructed by Defence Research and Development Canada (DRDC) at the
Suffield Research Centre. The design of the Toolkit allows the robot to combine the energy
efficiency of wheeled vehicles with the ability of legged robots to navigate over complex
terrains. The MHT is composed of four legs, each equipped with a hip joint, a knee joint and
a wheel end-effector. This topology allows the robot to actively reconfigure its posture while
navigating on uneven terrains. The components of the MHT are further detailed in Chapter 2.
Figure 1.2: Micro-Hydraulic Toolkit [14]
DRDC – Suffield Research Centre developed the MHT to investigate new modes of
locomotion and to study new control methods [15]. The motivation behind the design and the
construction of the Toolkit was to create a platform capable of load-bearing mule duties and
reconnaissance missions in urban environments. A significant amount of work has been carried
out by research personnel at DRDC – Suffield Research Centre and at McGill University on
the fundamental aspects of the kinematics and dynamics modelling, simulation and control of
the MHT. DRDC scientists have studied the range of motion of the Toolkit [16], determined
the dynamic tip-over stability of the robot [17] and investigated different posture control
strategies for the MHT [18]. Thomson [19] successfully created a velocity-level inverse
kinematics controller to control the posture of the Toolkit and tested the controller in
simulation. Türker et al. [20] assessed the performance of the posture controller in simulation
and further investigated the kinematic workspace of the robot on sloped terrains. Li and Sharf
[21] implemented several types of posture reconfiguration manoeuvres on the MHT in
simulation to verify the posture limits of the robot on uneven surfaces. Finally, Wong [14]
implemented the posture controller on the physical platform of the Toolkit, executed
experimental posture reconfiguration manoeuvres to test the controller, and developed and
applied a step-climbing manoeuvre on the MHT.
The purpose of the controller designed in this thesis is to implement leader-follower
behaviours on the MHT. This controller would allow the robot to track a human leader or a
lead vehicle in an urban environment. While basic navigational manoeuvres were
implemented in [14], it was demonstrated that the inverse kinematics controller is unable to
execute a turning motion with the MHT due to the lack of a steering mechanism on the
wheels [20]. Therefore, we designed a separate leader-follower controller that is capable of
steering the Toolkit while working in parallel with the posture controller of the robot. This
would enable the MHT to achieve leader-follower formation control and posture control
simultaneously. In order to implement the controller on the physical robot, we installed a
vision system on the MHT and developed a vision algorithm to allow the robot to estimate
the range and bearing of the leader (see Chapters 2 and 4). The leader-follower controller was
implemented on the Toolkit and tested in a simulation environment as well as on the physical
robot, located in Building 77 at DRDC – Suffield Research Centre.
1.3 Thesis Structure
The thesis is organized as follows. Chapter 2 details the main components and the
control systems of the Micro-Hydraulic Toolkit. It also includes a brief summary of the high-fidelity model of the MHT in LMS Virtual.Lab Motion used to simulate the behaviours of the
platform. In Chapter 3, we present an overview of the posture controller of the Toolkit and
the development of the leader-follower controller for the robot. We also derive constraints for
the leader’s trajectory to ensure that the MHT is capable of achieving the desired leader-follower formation. Chapter 4 covers the vision algorithm implemented with the vision
system installed on the robot to track the leader and estimate its range and bearing with
respect to the Toolkit. Chapter 5 details the results of the leader-follower scenarios
implemented in simulation and on the physical platform of the MHT to test the leader-follower controller. Finally, Chapter 6 summarizes the results of the leader-follower
behaviour implementation on the MHT and discusses potential future work on the Toolkit.
Chapter 2
Micro-Hydraulic Toolkit
The Micro-Hydraulic Toolkit is a quadruped wheel-legged robot. Each leg is
composed of a hip joint, a knee joint and a wheel end-effector, which grants the MHT a total
of 12 degrees of freedom. To achieve posture and trajectory control, multiple systems were
integrated on the platform.
In this chapter, we first detail the main structural components of the MHT and their
physical properties (see Section 2.1). Then, the systems used to control and power the robot
are presented (see Section 2.2). Finally, we describe the high-fidelity model of the MHT in
LMS Virtual.Lab Motion used for the simulations (see Section 2.3).
2.1 MHT Structural Components
The MHT is a complex entity that can be separated into five sub-assemblies: a chassis
and four leg assemblies (see Figure 2.1). The chassis is an aluminum structure that houses the
control electronics of the robot. The main controller board and the daughter controller board
are located in the centre and left compartments of the chassis. The inertial measurement unit,
which measures the orientation of the body of the MHT, is installed approximately at the
centre of gravity of the chassis. The servoamplifiers for the electric motors of the wheels are
in the left compartment of the chassis. The chassis also contains an empty space in its centre
acting as a future battery compartment (see Figure 2.2).
Figure 2.1: MHT sub-assemblies
The main components of the hydraulic system are located on the chassis. The hydraulic
pump and reservoir are at the back of the chassis, and the accumulator is attached to the front
of the chassis. The hydraulic hip actuators are located inside the chassis along each hip joint.
The hydraulic valves for the hip actuators are in the right compartment of the chassis (see
Figure 2.2). The total weight of the chassis and its dimensions are summarized in Table 2.1.
Figure 2.2: MHT chassis components
The four leg assemblies of the MHT are attached symmetrically to the chassis with
respect to its sagittal and coronal planes. The components of the legs are named using
anatomical terminology. Starting from the chassis, the legs are composed of a hip joint
assembly, a structural segment called femur, a knee joint assembly, a structural segment
called tibia, and a wheel end-effector (see Figure 2.1). The hip and the knee joints are both
powered by hydraulic actuators. They allow the legs to move their segments in parallel to the
sagittal plane of the Toolkit. The wheels are controlled by electric motors (see Section 2.2.1).
A summary of the weights and dimensions of the legs’ components is presented in Table 2.1.
Table 2.1: Dimensions and mass of the MHT's components
Component                   Dimensions                                 Mass (kg)
Chassis assembly            Length: 0.7 m; Width: 0.5 m;               76.5
                            Height: 0.23 m; Hip separation: 0.3 m
Hydraulic pump/reservoir    Length: 0.2 m; Width: 0.2 m;               15.9
                            Height: 0.46 m
Accumulator                 Length: 0.45 m; Diameter: 0.18 m           16.8
Femur                       Length: 0.315 m                            0.3
Knee joint assembly         N/A                                        4.9
Tibia                       Length: 0.377 m                            0.5
Wheel end-effector          Diameter: 0.254 m                          4.2
The angles of the hip and knee joints are set to zero when the MHT is in its home
posture. In the home posture, the femurs are parallel to the transverse plane of the chassis and
the tibias are parallel to the coronal plane of the chassis (see Figure 2.3). Given the limits of
the hydraulic actuators, the hip and the knee joints have a range of motion of approximately
100º. The hip joints are able to rotate between −50º and 50º, and the knee joints can move
between −40º and 60º (see Figure 2.3). The design of the hip and knee joint assemblies
allows the joints to be mechanically fastened at 22.5º increments to modify the workspace of
each leg. This feature was exploited neither in this thesis nor in the previous work at McGill.
Figure 2.3: Home posture of the MHT (annotated with the joint limits of the hips, −50° to 50°, and of the knees, −40° to 60°, the chassis length of 0.7 m, the hip separation of 0.3 m, the femur length of 0.315 m, the tibia length of 0.377 m and the wheel diameter of 0.254 m)
2.2 Control Systems
2.2.1 Leg Control System
The hip and the knee joint actuators of the MHT are powered by a hydraulic system
located onboard the chassis. The system includes a HydroPerfect International Inc. Series 0-100 DC motor pump, a 10 L reservoir, a 3.78 L accumulator and a Hydac EDS 3000
programmable gauge. The gauge controls the hydraulic pump to maintain the pressurized
fluid at approximately 2500 psi when the MHT is operational in order to drive the hip and
knee joints. Each joint assembly of the Toolkit incorporates a DS-Dynatec HPSD9022 non-continuous hydraulic actuator. The hydraulic actuators are controlled by Wandfluh NG3-Mini proportional 4-way spool valves, which were calibrated by Wong [14] to take voltage
commands between ±10 V to rotate the hip and the knee joints. The hydraulic valves
controlling the hip actuators are located on the chassis (see Figure 2.2), and the valve for
each knee actuator is placed along the hip joint of the corresponding leg assembly. The
scientists at DRDC – Suffield Research Centre estimated that the hydraulic actuators are
capable of applying a torque of 475 N·m when the hydraulic fluid is pressurized to 2880 psi.
Each hip and knee joint is also equipped with a potentiometer with analog-to-digital
conversion to provide joint angle feedback to the controller of the robot.
The wheel end-effectors of the MHT are all powered by Maxon EC45 electric motors
coupled with harmonic gear trains with a 50:1 ratio. Each motor is controlled by a Maxon 4-Q-EC Servoamplifier DES 70/10 located on the chassis (see Figure 2.2). The servoamplifiers
control the angular velocities of the wheels through voltage commands between ±10 V. The
angular velocities of the wheels are measured by US Digital E4P-300-237 quadrature
encoders installed in the wheel end-effectors.
2.2.2 Vision System
Prior to this thesis project, the Toolkit did not have any exteroceptive sensors. The
robot only had proprioceptive sensors to achieve posture control. Since the leader-follower
controller requires the MHT to detect the position of the leader, we installed a vision system
on the robot. The system consists of an IEEE 1394 Flea Camera from Point Grey Research,
Inc. mounted on a Pan-Tilt Unit-D46-17.5 from FLIR Systems, Inc. (see Figure 2.4). The
camera has a 42º horizontal field of view, and a 28º vertical field of view. To maintain the
leader in the field of view of the camera, the pan-tilt unit (PTU) can rotate horizontally
between ±180º, and vertically between −80º and 31º [22].
Figure 2.4: MHT vision system
The vision system is operated by a laptop located at a control station and connected to
the MHT by an umbilical cord. The laptop uses the Robot Operating System (ROS) to
capture the images from the camera and control the PTU. In this work, we also employed
ROS to execute a vision algorithm to estimate the position of the leader with respect to the
MHT using the images from the camera (see Chapter 4) and to communicate the results to
the main controller board of the robot. The laptop was also used to interface a wireless
gamepad to generate high-level control commands for the MHT (see Chapter 3).
2.2.3 Inertial Measurement Unit and Controller Board
To measure the posture of the MHT, an inertial measurement unit (IMU) was
installed at the centre of the chassis. The Toolkit uses a 3DM-G IMU from MicroStrain to
measure the pitch and roll angles of the chassis. This IMU also incorporates a magnetometer
to measure the yaw angle of the platform. However, due to magnetic interference caused by
high current wires on the chassis, the yaw angle measurements are unreliable and cannot be
used to control the MHT.
The controllers developed for the Toolkit, and in particular the leader-follower
controller detailed in this thesis, are implemented on the robot through the main controller
board of the robot. The controller board is Phytec’s phyCORE-MPC565 Rapid Development
Kit (RDK), which is built upon the MPC565 microprocessor from Freescale. The RDK is
connected to a control station by a USB cable. The control station uses Matlab and Simulink
to implement controllers on the MHT. To upload the controllers to the microprocessor and
retrieve sensor data, the control station employs Freescale’s CodeWarrior program. All the
systems on the Toolkit are connected to the RDK. To drive the actuators, the controller board
is capable of applying voltage commands within approximately ±10V to the leg control
system. Furthermore, the RDK has free channels to install additional sensors or actuators on
the MHT for future development of the platform.
2.2.4 Power Supplies
The MHT is currently powered by three separate power supplies. In previous work,
Wong [14] found that the high current requirements of the hydraulic system caused large
voltage drops in the other parts of the control system connected to the same power supply.
He also determined that the performance of the sensors suffered when they were connected to
the same power supply as the actuators. Therefore, the Toolkit is currently powered by three
separate power systems located at the control station. The power systems supply energy to
the following control systems separately: the controller boards and the proprioceptive
sensors, the hydraulic valves, and the hydraulic pump and the electric motors of the wheels.
Researchers at DRDC – Suffield Research Centre are presently investigating methods
to power the MHT with an autonomous power supply onboard the chassis. Once an
autonomous power system is found and installed on the robot, the Toolkit will be capable of
autonomous operations in outdoor environments.
2.3 Simulation Environment of the Micro-Hydraulic Toolkit
The leader-follower control strategy presented in this thesis was first tested in
simulation. The simulations were also used to predict the behaviour of the MHT under
different scenarios and to tune the parameters of the leader-follower controller prior to its
implementation on the physical robot. These tests were performed using a high-fidelity
model of the Toolkit in LMS Virtual.Lab Motion (see Figure 2.5). The model was created by
LMS engineers, and includes the MHT’s components, the mass distribution of the robot, the
dynamics of the hydraulic system and the wheel-ground contact forces. In Chapter 5, we
observe that the dynamic properties of the wheels in the model are different from the
properties of the physical platform. These disparities are reflected in the results of the leader-follower scenarios implemented in simulation and on the physical MHT (see Chapter 5).
Still, LMS Virtual.Lab Motion enabled the implementation and evaluation of many leader-follower test cases for the MHT in a virtual environment prior to testing on the physical
robot.
Figure 2.5: LMS model of the MHT
To run simulations, the software LMS Virtual.Lab Motion functions in parallel with
Matlab/Simulink. The controllers developed in Simulink are connected to the LMS model of
the MHT through a Simulink block representing the Toolkit. This block has inputs and
outputs similar to those on the physical MHT platform: 12 desired input voltages for the hip
joints, knee joints and wheel end-effectors, and outputs for the positions and angular
velocities of the joints, the angular velocities of the wheels and the pose of the chassis.
Additional outputs can be added to the LMS model of the MHT to simulate exteroceptive
sensors for the controller.
2.4 Summary
In this chapter, we presented the Micro-Hydraulic Toolkit. First, the structural
components of the robot and their basic properties were detailed. Then, we described the
control systems of the Toolkit and the addition of the vision system to implement leader-follower behaviour on the MHT. Finally, a brief overview of the high-fidelity model of the
MHT in LMS Virtual.Lab Motion used for the simulations was presented. More detailed and
complementary descriptions of the hardware and the simulation platform can be found in
previous theses completed at McGill University [14, 19].
Chapter 3
Leader-Follower Controller
This chapter outlines the architecture of the MHT’s controller. In previous work, a
decoupled posture and trajectory controller was developed for the Toolkit and successfully
tested in simulation [19] and on the physical MHT platform [14]. This controller was based
on the algorithm developed for Hylos [23] due to the similarities in the design of the two
robots. However, contrary to Hylos, the MHT does not have a steering mechanism for its
wheels. Therefore, the trajectory controller of Hylos could not be directly employed on the
Toolkit since it is unable to execute turning manoeuvres with the robot. In this thesis project,
we developed a separate leader-follower controller capable of steering the MHT to achieve
leader-follower formation control. We also derived constraints for the leader’s trajectory
based on the velocity constraints of the MHT to ensure that the robot can reach the desired
leader-follower formation. In the scope of this work, we expanded the state machine
framework originally implemented by Wong [14] to increase the MHT’s autonomous
abilities.
In the following sections, we first present an overview of the architecture of the
MHT’s final controller and a summary of the posture controller (see Section 3.1). Then, we
describe the development of the leader-follower controller, the adaptation of the controller
for the physical MHT platform and the tuning process for the parameters of the controller
(see Section 3.2). Next, we derive the leader’s trajectory constraints for the MHT to be able
to achieve the desired leader-follower formation (see Section 3.3). Finally, in Section 3.4, the
modified state machine framework of the MHT is detailed.
3.1 Controller Overview
The purpose of the MHT’s controller is to drive the joints of the robot to achieve
posture and trajectory control. The controller of the Toolkit is a velocity control-based state
machine that consists of a decoupled posture and trajectory controller (see Figure 3.1).
Figure 3.1: MHT controller architecture (block diagram: the trajectory state machine contains the leader-follower controller, which uses the measured range and bearing (r_a, γ_a) from the vision system and the user control inputs to compute (v_xd, ω_d), the steering kinematics model producing the desired wheel rates q̇_w, and a scaling function yielding q̇_w,scaled; the posture state machine contains the inverse kinematics model driven by the posture error Δp between the desired posture p_d and the actual posture p_a, the latter computed by forward kinematics from the joint feedback q_a; the combined desired joint rates q̇_d are compared with the actual joint rates q̇_a and converted by a PID control law into the joint voltage commands V applied to the MHT)
In Figure 3.1, the posture controller of the MHT consists of the orange blocks and the
trajectory controller is composed of the blue blocks. Both controllers incorporate a state
machine in their structures, which allows the user to implement different modes of operation
on the robot. The state machine framework is further detailed in Section 3.4.
The posture controller of the MHT uses the desired posture (𝐩𝑑 ) coupled with the
actual posture (𝐩𝑎 ) as inputs. The posture parameters of the vehicle are defined in Figure 3.2
and Equation (3.1).
Figure 3.2: MHT posture parameters (chassis frame X, Y, Z with roll φ and pitch ψ, and wheel positions x_i, z_i)
𝐩 = [𝜙 𝜓 𝑧 𝑥1 𝑥2 𝑥3 𝑥4 𝑧1 𝑧2 𝑧3 𝑧4 ]𝑇
(3.1)
where 𝜙 is the chassis roll angle, 𝜓 is the chassis pitch angle, 𝑧 is the chassis height, and 𝑥𝑖
and 𝑧𝑖 are the 𝑥- and 𝑧-positions of the wheels with respect to the chassis body-fixed frame.
In the implementation of the controller on the MHT, the pitch and roll angles of the chassis
are measured by the IMU (see Chapter 2); the positions of the wheels with respect to the
chassis are computed using the MHT’s forward kinematics model with the angle feedback
from the joints (𝐪𝑎 ). The chassis height is not measured directly; under the assumption that
all wheels are in contact with the ground, the height of the chassis is estimated using the
z-positions of the wheels as:

$z = \frac{1}{4}\sum_{i=1}^{4} z_i$  (3.2)
The posture controller uses the inverse kinematics model of the MHT to compute the
desired joint rates of each leg (𝐪̇ 𝑝𝑖 ) to achieve the desired posture. The outputs of the inverse
kinematics model of the posture controller are defined as:
$\dot{\mathbf{q}}_{pi} = [\dot{\alpha}_{pi} \;\; \dot{\beta}_{pi} \;\; \omega_{pi}]^T, \quad i = 1, \dots, 4$  (3.3)
where 𝛼̇ 𝑝𝑖 is the desired hip joint rate, 𝛽̇𝑝𝑖 is the desired knee joint rate and 𝜔𝑝𝑖 is the desired
wheel angular velocity for each leg. First, the posture error of the MHT (Δ𝐩) is calculated
using the desired posture (𝐩𝑑 ) and the actual posture (𝐩𝑎 ) of the robot. Then, a proportional
control law is employed to compute the desired platform velocity vector of the Toolkit (𝐯𝑃 )
as follows:
𝐯𝑃 = 𝐂𝑝 Δ𝐩
(3.4)
where 𝐂𝑝 is a selection matrix. Finally, the desired joint rates of each leg are computed using
the inverse kinematics model of the MHT as:
𝐪̇ 𝑝𝑖 = 𝐉𝑖−1 𝐋𝑖 𝐯𝑃 , 𝑖 = 1, … ,4
(3.5)
where 𝐉𝑖−1 is the inverse of the modified Jacobian matrix and 𝐋𝑖 is the modified locomotion
matrix of each leg. In previous work, Thomson [19] further modified the inverse kinematics
control algorithm of Hylos [23] to enable direct control of the 𝑧-positions of the wheels for
the purpose of generating stepping behaviours for the MHT. With this adaptation, the posture
controller can operate in two modes: non-stepping mode and stepping mode. In the non-stepping mode, the posture controller moves the joints of the MHT to maintain the desired
posture of the chassis and the 𝑥-positions of the wheels. In the stepping mode, the posture
controller only controls the 𝑥- and 𝑧-positions of the wheels while ignoring the posture of the
chassis. In [14], Wong adapted the inverse kinematics model to the physical MHT platform
and used the stepping mode of the controller to implement step-climbing manoeuvres on the
robot.
The purpose of the trajectory controller investigated in this thesis is to manoeuvre the
MHT to achieve leader-follower formation control. The leader-follower controller of the
MHT uses the desired range (𝑟𝑑 ) and bearing (𝛾𝑑 ), as defined by the user, and the actual
range (𝑟𝑎 ) and bearing (𝛾𝑎 ) of the leader measured by the vision system to compute the
desired velocity (𝑣𝑥𝑑 ) and turning rate (𝜔𝑑 ) of the robot to reach the desired formation. Then, the
steering kinematics model of the Toolkit is used to determine the desired wheel angular rates
(combined in 𝐪̇ 𝑤 to match 𝐪̇ 𝑝 ) to achieve the desired velocities. Finally, the desired wheel
rates are scaled down (𝐪̇ 𝑤,𝑠𝑐𝑎𝑙𝑒𝑑 ) to limit the outputs of the trajectory controller to the
maximum angular velocity of the wheels. The leader-follower controller is further detailed in
Section 3.2.
The outputs of the posture and trajectory controllers are combined to obtain the
desired joint rates of the MHT (𝐪̇ 𝑑 ). The desired joint rates are then coupled with the actual
joint rates of the robot (𝐪̇ 𝑎 ) to be converted into the desired joint voltages (combined in 𝐕)
using a PID control law to drive the hips, knees and wheels of the MHT.
3.2 Leader-Follower Controller
In previous work [14, 19], the inverse kinematics model was also used to implement
limited trajectory control on the MHT to move the robot along a linear path. However, it was
demonstrated by Türker et al. [20] that the trajectory controller of Hylos [23] was not capable
of executing turning manoeuvres with the Toolkit since the robot does not have a steering
mechanism. Hence, we developed a separate leader-follower controller for the MHT that can
steer the robot to achieve leader-follower formation control (see Section 3.2.1). The leader-follower controller was implemented in parallel with the posture controller to allow the MHT
to achieve leader-follower behaviour and posture control simultaneously. The controller was
initially tested in simulation. To be applied on the physical platform, the controller required
several modifications, similar to the adaptations of the posture controller in [14] (see
Section 3.2.2). Finally, in Section 3.2.3, we present the method used to tune the control
parameters of the leader-follower controller for the MHT.
3.2.1 Leader-Follower Controller Development
In light of the practicality of the vision system chosen for the Toolkit (see Chapter 4),
we assumed that only the range and the bearing information of the leader with respect to the
MHT are available for the robot to achieve leader-follower formation control. Based on this
assumption, we constructed the leader-follower framework for the MHT building on the
work of Choi and Choi [12] for formation control of multiple mobile robots. The objective of
the leader-follower controller is to direct the Toolkit to reach and maintain a desired range
(𝑟𝑑 ) and bearing (𝛾𝑑 ) from the leader specified in the reference frame of the robot (see Figure
3.3). Let the configurations of the leader and of the follower be defined as follows:
$\mathbf{R} = [X \;\; Y \;\; \theta]^T = [\mathbf{P}^T \;\; \theta]^T$  (3.6)
where 𝑋 and 𝑌 represent the leader’s or the follower’s position in the inertial reference
frame, and 𝜃 describes its orientation. Then, the follower is said to have achieved the desired
leader-follower formation when Equation (3.7) is satisfied, i.e.:
𝐏𝐹 = 𝐏𝐿 − 𝑟𝑑 𝐮𝑟 (𝜃𝐹 + 𝛾𝑑 )
(3.7)
where 𝐮𝑟 (𝜃𝐹 + 𝛾𝑑 ) is a unit vector with orientation 𝜃𝐹 + 𝛾𝑑 , i.e., in the direction from the
follower’s desired position to the leader.
Figure 3.3: Leader-follower formation framework (the inertial frame XY; the leader at 𝐏𝐿 with heading θ𝐿; the follower at 𝐏𝐹 with heading θ𝐹 and body frame xy; the desired position at range r_d and bearing γ_d from the leader, with the actual range r_a and bearing γ_a and the positioning errors e_x and e_y)
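As an illustration of Equation (3.7), a short Python sketch (our own, not from the thesis) computes the position the follower must reach for a given leader position, follower heading, and desired range and bearing:

```python
import math

def desired_follower_position(P_L, th_F, r_d, g_d):
    """Equation (3.7): P_F = P_L - r_d * u_r(th_F + g_d), where
    u_r(a) = (cos a, sin a) is the unit vector with orientation a."""
    X_L, Y_L = P_L
    return (X_L - r_d * math.cos(th_F + g_d),
            Y_L - r_d * math.sin(th_F + g_d))
```

For example, with the leader 2 m directly ahead of a follower heading along the x-axis and a desired range of 2 m at zero bearing, the desired position coincides with the follower's current position.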
The MHT, being a skid-steering vehicle, can only turn by driving its left and right
wheels at different angular velocities. However, turning manoeuvres cause the wheels to
skid, which is not possible to capture in the kinematics model of the robot. Different methods
have been developed in the literature to control skid-steering robots [24]. Unfortunately, they
require the knowledge of the wheel-ground contact forces and torque control of the wheels,
which is not possible for the MHT’s current controller. To steer the MHT, we assumed that
the skidding effect of the wheels is negligible and that the kinematics of the MHT can be
sufficiently represented with the kinematics model of a two-wheel differentially driven
vehicle, in particular:
$v_x = \frac{r_w}{2}(\omega_R + \omega_L)$  (3.8)

$\omega = \frac{r_w}{2c}(\omega_R - \omega_L)$  (3.9)
where 𝑣𝑥 is the velocity of the MHT in its body-fixed 𝑥-direction, 𝜔 is its turning rate, 𝜔𝑅 is
the angular rate of the right wheels, 𝜔𝐿 is the angular rate of the left wheels, 𝑟𝑤 represents the
radius of the wheels and 𝑐 is half of the chassis width (see Figure 3.4).
Figure 3.4: MHT steering kinematics model
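Equations (3.8) and (3.9), together with their inversion used later by the controller, can be sketched in Python. This is an illustrative transcription: the wheel radius and half-width below are derived from Table 2.1 (0.254 m wheel diameter, 0.5 m chassis width) and are not the calibrated controller values:

```python
# Representative MHT geometry (from Table 2.1; not the calibrated values).
R_W = 0.127  # wheel radius in m (0.254 m diameter)
C = 0.25     # half of the chassis width in m (0.5 m width)

def body_rates(w_R, w_L, r_w=R_W, c=C):
    """Equations (3.8)-(3.9): right/left wheel rates -> body velocity v_x
    and turning rate w of the differentially driven model."""
    v_x = r_w * (w_R + w_L) / 2.0
    w = r_w * (w_R - w_L) / (2.0 * c)
    return v_x, w

def wheel_rates(v_x, w, r_w=R_W, c=C):
    """Inverse mapping: desired body velocities -> (w_L, w_R) wheel rates."""
    return (v_x - c * w) / r_w, (v_x + c * w) / r_w
```

Driving both wheels at the same rate yields pure translation, while opposite rates yield a turn in place, which is how a skid-steering vehicle changes heading.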
Having defined the leader-follower problem and simplified steering kinematics of the
MHT for wheeled locomotion, we can now develop the leader-follower controller. First, we
define the positioning errors of the MHT with respect to its desired position, the latter
defined by the desired range and bearing in the reference frame of the robot (see Figure 3.3).
Using the actual range (𝑟𝑎 ) and bearing (𝛾𝑎 ) of the leader, the longitudinal (𝑒𝑥 ) and lateral
positioning errors (𝑒𝑦 ) are computed as done by Choi and Choi [12]:
𝑒𝑥 = 𝑟𝑎 cos 𝛾𝑎 − 𝑟𝑑 cos 𝛾𝑑
(3.10)
𝑒𝑦 = 𝑟𝑎 sin 𝛾𝑎 − 𝑟𝑑 sin 𝛾𝑑
(3.11)
Next, the leader-follower controller determines the desired velocity (𝑣𝑥𝑑 ) and turning
rate (𝜔𝑑 ) of the MHT to reach its desired position. The desired velocity is computed with a
PID control law using the longitudinal positioning error. The desired turning rate of the MHT
is calculated using the lateral positioning error with a PD control law combined with an
integral term from [25] to compensate for the skidding resistance of the wheels. The control
laws for the velocity and turning rate of the MHT are thus defined as:
$v_{xd} = K_{Px} e_x + K_{Ix} \int e_x \, dt + K_{Dx} \frac{de_x}{dt}$  (3.12)

$\omega_d = K_{Py} e_y + K_{Iy} \, \mathrm{sign}(e_y) \int e_y \, \mathrm{sign}(e_y) \, dt + K_{Dy} \frac{de_y}{dt}$  (3.13)
where 𝐾𝑃𝑥 , 𝐾𝐼𝑥 , 𝐾𝐷𝑥 , 𝐾𝑃𝑦 , 𝐾𝐼𝑦 and 𝐾𝐷𝑦 are the gains of the leader-follower controller.
Finally, the desired rates for the left (𝜔𝐿 ) and right wheels (𝜔𝑅 ) of the MHT are
calculated from the results of Equations (3.12) and (3.13) using the steering kinematics
model of the MHT, defined by Equations (3.8) and (3.9). The desired left and right wheel
rates are:
$\omega_L = \frac{v_{xd} - c\,\omega_d}{r_w}$  (3.14)

$\omega_R = \frac{v_{xd} + c\,\omega_d}{r_w}$  (3.15)
The outputs of the leader-follower controller are subsequently combined with the desired
wheel angular rates computed by the posture controller of the MHT, as illustrated in Figure
3.1. Lastly, a PID control law is applied to the joint rate error to obtain the voltage commands
to all the joints of the robot. In order to prevent the integral term of the PID control law for
the voltage commands from overcompensating for the desired wheel velocities, we added a
function block to limit the outputs of the leader-follower controller to the maximum angular
velocities of the wheels (see Figure 3.1).
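The full trajectory-control step described above can be sketched as follows. This is a hedged illustration: the gains, wheel geometry and rate limit are placeholders (the tuned MHT values are not reproduced here), and the derivative and integral terms are discretized naively:

```python
import math

class LeaderFollowerPID:
    """One control step of the leader-follower law, Equations (3.10)-(3.15).
    Gains and limits are illustrative placeholders, not the tuned MHT values."""

    def __init__(self, r_d, g_d, gains_x=(0.8, 0.05, 0.1),
                 gains_y=(1.5, 0.05, 0.2), r_w=0.127, c=0.25, w_max=10.0):
        self.r_d, self.g_d = r_d, g_d
        self.Kpx, self.Kix, self.Kdx = gains_x
        self.Kpy, self.Kiy, self.Kdy = gains_y
        self.r_w, self.c, self.w_max = r_w, c, w_max
        self.int_x = self.int_y = 0.0
        self.prev = None  # previous (e_x, e_y) for the discrete derivative

    def step(self, r_a, g_a, dt):
        # Positioning errors, Equations (3.10)-(3.11).
        e_x = r_a * math.cos(g_a) - self.r_d * math.cos(self.g_d)
        e_y = r_a * math.sin(g_a) - self.r_d * math.sin(self.g_d)
        d_x = d_y = 0.0
        if self.prev is not None:
            d_x = (e_x - self.prev[0]) / dt
            d_y = (e_y - self.prev[1]) / dt
        self.prev = (e_x, e_y)
        # Desired velocity, Equation (3.12).
        self.int_x += e_x * dt
        v_xd = self.Kpx * e_x + self.Kix * self.int_x + self.Kdx * d_x
        # Desired turning rate with the skid-compensating integral, Eq. (3.13).
        self.int_y += e_y * math.copysign(1.0, e_y) * dt
        w_d = (self.Kpy * e_y + self.Kiy * math.copysign(1.0, e_y) * self.int_y
               + self.Kdy * d_y)
        # Wheel rates, Equations (3.14)-(3.15), scaled to the wheel rate limit.
        w_L = (v_xd - self.c * w_d) / self.r_w
        w_R = (v_xd + self.c * w_d) / self.r_w
        scale = max(1.0, abs(w_L) / self.w_max, abs(w_R) / self.w_max)
        return w_L / scale, w_R / scale
```

With the leader exactly at the desired range and bearing, the commanded wheel rates are zero; with the leader farther ahead, both wheels are driven forward, and the scaling step caps the commands at the wheel rate limit.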
3.2.2 Leader-Follower Controller Adaptations
The leader-follower controller was developed in Matlab/Simulink and was first tested
in co-simulation with LMS Virtual.Lab Motion. To implement the controller on the physical
MHT platform, adaptations are required. As explained in [14], the simulation environment
acts as a continuous system while the physical robot is a discrete system. Therefore, the
Simulink blocks of the controller used for the simulations must be compatible with a discrete
system when the controller is implemented on the MHT prototype. All derivative and the
integer blocks of the MHT’s leader-follower controller are to be switched to their discrete
counterparts. Finally, the user must ensure that all operations of the Simulink controller
respect the fundamental time step of the RDK (0.00078125 s), which was defined by Wong
[14] based on the frequency at which the MPC565 can execute the controller of the Toolkit.
3.2.3 Leader-Follower Controller PID Tuning
For the leader-follower controller to achieve adequate performance, the gains in
Equations (3.12) and (3.13) must be tuned properly. Two sets of experiments were conducted
to calibrate the parameters. In the first set of tests, we fixed the leader at a specific range in
front of the MHT, and the robot was required to advance towards the target to reduce the
longitudinal positioning error. In the second set of experiments, the leader was placed at a
specific range and bearing from the Toolkit, and the platform was only instructed to turn in
the direction of the leader to decrease the lateral positioning error. Using the plots of the
positioning errors, we tuned the PID gains of the leader-follower controller using the steps
outlined in [26]:
1. Initially set the integral gain (𝐾𝐼 ) and the derivative gain (𝐾𝐷 ) to zero, and set a low
proportional gain (𝐾𝑃 ).
2. Slowly increase 𝐾𝑃 until the positioning error achieves a 10% overshoot.
3. Increase 𝐾𝐷 until there is no more overshooting of the positioning error.
4. Increase 𝐾𝐼 until the positioning error reaches a 15% overshoot.
This method allowed us to hand-tune the gains of the leader-follower controller on the
physical platform of the MHT. In the leader-follower scenarios presented in Chapter 5, we
employed the same gains calibrated on the physical robot in the simulations in order to
compare the performance of the leader-follower controller under similar conditions.
3.3 Leader’s Trajectory Constraints
Since the velocity and turning rate of the MHT are bounded, we must define the
limits of the leader’s velocities to ensure that the Toolkit can reach the desired leader-follower formation. Let the velocity and turning rate be denoted by 𝑣𝐿 and 𝜔𝐿 for the leader,
and 𝑣𝐹 and 𝜔𝐹 for the follower. We can assume that the leader and follower follow the
kinematics model of a unicycle in the inertial frame, defined as follows:
$$\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{\theta} \end{bmatrix}
= \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} v \\ \omega \end{bmatrix} \qquad (3.16)$$
We characterize the trajectory curvature of the leader with 𝑘𝐿; for a nonzero leader velocity
it can be calculated as:

$$k_L = \frac{\omega_L}{v_L} \qquad (3.17)$$
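As an illustration, the unicycle model (3.16) can be Euler-integrated over one time step; the function name and the explicit-Euler scheme are illustrative, not the thesis implementation:

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """Euler-integrate the unicycle model of Equation (3.16) over one step dt."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```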
The kinematic constraints of the follower are characterized by its maximum velocity and
turning rate, represented by (𝑉𝐹 , Ω𝐹 ). To derive the leader’s trajectory constraints, we thus
assume that the follower satisfies its kinematic constraints when the following conditions are
respected:
$$0 < v_F \le V_F \qquad (3.18)$$

$$-\Omega_F \le \omega_F \le \Omega_F \qquad (3.19)$$
Let the positioning error vector of the follower in the global inertial frame be defined by 𝐄.
Using Equation (3.7), the positioning error of the follower is expressed as:
$$\mathbf{E} = \mathbf{P}_F - \left(\mathbf{P}_L - r_d\,\mathbf{u}_r(\theta_F + \gamma_d)\right) \qquad (3.20)$$
In order to derive the trajectory constraints for the leader, it is first necessary to
determine the ideal velocities of the follower to maintain the leader-follower formation.
Under the assumption that the follower is initially in formation (𝐄 = 𝟎), we can determine
the follower’s desired velocities to maintain the formation using 𝐄̇ = 𝟎. By differentiating
Equation (3.20) and applying Equation (3.16), the following equation can be found:
$$v_L\,\mathbf{u}_r(\theta_L) = v_F\,\mathbf{u}_r(\theta_F) + r_d\,\omega_F\,\mathbf{u}_t(\theta_F + \gamma_d) \qquad (3.21)$$
where 𝐮𝑡 (𝜃𝐹 + 𝛾𝑑 ) represents an orthogonal unit vector with respect to 𝐮𝑟 (𝜃𝐹 + 𝛾𝑑 ). By
multiplying the above equation by the rotation matrix:
$$\mathbf{R}(-\theta_F) = \begin{bmatrix} \cos\theta_F & \sin\theta_F \\ -\sin\theta_F & \cos\theta_F \end{bmatrix} \qquad (3.22)$$
the subsequent equation is obtained:
$$v_L\,\mathbf{u}_r(\theta_L - \theta_F) = v_F \begin{bmatrix} 1 \\ 0 \end{bmatrix} + r_d\,\omega_F\,\mathbf{u}_t(\gamma_d) \qquad (3.23)$$
Using Equation (3.23), we can define the ideal velocities of the follower to maintain the
leader-follower formation as follows:
$$v_F = v_L\,\frac{\cos(\theta_L - \theta_F - \gamma_d)}{\cos(\gamma_d)} \qquad (3.24)$$

$$\omega_F = v_L\,\frac{\sin(\theta_L - \theta_F)}{r_d \cos(\gamma_d)} \qquad (3.25)$$
which is the same control law developed by Consolini et al. [6].
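Equations (3.24) and (3.25) translate directly into code; the sketch below computes the ideal follower velocities for given headings (function and argument names are illustrative):

```python
import math

def ideal_follower_velocities(v_l, theta_l, theta_f, r_d, gamma_d):
    """Equations (3.24)-(3.25): follower velocities that keep E = 0,
    assuming r_d > 0 and |gamma_d| < pi/2."""
    v_f = v_l * math.cos(theta_l - theta_f - gamma_d) / math.cos(gamma_d)
    w_f = v_l * math.sin(theta_l - theta_f) / (r_d * math.cos(gamma_d))
    return v_f, w_f
```

When the leader and follower share the same heading and $\gamma_d = 0$, the follower simply matches the leader's velocity with zero turning rate.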
Based on Equations (3.24) and (3.25), we observe that the follower’s ability to
maintain the formation depends on the velocity of the leader, and the difference in orientation
between the leader and the follower (𝜃𝐿 − 𝜃𝐹 ). To respect the follower’s kinematic constraint
(3.18), assuming 𝑟𝑑 > 0 and |𝛾𝑑 | < 𝜋/2, the following condition must be respected:
$$|\theta_L - \theta_F| < \pi/2 \qquad (3.26)$$
Consolini et al. [6] established a correlation between (𝜃𝐿 − 𝜃𝐹 ), the leader’s trajectory
curvature (𝑘𝐿 ) and the desired leader-follower formation. Letting 𝛽 be defined as:
$$\beta = \theta_L - \theta_F \in \left]-\frac{\pi}{2}, \frac{\pi}{2}\right[ \qquad (3.27)$$
we differentiate 𝛽 and apply Equation (3.25) to obtain:
$$\dot{\beta} = \frac{v_L}{r_d \cos(\gamma_d)}\left(k_L r_d \cos(\gamma_d) - \sin(\beta)\right) \qquad (3.28)$$
Consolini et al. [6] determined the following solutions for 𝛽:
a) If |𝑘𝐿 𝑟𝑑 cos(𝛾𝑑 )| > 1, ∀𝑡, then lim𝑡→∞ |𝛽| = ∞.
b) If 𝑘𝐿 is constant and |𝑘𝐿 |𝑟𝑑 cos(𝛾𝑑 ) ≤ 1, then
lim𝑡→∞ 𝛽 = arcsin( 𝑘𝐿 𝑟𝑑 cos(𝛾𝑑 )).
Based on the definition of 𝛽, solution (a) is not feasible. Therefore, the trajectory constraint
for the leader’s curvature derived using solution (b) is:
$$-\frac{1}{r_d \cos(\gamma_d)} \le k_L \le \frac{1}{r_d \cos(\gamma_d)} \qquad (3.29)$$
Assuming that the leader moves along a constant curvature 𝑘𝐿 at a constant velocity 𝑣𝐿 > 0,
and that the conditions for solution (b) are respected, Equation (3.25) can be applied as
follows:
$$\lim_{t\to\infty} \omega_F = \lim_{t\to\infty} v_L\,\frac{\sin(\theta_L - \theta_F)}{r_d \cos(\gamma_d)} = v_L\,\frac{k_L r_d \cos(\gamma_d)}{r_d \cos(\gamma_d)} = \omega_L \qquad (3.30)$$
Based on the solution derived above, for the follower to respect its turning-rate constraint
(3.19) while maintaining the desired leader-follower formation, the following condition must
be respected:
$$-\Omega_F \le \omega_L \le \Omega_F \qquad (3.31)$$
Similarly, Equation (3.24) can also be applied with solution (b) as:
$$\begin{aligned}
\lim_{t\to\infty} v_F &= \lim_{t\to\infty} v_L\,\frac{\cos(\theta_L - \theta_F - \gamma_d)}{\cos(\gamma_d)} \\
&= \frac{v_L}{\cos(\gamma_d)} \lim_{t\to\infty}\bigl(\cos(\beta)\cos(\gamma_d) + \sin(\beta)\sin(\gamma_d)\bigr) \\
&= \frac{v_L}{\cos(\gamma_d)}\left(\sqrt{1 - (k_L r_d \cos(\gamma_d))^2}\,\cos(\gamma_d) + k_L r_d \cos(\gamma_d)\sin(\gamma_d)\right)
\end{aligned} \qquad (3.32)$$
Therefore, to maintain the formation while respecting the constraint (3.18), the leader's
velocity and trajectory curvature must satisfy the condition:
$$V_F \cos(\gamma_d) \ge v_L\left(\sqrt{1 - (k_L r_d \cos(\gamma_d))^2}\,\cos(\gamma_d) + k_L r_d \cos(\gamma_d)\sin(\gamma_d)\right) \qquad (3.33)$$
To simplify the constraint for the velocity of the leader, we can use a more conservative
constraint from Equation (3.33) as:
$$V_F \cos(\gamma_d) \ge v_L \qquad (3.34)$$
With the above development, we have therefore established that when the leader respects the
trajectory constraints (3.29), (3.31) and (3.34), the MHT will be able to achieve the desired
range and bearing with respect to the leader.
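The three trajectory constraints can be checked together. The sketch below is illustrative and assumes 𝑟𝑑 > 0 and |𝛾𝑑| < π/2:

```python
import math

def leader_trajectory_feasible(v_l, w_l, V_F, Omega_F, r_d, gamma_d):
    """Check the leader's trajectory against constraints (3.29), (3.31)
    and (3.34); assumes r_d > 0 and |gamma_d| < pi/2."""
    if v_l <= 0:
        return False
    k_l = w_l / v_l                                              # Equation (3.17)
    curvature_ok = abs(k_l) <= 1.0 / (r_d * math.cos(gamma_d))   # (3.29)
    turning_ok = abs(w_l) <= Omega_F                             # (3.31)
    velocity_ok = v_l <= V_F * math.cos(gamma_d)                 # (3.34)
    return curvature_ok and turning_ok and velocity_ok
```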
3.4 State Machine
The state machine framework was initially implemented in the MHT’s controller by
Wong [14] to allow easy transitions between different modes and behaviours of the robot.
The previous state machine only used the proprioceptive sensors of the MHT and a timer to
define the state transition conditions, which limited the autonomous abilities of the platform
(see Figure 3.6(a)). In this thesis, we have expanded the state machine framework with the
addition of exteroceptive sensors on the Toolkit. Alongside the vision system, we installed a
game pad which allows users to operate the MHT remotely (see Figure 3.5). The
communication protocol for the game pad with the MPC 565 was implemented by O’Reilly
[27] in ROS (see Chapter 2). We modified the state transition conditions of the previous state
machine to allow the user to control the MHT’s state transitions. We also implemented a
state machine for the trajectory controller to work in parallel with the state machine of the
posture controller. The state machine of the trajectory controller enables the user to remotely
control the velocity and turning rate of the robot when the leader-follower controller is not
used (see Figure 3.1). Using the two state machines, we implemented different modes of
operation on the MHT. The improved state machine framework allows the user to actively
switch between the different modes of operation of the robot (see Figure 3.6(b)).
[Figure 3.5: Game pad of the MHT controller, with buttons mapped to: Pause, Start-Up Mode, Remote Control Mode, Leader-Follower Mode, Leader-Follower on Ramp Mode, Remote Control on Ramp Mode, Experimental Mode, Homing Mode, Ending Mode, and velocity and turning rate control for Remote-Control Mode]
In this thesis, we mainly focus on three modes of operation implemented on the MHT
for the leader-follower experiments: start-up mode, leader-follower mode and ending mode.
The descriptions of the three modes are:
1. Start-Up Mode: Developed by Wong [14], this is the mode where the MHT
reconfigures from its initial posture to the home posture while remaining stationary.
The state machines in the final controller are designed to execute the start-up
sequence when the MHT is first powered on to allow all of its systems to power up.
2. Leader-Follower Mode: The MHT first reconfigures its posture to the steering posture
(detailed in Chapter 5) while remaining stationary. When the posture reconfiguration
is completed, the leader-follower controller is activated to direct the MHT toward its
desired position with respect to the leader while the posture controller maintains the
steering posture.
3. Ending Mode: This mode was developed in this thesis to power off the MHT at the
end of each experiment. The MHT slowly lowers the chassis while spreading its legs
until the chassis rests on the ground.
In order to avoid unstable manoeuvres, several additional state transition conditions were
implemented in ROS by O’Reilly [27]. These conditions use logical operations to prevent the
user from accidentally initiating a state transition that would cause the MHT to collapse. For
example, the state machine only allows the MHT to change from the Ending Mode to the
Start-Up Mode (see Figure 3.6(b)).
[Figure 3.6: State machine frameworks of the MHT — (a) previous state machine diagram, with time-based transition conditions between manoeuvres of the start-up sequence; (b) current state machine diagram, with user-defined transitions between modes]
3.5 Summary
In this chapter, the controller architecture of the MHT was presented. First, we briefly
reviewed the posture controller of the Toolkit. Then, the development of the leader-follower
controller was detailed. In order to implement the final controller on the physical MHT
platform, adaptations were made to the controller to work with a discrete system. We also
described the method used to tune the gains of the leader-follower controller. Next, we
derived trajectory constraints for the leader to ensure that the MHT can achieve the desired
range and bearing with respect to the leader. Finally, we presented the state machine
framework implemented in the MHT’s controller, which allows the user to remotely control
the different modes of operation of the MHT.
Chapter 4
Vision Algorithm
This chapter details the vision algorithm implemented on the MHT to track the leader.
To achieve leader-follower behaviour, we installed a vision system on the chassis of the
Toolkit composed of a monocular camera mounted on a pan-tilt unit (see Chapter 2). The
main advantage of using a vision-based system to follow the leader is its ability to process a
large amount of information with a relatively small system. Additionally, the camera is a
passive sensor, which means that it can be used to sense the surroundings of the MHT
without interfering with the environment. The camera and the PTU are operated by the laptop
located at the control station of the MHT. In this work, we developed a vision algorithm to
track the leader in the field of view of the camera and to estimate its relative position with
respect to the MHT.
In the coming sections, we first review different object tracking methods that use
monocular cameras (see Section 4.1). Then, we present the leader tracking algorithm
implemented with the vision system of the MHT and the approach used to measure the
position of the leader with respect to the robot (see Section 4.2). In the final section of this
chapter, we discuss the performance of the vision algorithm.
4.1 Review of Object Tracking Methods
The objective of the vision system on a mobile robot such as the MHT can be partitioned
into two sub-objectives: to identify the leader in the field of view of the camera and to
estimate the position of the leader with respect to the MHT. To track the leader with a
monocular camera,
we investigated different object tracking algorithms. Many methods have been investigated
by researchers and in industry to track a moving object using a single camera. In
this section, we focus on the existing object tracking algorithms for solid objects. The
methods reviewed can be classified based on the properties of the object of interest used for
tracking. We investigated three categories of object tracking methods: fiducial-based
tracking, model-based tracking and appearance-based tracking.
4.1.1 Fiducial-Based Tracking
Fiducial-based object tracking consists of adding visual markers to the scene (the
leader in our case) to track an object. The markers, or fiducials, are composed of distinctive
visual features that are easy to detect in the images from the camera (see Figure 4.1(a)).
Fiducial-based tracking algorithms can also be applied to determine the pose of the object of
interest with respect to the camera by comparing the perceived geometry in the image with
its actual geometry [28] (see Figure 4.1(b)).
(a) Example of fiducials [29]
(b) Example of fiducial-based pose estimation [30]
Figure 4.1: Fiducial-based tracking methods
However, the implementation of fiducial-based tracking methods must be very specific to the
selected markers. Therefore, they cannot be easily adapted to a different target. Furthermore,
the fiducial-based tracking methods lack robustness against noise in the image and partial
occlusion of the markers [31].
4.1.2 Model-Based Tracking
Model-based tracking methods use the knowledge of the 3D structure of an object to
detect it in the image. The representation of the object can be a CAD model, a set of planes
or a rough ellipsoid model of the object. The 3D knowledge of the object allows the
algorithm to recognize an object from any point of view and to determine its pose with
respect to the camera. To identify the object in each frame, the model-based methods match
the projection of the 3D representation of the object to its perceived corners or edges in the
image (see Figure 4.2).
[Figure 4.2: Example of model-based tracking [32] — (a) image sequence; (b) perceived edges of the object in the image; (c) model matched to the object in the image]
The disadvantages of the model-based tracking methods are their complexity and their lack
of robustness to image noise. As is the case with fiducial-based tracking, the model-based
methods can also fail when the object of interest is partially occluded [28].
4.1.3 Appearance-Based Tracking
Appearance-based tracking methods employ the visual characteristics of an object,
such as its colour, shape and texture, to identify the object in each frame from the camera.
These methods are more robust than the previous two categories to image noise and partial
occlusion. However, it is more difficult to determine the pose of the object with respect to the
camera using appearance-based tracking. Furthermore, the complexity of each method varies
depending on the visual features used to detect the object. The appearance-based methods
reviewed in this section are: colour-based tracking, template and correspondence, and feature
points-based methods.
The colour-based tracking methods detect an object by identifying the areas in the
image of similar colours to the colours of the object (see Figure 4.3). They are very simple to
implement and computationally inexpensive. The colour-based tracking algorithms are also
robust to scaling and partial occlusion of the object. Unfortunately, they are fragile to
changes in the illumination of the object. Moreover, if an object of similar colour enters the
field of view of the camera, the algorithm might track the new object instead of the object of
interest. To increase the robustness of colour-based tracking, researchers have developed
adaptive colour recognition algorithms, such as the Mean Shift and the CamShift algorithms
[33]. However, the adaptive algorithms can fail over long-term tracking and are still not
robust to illumination changes [31].
Figure 4.3: Example of colour-based tracking [34]
The second category of appearance-based methods reviewed here is the template and
correspondence methods. This approach consists of using trained image templates of an
object to detect the object in the frames of the camera (see Figure 4.4). The template and
correspondence algorithms search each image for a region with similar appearance to the
templates of the object. An example of template and correspondence tracking is presented in
[35], where an object is detected by minimizing the sum-of-squared-differences between the
template and the image from the camera. The main advantage of the template and
correspondence methods is their ability to detect a variety of objects. However, they are
computationally expensive and not robust to partial occlusion of the object of interest in the
image. Furthermore, their robustness to illumination change and scaling depends on the
number of trained templates used.
(a) Template
(b) Image
Figure 4.4: Example of template and correspondence tracking [36]
The last category of appearance-based methods reviewed in this section is the feature
points-based methods. These methods use multiple small visual features of an object to track
it in the images from the camera. Ideally, the feature points used are invariant to illumination
variations and to the changes of the point of view of the object of interest. By determining the
relationship between the feature points of an object in the image, the feature points-based
methods can identify the object and estimate its pose with respect to the camera. Multiple
algorithms have been developed by researchers for object tracking using visual features. For
example, the Harris corner detector [37], the Lucas Kanade method [38] and the Scale
Invariant Feature Transform (SIFT) algorithm [39] are all feature points-based methods that
have been applied in the computer vision community. The SIFT algorithm has also been used
for other applications, such as object recognition, due to its robustness to illumination
changes, scaling, partial occlusion and changes in the orientation of the object (see Figure
4.5). However, the feature points-based methods are complex and can be computationally
expensive. More importantly, they require a textured object with many features and high
resolution images from the camera to properly achieve object tracking [31].
Figure 4.5: Example of SIFT tracking algorithm [39]
4.2 Vision Algorithm Development
Each method presented in the previous section has its strengths and weaknesses. The
selection of the most suitable method to track the leader in our leader-follower framework
depends on the properties of the vision system and the visual characteristics of the leader.
The IEEE 1394 Flea camera installed on the Toolkit only captures low-quality images of the
leader since its lens cannot refocus automatically (see Chapter 2). Therefore, model-based
methods and fiducial-based methods were not considered, since both require high-resolution
images of the leader. In our work, we defined the leader as a purple ball of
uniform colour to simplify the leader tracking problem as well as the estimation of the
leader’s range and bearing with respect to the MHT. At the present time, the environment in
which the Toolkit is tested has constant lighting conditions, which limits the effect of
illumination changes on the ball. Thus, we decided to implement the colour-based tracking
methods due to their simplicity and speed.
The vision solution implemented on the MHT incorporates two algorithms: a leader
tracking algorithm and a position estimation algorithm. First, we implemented the
Continuously Adaptive Mean Shift (CamShift) algorithm to detect the area of the image
where the leader is located. Then, we applied a simple colour-based detection method
combined with a contour detection algorithm to estimate the range and bearing of the ball
with respect to the MHT using its perceived size and position in the field of view of the
camera.
4.2.1 Leader Tracking Algorithm
The CamShift algorithm is an adaptive colour recognition algorithm that identifies the
region in the image where the object of interest is located. It is an adaptation of the Mean
Shift algorithm. The main difference between the Mean Shift algorithm and the CamShift
algorithm is that the latter recalculates the dimensions of the region of interest in each frame
of the camera. Both algorithms use the probability distribution images generated from the
images of the camera to find the location of the object. The probability distribution images
are composed of pixels whose values represent the probability that the pixel belongs to the
targeted object. A common method to generate the probability distribution images for the
CamShift algorithm is the histogram back-projection [33]. In this method, the user first
defines a normalized histogram representing the colours of the object of interest in the
images. This histogram is quantified by bins that group pixels of similar colours together. In
this thesis, we generated the initial histogram (ℎ̂) of the leader using the Hue plane of the
Hue, Saturation and Value (HSV) space of its image. The histogram is computed using the
equation:
$$\hat{h}_u = \sum_{i=1}^{n} \delta\left[k(x_i^*) - u\right], \quad u = 1, \dots, m \qquad (4.1)$$
where {ℎ̂𝑢 }𝑢=1,…,𝑚 represents the set of bins of the histogram, {𝑥𝑖∗ }𝑖=1,…,𝑛 represents the
locations of the pixels in the region of interest of 𝑛 pixels, and 𝑘(𝑥𝑖∗ ) is a function that
associates the pixel at location 𝑥𝑖∗ to its proper bin in the histogram. Then, the normalized
histogram (𝑔̂) is computed as follows:
$$\hat{g}_u = \min\left(\frac{255}{\max(\hat{h})}\,\hat{h}_u,\ 255\right), \quad u = 1, \dots, m \qquad (4.2)$$
Equation (4.2) normalizes the histogram to a range of [0, 255], which represents the intensity
range of the probability distribution image used by the CamShift algorithm [33]. Once the
normalized histogram is computed, the histogram back-projection generates the probability
distribution images by associating each pixel of the images from the camera to the value of
their corresponding bin in the normalized histogram.
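A minimal NumPy sketch of Equations (4.1)-(4.2) and the back-projection step follows. The bin count and the use of OpenCV's hue range [0, 180) are assumptions; in practice, OpenCV's `calcHist`/`calcBackProject` perform the same operations:

```python
import numpy as np

def normalized_hue_histogram(hue_roi, m=16):
    """Equations (4.1)-(4.2): bin the hue values of the target region, then
    rescale so the largest bin maps to 255 (probability-image intensity)."""
    # Hue in OpenCV's HSV space spans [0, 180); each bin groups 180/m values.
    h, _ = np.histogram(hue_roi, bins=m, range=(0, 180))
    return np.minimum(255.0 * h / max(h.max(), 1), 255.0)

def back_project(hue_image, g, m=16):
    """Map every pixel to the value of its histogram bin, producing the
    probability distribution image used by the CamShift algorithm."""
    bins = np.clip((hue_image.astype(int) * m) // 180, 0, m - 1)
    return g[bins]
```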
After the probability distribution image is obtained, the CamShift algorithm uses the
Mean Shift algorithm to determine the location of the region in which the leader is located.
Using an initial search window obtained by the user during the initialization of the algorithm,
the Mean Shift algorithm computes the location of the ball in the region by first calculating
the zeroth and the first moments of the search window with the following equations:
$$M_{00} = \sum_x \sum_y I(x, y) \qquad (4.3)$$

$$M_{10} = \sum_x \sum_y x\,I(x, y) \qquad (4.4)$$

$$M_{01} = \sum_x \sum_y y\,I(x, y) \qquad (4.5)$$
where 𝐼(𝑥, 𝑦) is the intensity of the probability distribution image at the position (𝑥, 𝑦). The
mean location of the region of interest (𝑥𝑐 , 𝑦𝑐 ) is then updated with the equations below:
$$x_c = \frac{M_{10}}{M_{00}} \qquad (4.6)$$

$$y_c = \frac{M_{01}}{M_{00}} \qquad (4.7)$$
Once the location of the region of interest is computed, the Mean Shift algorithm recalculates
the mean location of the ball in the new search window. This process is repeated until there
are no significant shifts in the position of the region or until the maximum number of
iterations is reached.
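The Mean Shift iteration over Equations (4.3)-(4.7) can be sketched as follows; the window format, the convergence threshold, and the omission of image-boundary clamping are simplifications:

```python
import numpy as np

def mean_shift_location(prob, window, max_iter=10, eps=1.0):
    """Iterate Equations (4.3)-(4.7) until the window centre converges.
    `window` is (x, y, w, h) in image coordinates; boundary clamping
    is omitted for brevity."""
    x0, y0, w, h = window
    for _ in range(max_iter):
        roi = prob[y0:y0 + h, x0:x0 + w]
        m00 = roi.sum()                                # Equation (4.3)
        if m00 == 0:
            break
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        xc = (xs * roi).sum() / m00                    # Equations (4.4), (4.6)
        yc = (ys * roi).sum() / m00                    # Equations (4.5), (4.7)
        # Shift the window so its centre moves toward (xc, yc).
        dx = int(round(xc - w / 2.0))
        dy = int(round(yc - h / 2.0))
        if abs(dx) < eps and abs(dy) < eps:
            break
        x0 += dx
        y0 += dy
    return x0, y0, w, h
```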
Finally, the CamShift algorithm updates the size of the region of interest to match the
size of the object in the image. To do so, the algorithm computes the second moments of the
region of interest identified by the Mean Shift algorithm as:
$$M_{20} = \sum_x \sum_y x^2 I(x, y) \qquad (4.8)$$

$$M_{02} = \sum_x \sum_y y^2 I(x, y) \qquad (4.9)$$

$$M_{11} = \sum_x \sum_y x\,y\,I(x, y) \qquad (4.10)$$
Then, the CamShift algorithm calculates the intermediary variables using the moments of the
region of interest with the following equations:
$$d = \frac{M_{20}}{M_{00}} - x_c^2 \qquad (4.11)$$

$$e = 2\left(\frac{M_{11}}{M_{00}} - x_c y_c\right) \qquad (4.12)$$

$$f = \frac{M_{02}}{M_{00}} - y_c^2 \qquad (4.13)$$
Finally, the width (𝑤) and the height (ℎ) of the region containing the leader are calculated
using:
$$w = \sqrt{\frac{(d + f) - \sqrt{e^2 + (d - f)^2}}{2}} \qquad (4.14)$$

$$h = \sqrt{\frac{(d + f) + \sqrt{e^2 + (d - f)^2}}{2}} \qquad (4.15)$$
The histogram back-projection and the CamShift algorithms described above were both
implemented using the available functions in the computer vision library OpenCV [40].
To increase the robustness of the leader tracking algorithm, we added a filter prior to
the execution of the CamShift algorithm to remove the pixels outside of the colour range of
the leader. This additional step allows the histogram back-projection to obtain more
distinctive probability distribution images, allowing the CamShift algorithm to locate the
region of interest more reliably. The steps of the final leader tracking algorithm are
summarized below and illustrated in Figure 4.6:
1. Define the initial search window where the leader is located and generate the
normalized histogram of the leader.
2. Convert the image from the camera to HSV image (see Figure 4.6(b)).
3. Filter the image to remove pixels outside the colour range of the leader (see Figure
4.6(c)).
4. Compute the probability distribution image using the histogram back-projection with
the hue plane of the HSV image.
5. Calculate the region of interest of the HSV image where the ball is located with the
CamShift algorithm (see Figure 4.6(d)).
6. Update the initial search window to match the region of interest computed.
7. Update the image of the camera and go to step 2.
We implemented the leader tracking algorithm on the vision system using ROS and the
OpenCV library [40].
(a) Original image
(b) HSV image
(c) Filtered image
(d) CamShift algorithm output
Figure 4.6: Leader tracking algorithm
4.2.2 Position Estimation Algorithm
To estimate the range and bearing of the leader with respect to the MHT, we use the
perceived size of the ball and its position in the field of view of the camera. While the
CamShift algorithm can detect the area in the image where the leader is located, we observed
that the algorithm can lose a portion of the ball over long-term tracking (see Figure 4.11).
Therefore, we developed a separate solution to detect the shape of the leader and its position
in the image using the output of the leader tracking algorithm. The position estimation
algorithm designed is a combination of a simple colour-based tracking method and a contour
detection algorithm. The algorithm is summarized as follows with accompanying illustrations
in Figure 4.7:
1. Extract the HSV image located in the region of interest computed by the leader
tracking algorithm (see Figure 4.7(a)).
2. Identify the pixels in the colour range of the leader (see Figure 4.7(b)).
3. Detect the minimum circle enclosing the pixels detected (see Figure 4.7(c)).
4. Compute the size and the position of the leader in the image from the camera (see
Figure 4.7(d)).
5. Estimate the range and the bearing of the leader with respect to the MHT.
Similarly to the leader tracking algorithm, we implemented the position estimation algorithm
on the vision system using ROS and the OpenCV library [40].
[Figure 4.7: Position estimation algorithm — (a) HSV image of the leader; (b) pixels in the colour range of the leader; (c) minimum circle enclosing the leader; (d) size and position of the leader in the image]
To estimate the range of the leader, we applied a function correlating the radius of the
ball in the images with its distance from the MHT. This function was obtained by measuring
the perceived radius of the leader in the image at multiple predefined distances from the
Toolkit. Then, using the software Eureqa from Nutonian Inc. [41], we were able to determine
a mathematical relationship between the perceived radius of the ball and its range with
respect to the MHT with minimum error.
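The fitted expression itself is not reproduced here. As an illustration only, a simple inverse-proportional (pinhole-style) model can be calibrated from such radius-range pairs by least squares; the thesis instead used a free-form fit produced by Eureqa:

```python
import numpy as np

def fit_inverse_model(radii_px, ranges_cm):
    """Fit range ~ k / radius by least squares on calibration pairs.
    This inverse-proportional form is an illustrative assumption, not the
    expression fitted with Eureqa in the thesis."""
    x = 1.0 / np.asarray(radii_px, float)
    y = np.asarray(ranges_cm, float)
    return float(np.dot(x, y) / np.dot(x, x))

def estimate_range(radius_px, k):
    """Range (cm) predicted from the perceived radius (pixels)."""
    return k / radius_px
```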
The estimation of the bearing of the leader with respect to the Toolkit was achieved
using the horizontal position of the ball in the image from the camera and the pan angle
feedback of the PTU. First, the horizontal position of the ball in the image with respect to the
centre of the image is measured. The bearing of the ball with respect to the camera is then
obtained by correlating the horizontal position of the leader in the image frame with the
horizontal field of view of the camera (see Chapter 2). The PTU is controlled by a PID
control law implemented by O’Reilly [27] to ensure that the leader remains in the field of
view of the camera. The controller readjusts the pan and tilt angles of the camera to maintain
the leader in the centre of the image. Finally, the bearing of the leader with respect to the
MHT is calculated by combining the bearing of the ball with respect to the camera and the
pan angle feedback of the PTU.
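The bearing computation can be sketched as follows, assuming a linear pixel-to-angle mapping across the camera's horizontal field of view (a simplification of the true projection):

```python
def leader_bearing(pixel_x, image_width, hfov_deg, pan_deg):
    """Combine the ball's horizontal offset from the image centre with the
    PTU pan angle feedback. `hfov_deg` is the camera's horizontal field of
    view; the linear pixel-to-angle mapping is an assumption."""
    offset = pixel_x - image_width / 2.0
    bearing_camera = offset * hfov_deg / image_width
    return bearing_camera + pan_deg
```

When the PTU controller keeps the ball centred, the offset term vanishes and the bearing reduces to the pan angle alone, which is why the estimate inherits the PTU's precision.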
4.3 Vision Algorithm Results
To validate the vision algorithm, we performed two sets of experiments to assess the
accuracy and precision of the range and bearing estimation. First, we placed the leader at
different distances from the camera and evaluated the range estimated by the algorithm. For
each distance, we recorded over a thousand measurements to gauge the precision of the
algorithm. The results of this experiment are displayed in Figure 4.8 and Table 4.1, which
includes the average estimated range and the standard deviation of the measurements at each
actual distance tested.
[Figure 4.8: Estimated range (cm) vs. actual range (cm)]
Table 4.1: Vision algorithm range results

Actual Range (cm)   Average Estimated Range (cm)   Error (cm)   Standard Deviation (cm)
50                  49.53                          0.47         3.40
75                  67.37                          7.63         5.17
100                 94.34                          5.66         22.34
125                 114.56                         10.44        7.43
150                 138.20                         11.80        2.49
175                 166.43                         8.57         4.62
200                 192.90                         7.10         2.57
225                 223.41                         1.59         5.25
250                 254.88                         4.88         3.98
275                 297.89                         22.89        9.25
300                 343.89                         43.89        21.66
325                 378.48                         53.48        15.80
350                 423.17                         73.17        36.36
In the results shown in Figure 4.8 and Table 4.1, we observe that the range estimation
of the vision algorithm is satisfactory between the actual ranges of 50 cm and 250 cm. At
ranges below 250 cm, the average errors are lower than 12 cm and the standard deviation
remains below 8 cm. There is an exception at an actual range of 100 cm, where the standard
deviation reaches 22.34 cm. This occurred because the illumination conditions of the leader
varied at this particular distance, which affected the precision of the position estimation
algorithm. Once the leader moves beyond a range of 250 cm from the camera, the average
errors of the range estimation increase significantly, and the standard deviations rise as well.
The high estimation errors at long ranges are mainly due to the inability of the vision
algorithm to detect the correct contour of the ball in the images: the system underestimates
the size of the ball (see Figure 4.9). This causes the vision algorithm to measure a higher
range than the actual distance of the leader from the MHT.
Figure 4.9: Vision algorithm – long range tracking error
To test the bearing estimation of the vision system, we placed the ball at different
angles from the MHT and measured the bearing value computed by the vision algorithm.
Similarly to the range experiment, we recorded over a thousand measurements for each
bearing tested. The results of this experiment are displayed in Figure 4.10.
[Figure 4.10: Estimated bearing (°) vs. actual bearing (°)]
The bearing estimates shown in Figure 4.10 are very close to the actual bearing of the leader.
The average bearing estimation error is approximately 1º and standard deviations are below
0.1º. These results show that the bearing estimation of the vision algorithm performs very
well. This is expected since the controller of the PTU rotates the camera so as to maintain the
ball at the centre of the field of view of the camera. Therefore, the bearing estimates are
mainly computed from the pan angle of the PTU, which is measured within an error range of
±1º and is very precise.
During the experiments, we observed that the performance of the vision algorithm
was affected by several elements in the environment of the MHT. First, since the vision
algorithm developed uses colour-based tracking algorithms, it is not robust to illumination
changes of the leader. While the MHT is tested in a laboratory with constant lighting
conditions, the variations of the point of view of the leader with respect to the camera can
create changes in the perceived illumination of the ball, affecting the performance of the
vision system. Secondly, we observed that the background of the leader influences the output
of the vision system. When the colour of the background was similar to that of the ball, the
vision algorithm detected a larger contour for the leader, which resulted in a lower range
estimate. This effect was minimized in the leader-follower experiments by removing objects
of similar colour to the ball from the background. Next, we noticed that partial occlusion of
the ball prevents the position estimation algorithm from detecting the full size of the ball.
Therefore, we avoided any obstruction of the leader during the manoeuvres presented in
Chapter 5. Finally, we remarked that the leader tracking algorithm often failed to detect the
complete area of the ball after multiple leader-follower manoeuvres (see Figure 4.11). This
was expected since adaptive colour-based tracking algorithms can fail over long-term
tracking [31]. Therefore, it is necessary to reinitialize the vision algorithm when this error
occurs.
Figure 4.11: CamShift algorithm - long-term tracking error
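One simple safeguard against this failure mode is to compare the tracked contour's area against the area the ball should subtend at the current range estimate, and flag the tracker for reinitialization when the ratio collapses. A sketch; the threshold and all names are illustrative assumptions, not the thesis implementation:

```python
def needs_reinitialisation(tracked_area_px, expected_area_px, min_ratio=0.5):
    """Flag the tracker for reinitialisation when the detected contour
    covers much less of the image than the ball should at the current
    range estimate -- a symptom of long-term CamShift drift.
    The 0.5 threshold is an illustrative assumption."""
    if expected_area_px <= 0:
        return True  # no valid expectation: reinitialise defensively
    return (tracked_area_px / expected_area_px) < min_ratio
```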
The experiments with the vision system showed that the vision algorithm can be used
to evaluate the leader-follower controller in a well-conditioned environment. The vision
algorithm range estimation achieves satisfactory results between 50 cm and 250 cm, and the
bearing estimates are very accurate and precise. For future development of the leader-follower controller on the MHT for unconditioned environments, we recommend
investigating alternative object tracking methods that are more robust to illumination
changes, such as feature points-based methods or fiducial-based methods.
4.4 Summary
In this chapter, we described the implementation of a vision system on the MHT to
track the leader using the IEEE 1394 Flea camera. First, we reviewed the different methods
to track an object and estimate its pose with a monocular camera. Then, we detailed the
vision algorithm developed to track the leader in the image of our vision system and to
estimate its range and bearing with respect to the Toolkit. Finally, we presented the
experimental results evaluating the performance of the vision algorithm. The vision system
performs adequately for the purpose of testing the leader-follower controller in a laboratory
environment. Nevertheless, we recommend investigating more advanced vision algorithms to
visually track the leader for future development of the MHT.
Chapter 5
Leader-Follower Manoeuvres
In this chapter, we present the leader-follower scenarios implemented on the MHT to
assess the performance of the leader-follower controller. To test the controller developed in
Chapter 3, we designed several leader-follower manoeuvres for testing on the robot. These
scenarios were implemented on the Toolkit in LMS Virtual.Lab Motion and on its physical
platform. In the majority of the leader-follower manoeuvres presented in this chapter, the
MHT is instructed to maintain a constant steering posture, defined by a chassis roll angle of
𝜙 = 0°, a chassis pitch angle of 𝜓 = 0°, a chassis height of 𝑧 = 0.409 𝑚, 𝑥-positions of the
wheels of 𝑥1 = 𝑥2 = 0.225 𝑚 and 𝑥3 = 𝑥4 = −0.225 𝑚, and 𝑧-positions of the wheels of
𝑧1 = 𝑧2 = 𝑧3 = 𝑧4 = −0.409 𝑚 (see Figure 5.1). This particular posture was selected to test
the leader-follower controller on the MHT since it allows the robot to turn at a reasonable
rate while maintaining a good stability margin [14].
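The steering posture amounts to a fixed set of posture setpoints, which can be captured in a small configuration structure. The values are those given above; the dictionary layout itself is illustrative, not the MHT software's format:

```python
# Steering posture setpoints from the text (metres and degrees).
# The structure below is an illustrative sketch, not the MHT's actual format.
STEERING_POSTURE = {
    "chassis_roll_deg":  0.0,    # phi
    "chassis_pitch_deg": 0.0,    # psi
    "chassis_height_m":  0.409,  # z
    "wheel_x_m": {1: 0.225, 2: 0.225, 3: -0.225, 4: -0.225},
    "wheel_z_m": {1: -0.409, 2: -0.409, 3: -0.409, 4: -0.409},
}
```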
In the following sections, we first detail the turning manoeuvres implemented on the
MHT in simulation in order to assess the steering capabilities of the robot as these are critical
to the leader-follower manoeuvering (see Section 5.1). Then, we describe the leader-follower
scenarios used to test the leader’s trajectory constraints derived in Chapter 3 that ensure the
MHT’s ability to achieve leader-follower formation control (see Section 5.2). Finally, we
evaluate the performance of the leader-follower controller by presenting the leader-follower
manoeuvres implemented in simulation and on the physical MHT platform (see Sections 5.3
and 5.4).
Figure 5.1: MHT steering posture
5.1 Turning Manoeuvres
To steer the MHT, the controller applies non-equal voltages to the left and right
wheels of the robot. In previous work [20], the maximum turning rates of the MHT were
measured in LMS Virtual.Lab Motion at different wheel separations by commanding
maximum opposite voltages (±10V) to the left and right wheels of the Toolkit. At the wheel
separation corresponding to the steering posture, the maximum turning rate obtained was
2.5º/s [20]. However, in the course of the present research, it was observed that this value is
only valid for the MHT when the posture controller is used in the non-stepping mode: the
robot loses stability in this mode beyond the aforementioned maximum rate. This instability
arises because the posture controller tries to compensate for the roll oscillations of the chassis in
order to maintain the desired zero roll angle of the vehicle. Unlike the previous work, we
found that the turning, and thus the leader-follower manoeuvering, of the MHT yields
superior performance when executed in the stepping mode. In this mode, the posture
controller only maintains the desired 𝑥- and 𝑧-positions of the wheels with respect to the
chassis and effectively ignores the deviations in the pose of the chassis. Thus, the posture
controller does not react to the roll oscillations of the chassis during the manoeuvres. A
beneficial consequence of this observation is that the use of the stepping mode controller
allows the MHT to turn faster in simulation than with the non-stepping mode controller. To
determine the maximum turning rates for the stepping mode controller, we tested a range of
turning manoeuvres on the robot in simulation, as presented in this section.
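Skid steering by unequal wheel voltages can be sketched as a forward command plus a steering differential, saturated at the ±10V actuator limit. The function and argument names are illustrative, not the MHT software's:

```python
def wheel_voltages(forward_voltage, differential_voltage, v_max=10.0):
    """Split a forward command and a steering differential into left and
    right wheel voltages, saturated at the +/-10 V actuator limit."""
    left = forward_voltage + differential_voltage / 2.0
    right = forward_voltage - differential_voltage / 2.0
    clamp = lambda v: max(-v_max, min(v_max, v))
    return clamp(left), clamp(right)
```

With a 4V differential about a zero forward command, this reproduces the first row of voltage commands in Table 5.1 (left 2V, right −2V), and a 20V differential saturates to the maximum opposite voltages (±10V).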
First, we investigated the turning rates of the MHT in its steering posture at different
forward velocities. To do so, we applied unequal voltage inputs with a constant 4V difference
to the left and right wheels of the platform. The results of these manoeuvres are summarized
in Table 5.1, which includes the voltage inputs to the wheels, the average velocity of the
MHT and the turning rate of the Toolkit during each simulation. From Table 5.1, we can
observe that the turning rate of the MHT increases by 73% (from 2.6º/s to 4.5º/s) as the
platform reaches its maximum velocity with 4V differential voltage commands between the
left and the right wheels. This suggests that the skidding resistance of the wheels decreases as
the platform moves at a higher velocity, which allows the MHT to turn faster in simulation.
Table 5.1: Turning manoeuvres results with 4V differential voltages
Right Wheel Voltage (V) | Left Wheel Voltage (V) | Velocity (m/s) | Turning Rate (º/s)
−2                      | 2                      | 0.05           | −2.6
0                       | 4                      | 0.24           | −2.9
2                       | 6                      | 0.48           | −3.6
4                       | 8                      | 0.73           | −4.1
6                       | 10                     | 0.98           | −4.5
To further investigate the dependence of the turning rate of the MHT on its
translational velocity, we implemented an additional set of turning manoeuvres where we
commanded unequal voltages to the left and right wheels with a constant 10V difference. The
results of the second set of turning manoeuvres are summarized in Table 5.2, where we can
observe that the turning rate of the MHT increases by 22% (from 8.3º/s to 10.1º/s) as the
robot reaches its maximum velocity. This is significantly lower than the increase of the
turning rate in Table 5.1. However, we can also notice that the maximum velocity in the
second set of turning manoeuvres is lower than the maximum velocity of the first set of tests.
Therefore, we can deduce that in the simulation model of the Toolkit, the turning rate of the
robot for the same voltage differential increases with the forward velocity. Furthermore, the
turning rate of the MHT also increases as the voltage difference between the left and the right
wheels becomes bigger, as expected.
Table 5.2: Turning manoeuvres results with 10V differential voltages
Right Wheel Voltage (V) | Left Wheel Voltage (V) | Velocity (m/s) | Turning Rate (º/s)
−5                      | 5                      | 0.05           | −8.3
−2.5                    | 7.5                    | 0.29           | −9.0
0                       | 10                     | 0.60           | −10.1
Finally, we implemented a turning manoeuvre in LMS Virtual.Lab Motion where we
commanded maximum opposite voltages (±10V) to the left and right wheels of the MHT
while it is in the steering posture. In this simulation, we determined that the maximum
turning rate of the MHT in the steering posture with the posture controller in the stepping
mode is approximately 22.0º/s. This value is significantly higher than the maximum rate of
the Toolkit found by Türker et al. [20]. This is due to the ability of the MHT to maintain its
stability during its turning manoeuvres in the stepping mode simulations. However, in the
manoeuvre at maximum turning rate, we observed that, contrary to the two previous sets of
turning manoeuvres, the MHT was unable to maintain a constant turning rate for longer than
7 seconds at a time (see Figure 5.2(a)). This is caused by the oscillations of the chassis roll
angle as the MHT rotates at maximum turning rate, which results in the irregular motion of
the platform (see Figure 5.2(b)).
Figure 5.2: Turning manoeuvre with 20V differential voltages – results: (a) MHT turning rate vs. time; (b) chassis roll vs. time
5.2 Simulation of Leader’s Trajectory Constraints
In Chapter 3, we developed a set of trajectory constraints for the leader to ensure that
the MHT can achieve the desired leader-follower formation. To test the trajectory constraints,
several leader-follower scenarios were implemented on the robot in LMS Virtual.Lab
Motion. In these simulations, the MHT was instructed to maintain the steering posture during
the leader-follower manoeuvre. The steering posture was attained by executing a posture
reconfiguration manoeuvre with the Toolkit stationary during the first 5 seconds of each
simulation. As mentioned earlier, in each leader-follower scenario, the posture controller was
applied in the stepping mode. In this mode, the maximum velocity (𝑉𝐹 ) and turning rate (Ω𝐹 )
of the MHT in the steering posture are 1.4 m/s and 22º/s respectively. Thus, the leader’s
trajectory constraints in scenarios presented in this section are:
−1/(𝑟𝑑 cos(𝛾𝑑)) < 𝑘𝐿 < 1/(𝑟𝑑 cos(𝛾𝑑))   (5.1)
−22 °/𝑠 < 𝜔𝐿 < 22 °/𝑠   (5.2)
𝑣𝐿 < 1.4 cos(𝛾𝑑) (𝑚/𝑠)   (5.3)
where 𝑟𝑑 is the desired range, 𝛾𝑑 is the desired bearing, 𝑘𝐿 defines the path curvature of the
leader, 𝜔𝐿 is the turning rate of the leader and 𝑣𝐿 is the velocity of the leader. To demonstrate
the applications of the leader’s trajectory constraints, we implemented three leader-follower
scenarios in simulation, defined in Table 5.3.
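Under the stated limits (𝑉𝐹 = 1.4 m/s, Ω𝐹 = 22º/s), constraints (5.1)–(5.3) can be checked numerically. A sketch, with variable names mirroring the thesis symbols:

```python
import math

def leader_trajectory_ok(k_L, omega_L_deg, v_L, r_d, gamma_d_deg,
                         v_F=1.4, omega_F_deg=22.0):
    """Check a leader trajectory against constraints (5.1)-(5.3):
    path curvature, turning rate and velocity limits."""
    c = math.cos(math.radians(gamma_d_deg))
    curvature_ok = abs(k_L) < 1.0 / (r_d * c)       # (5.1)
    turning_ok   = abs(omega_L_deg) < omega_F_deg   # (5.2)
    velocity_ok  = v_L < v_F * c                    # (5.3)
    return curvature_ok and turning_ok and velocity_ok
```

For example, Scenario 3 below (40 m radius, 1.1 m/s, 1.4º/s, 𝑟𝑑 = 1 m, 𝛾𝑑 = 0°) satisfies all three constraints, while Scenario 1 violates the turning-rate limit and Scenario 2 violates the velocity limit.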
Table 5.3: Leader-follower simulation scenarios for leader's trajectory constraints
#1 – Leader's Trajectory Constraint Scenario 1
𝑟𝑑 = 1 𝑚 and γ𝑑 = 0°. Leader initially 1 m in front of MHT. At 5 seconds, leader moves along circular trajectory of 2.5 m radius at constant velocity and turning rate of 1 m/s and 23º/s. Constraint (3.31) is not respected.
Figure 5.3: Leader-follower scenario 1 – setup

#2 – Leader's Trajectory Constraint Scenario 2
𝑟𝑑 = 1 𝑚 and γ𝑑 = 60°. Leader initially at desired range and bearing from MHT. At 5 seconds, leader moves along linear trajectory parallel to initial orientation of Toolkit at constant velocity of 0.9 m/s. Constraint (3.34) is not respected.
Figure 5.4: Leader-follower scenario 2 – setup

#3 – Leader's Trajectory Constraint Scenario 3
𝑟𝑑 = 1 𝑚 and γ𝑑 = 0°. Leader initially 1 m in front of MHT. At 5 seconds, leader moves along circular trajectory of 40 m radius at constant velocity and turning rate of 1.1 m/s and 1.4º/s. All the leader's trajectory constraints are respected.
Figure 5.5: Leader-follower scenario 3 – setup
For each leader-follower scenario in Table 5.3, we present the following plots:
a) The leader and the MHT’s trajectories
b) Range of the leader with respect to the MHT vs. time
c) Bearing of the leader with respect to the MHT vs. time
The results of the simulations are shown in Figures 5.6 to 5.8 for the Scenarios 1 to 3,
respectively.
In Scenario 1, the leader's turning rate was greater than the maximum turning rate of
the MHT in the steering posture. This causes the leader to violate the constraint (3.31).
Therefore, the MHT should be unable to reach the desired range and bearing with respect to
the leader. In Figure 5.6, we can observe that the range and bearing errors increase in the first
12 seconds of the leader-follower behaviour, with the range reaching 5.6 m and the bearing
reaching −72º at 17 seconds. This initial increase of the positioning errors of the MHT is
caused by the inability of the robot to follow the leader as it moves along the circular
trajectory, as expected for this scenario. At 18 seconds, the bearing error stabilizes at −72º
and the range error starts to decrease. At this time, the leader begins to move in the opposite
direction of the MHT, producing the aforementioned range and bearing response. This also
results in the difference between the orientation of the leader (𝜃𝐿 ) and the follower (𝜃𝐹 ) to be
greater than 90º, violating the constraint (3.26), which defines the condition for which the
leader’s trajectory constraints are valid. Based on the above responses, we can conclude that
the MHT is unable to achieve the leader-follower formation in this scenario, as expected.
Figure 5.6: Leader-follower scenario 1 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
In Scenario 2, the leader moves at a constant velocity higher than the limit defined by
the constraint (3.34). Thus, we do not expect the MHT to be able to reach the desired leader-follower formation. In Figure 5.7, we can observe that the range and bearing errors increase
rapidly in the first 4 seconds of the leader-follower behaviour, registering errors of 0.5 m and
19º respectively. However, the positioning errors of the Toolkit then steadily decrease until
the end of the simulation, and the range and bearing of the leader both tend towards their
desired values, albeit after 300 seconds. These responses suggest that the MHT is able to
achieve the desired leader-follower formation even if the constraint (3.34) is not respected.
This can be explained by the fact that the constraint (3.34) is more conservative than the
actual velocity constraint derived in Chapter 3, given by Equation (3.33) which is in fact
respected in this scenario. Therefore, the MHT is able to achieve leader-follower formation
control.
Figure 5.7: Leader-follower scenario 2 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
In Scenario 3, the leader respects all its trajectory constraints; therefore, we expect the
MHT to achieve the desired range and bearing with respect to the leader. In Figure 5.8(a), we
can observe that the trajectory of the Toolkit almost completely overlaps with the trajectory
of the leader. The range error increases rapidly at the beginning of the leader-follower
behaviour, but then decreases for the remaining time of the simulation, tending towards the
desired range (see Figure 5.8(b)). In Figure 5.8(c), we observe that the bearing error remains
relatively small during the manoeuvre and the MHT reaches the desired bearing at 80
seconds. These results indicate that the MHT is able to reach the desired leader-follower
formation when the leader’s trajectory constraints are all respected.
Figure 5.8: Leader-follower scenario 3 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
5.3 Leader-Follower Manoeuvres on a Flat Terrain
Having tested the leader-follower controller and leader’s trajectory constraints in
simulation, we now present a comprehensive set of results to further demonstrate the
performance of the controller in simulation and on the real MHT vehicle. In the scenarios
presented in this section, both the leader and the Toolkit travel on a flat terrain. The leader
moves along different types of trajectories and the robot is required to achieve the desired
leader-follower formation. In each test, the MHT was instructed to maintain the steering
posture during all the leader-follower manoeuvres. To reach the steering posture, the robot
remained stationary at the beginning of each scenario to complete its posture reconfiguration
manoeuvre.
5.3.1 Simulation Results
We begin with simulated leader-follower scenarios in LMS Virtual.Lab Motion. As in
the previous simulations of Section 5.2, the MHT executes its posture reconfiguration
manoeuvre in the first 5 seconds of each simulation to achieve the steering posture. We noted
that during the posture reconfiguration manoeuvres, the Toolkit ends at a yaw angle different
from its initial orientation, which affects the bearing of the leader prior to the start of the
leader-follower behaviour. To correct this error, we modified the leader’s trajectory to ensure
that the MHT starts the leader-follower manoeuvre at the desired initial bearing. Contrary to
the leader’s trajectory constraints scenarios, in the first simulation presented in this section,
the posture controller is used in the non-stepping mode since the MHT was able to maintain
its stability during this particular manoeuvre.
We developed five leader-follower scenarios to test the leader-follower controller in
LMS Virtual.Lab Motion. For all the simulations, the MHT is required to reach and maintain
a desired range and bearing of 1 m and 0º with respect to the leader. The scenarios presented
in this section are detailed in Table 5.4.
Table 5.4: Leader-follower simulation scenarios on a flat terrain
#4 – Static Leader Simulation 1
Leader is 3.5 m in front of MHT and does not move. After 5 seconds for posture reconfiguration, MHT moves towards leader. Posture controller in non-stepping mode. All leader's trajectory constraints respected.
Figure 5.9: Leader-follower scenario 4 – setup

#5 – Static Leader Simulation 2
Leader initially at range and bearing of 3.5 m and −32º from MHT and does not move. After 5 seconds for posture reconfiguration, MHT moves towards leader. Posture controller in stepping mode. All leader's trajectory constraints respected.
Figure 5.10: Leader-follower scenario 5 – setup

#6 – Quarter-Circular Trajectory Simulation
Leader initially 1 m in front of MHT. At 5 seconds, leader moves along quarter-circular trajectory of 3 m radius at constant velocity and turning rate of 0.2618 m/s and 5º/s. At 23 seconds, leader stops and remains stationary for rest of simulation. Posture controller in stepping mode. All leader's trajectory constraints respected.
Figure 5.11: Leader-follower scenario 6 – setup

#7 – Corner Trajectory Simulation
Leader initially 1 m in front of MHT. At 5 seconds, leader moves forwards at constant velocity of 0.4 m/s. At 10 seconds, leader abruptly turns 90º to the right and continues to move forwards at 0.33 m/s. At 22 seconds, leader stops and remains stationary for rest of the simulation. Posture controller in stepping mode. Constraints (3.29) and (3.31) are not respected.
Figure 5.12: Leader-follower scenario 7 – setup

#8 – Perpendicular Trajectory Simulation
Leader initially 1 m in front of MHT. At 5 seconds, leader moves along linear trajectory perpendicular to initial orientation of MHT at constant velocity of 0.3 m/s. At 15 seconds, leader stops and remains stationary for rest of the simulation. Posture controller in stepping mode. Constraint (3.26) is not respected.
Figure 5.13: Leader-follower scenario 8 – setup
The results of Scenarios 4 to 8 are shown in Figures 5.14 to 5.18, respectively, each figure
showing the trajectories of the leader and of the MHT, and the range and bearing responses.
Figure 5.14: Leader-follower scenario 4 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
Figure 5.15: Leader-follower scenario 5 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
Figure 5.16: Leader-follower scenario 6 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
Figure 5.17: Leader-follower scenario 7 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
Figure 5.18: Leader-follower scenario 8 – results: (a) leader and MHT trajectories; (b) range vs. time; (c) bearing vs. time
Starting with Scenario 4, as already noted, the bearing of the leader drifts from its
initial value by approximately 8º prior to the beginning of the leader-follower behaviour (see
Figure 5.14(c)). In Scenarios 5 to 8, we observe that the drift of the bearing is approximately
3º (see Figures 5.15(c), 5.16(c), 5.17(c) and 5.18(c)). The reduced drift in the last
manoeuvres is due to the execution of the posture controller in the stepping mode instead of
the non-stepping mode.
In Scenario 4, once the leader-follower behaviour begins, the MHT rapidly reaches
the desired range 4 seconds into the manoeuvre (see Figure 5.14(b)). However, after the
robot reaches the desired range, the bearing error increases for 31 seconds, peaking at 23º at
35 seconds into the leader-follower manoeuvre (see Figure 5.14(c)). This behaviour is caused
by the response of the posture controller of the Toolkit to maintain the steering posture,
which causes the robot to turn. As the integral term of the turning rate PID control law
increases, the bearing error stabilizes. Then, the bearing error decreases steadily and the
MHT reaches the desired bearing at 95 seconds into the manoeuvre. This long response time
is also due to the low maximum angular velocity of the Toolkit when the posture controller is
in the non-stepping mode (2.5º/s).
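The turning-rate PID law referred to above is not reproduced in this chapter; a generic discrete form shows where the integral term that eventually removes the steady bearing error enters. The class structure and gains below are illustrative assumptions, not the MHT's controller:

```python
class TurningRatePID:
    """Generic discrete PID of the kind described for the turning-rate
    loop; gains and structure are illustrative, not the MHT's."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, bearing_error):
        # The integral term accumulates a persistent bearing error; its
        # slow growth is what eventually drives the bearing to the
        # desired value in responses like Scenario 4.
        self.integral += bearing_error * self.dt
        derivative = (bearing_error - self.prev_error) / self.dt
        self.prev_error = bearing_error
        return (self.kp * bearing_error
                + self.ki * self.integral
                + self.kd * derivative)
```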
In Scenario 5, the range error decreases rapidly at the start of the leader-follower
behaviour (see Figure 5.15(b)). At 3 seconds into the manoeuvre, the range begins to
stabilize and reaches its desired value at 20 seconds into the leader-follower manoeuvre. The
bearing error initially increases as the robot moves forwards, peaking at 51º at 3 seconds into
the manoeuvre (see Figure 5.15(c)). Afterwards, the bearing error starts to decrease, reaching
its desired value at 25 seconds into the leader-follower manoeuvre.
In Scenario 6, we notice that the range and bearing errors initially increase as the
leader moves along the quarter-circular trajectory (see Figures 5.16(b) and (c)). The range
and bearing errors peak when the leader stops at 18 seconds into the manoeuvre, reaching
0.46 m and 37º respectively. Then, the range and bearing errors decrease steadily as the MHT
achieves the desired leader-follower formation 37 seconds after the beginning of the leader-follower behaviour.
In the results of Scenario 7, we can observe two slope discontinuities at 5 seconds and
17 seconds into the leader-follower manoeuvre (see Figures 5.17(b) and (c)). The first
discontinuity occurs when the leader abruptly turns 90º. Afterwards, the range and bearing
errors increase, reaching 0.77 m and 44º at 13 seconds into the manoeuvre. This is due to the
leader moving perpendicularly to the initial orientation of the MHT. As the robot adjusts its
orientation towards the leader’s trajectory, the leader continues to move forwards, increasing
the positioning errors of the Toolkit. This behaviour is expected since the leader violates the
constraints (3.29) and (3.31) in this scenario. Therefore, we did not expect the MHT to be
able to maintain the leader-follower formation and indeed, the range and bearing errors of the
MHT clearly increase after the leader turns. As the robot manages to turn towards the leader
at 13 seconds into the manoeuvre, the range and bearing errors start to decrease. The second
slope discontinuity occurs when the leader stops at 17 seconds into the leader-follower
manoeuvre. Then, the range and bearing errors steadily decrease, reaching their desired
values at 34 seconds into the leader-follower manoeuvre.
In Scenario 8, the range and bearing errors increase rapidly at the beginning of the
leader-follower behaviour (see Figures 5.18(b) and (c)). This is due to the leader initially
moving perpendicularly to the starting orientation of the MHT, which violates the constraint
(3.26). As the Toolkit rotates towards the leader’s direction, the leader continues to move
forwards, increasing the positioning errors of the robot. Once the MHT manages to turn in
the direction of the leader at 8 seconds into the manoeuvre, the range and bearing errors
stabilize at 0.7 m and 43º respectively. At 10 seconds into the manoeuvre, we notice a slope
discontinuity in Figures 5.18(b) and (c). This discontinuity occurs when the leader stops. The
MHT then moves towards its desired position with respect to the leader, reaching the desired
range and bearing at 29 seconds into the manoeuvre. In terms of performance, we can remark
that the results of Scenario 8 are similar to Scenario 7.
5.3.2 Experimental Results
The ultimate assessment of the leader-follower controller’s performance is on the
physical platform of the MHT. In the experimental scenarios, the MHT measures the range
and bearing of the leader using the vision algorithm developed in Chapter 4. To avoid having
the leader-follower reacting to small oscillations in the range and bearing measurements from
the vision system, we added dead-zones in the positioning errors computation stage of the
controller. In all the experimental manoeuvres, the Toolkit is connected to the control station
through an umbilical cord to power the systems and to upload the leader-follower controller
on the robot (see Figure 5.19). As in the simulations, the MHT started each experimental
manoeuvre in the home posture and reconfigured to the steering posture over 7 seconds². In
the steering posture, the maximum velocity and turning rate of the physical MHT platform
are 0.44 m/s and 18º/s, respectively. These values were determined by Wong [14] by
applying the maximum voltage commands to the wheels (10V) to measure the maximum
forward velocity of the robot, and the maximum differential voltage commands (±10V) to
obtain the maximum angular velocity of the MHT in the steering posture. We can remark that the
maximum forward velocities of the MHT in experiment and simulation (0.44 m/s and 1.4 m/s
respectively) are significantly dissimilar, but that the difference between the maximum
turning rates is marginal (18º/s in the experiments and 22º/s in simulation). This indicates
that the skidding resistance of the wheels is more substantial in the LMS model than on the
physical MHT. As it will be seen in the results, this allows the Toolkit to reach high turning
rates faster in the experiments than in the simulation scenarios.
² We increased the posture reconfiguration time on the platform compared to the simulations to ensure a smooth reconfiguration manoeuvre of the platform.
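The dead-zones added to the positioning-error computation can take the following shape. This is one common choice, sketched here as an assumption; the actual widths and form used on the MHT are not given:

```python
def apply_dead_zone(error, half_width):
    """Suppress small oscillations in a measured range or bearing error:
    errors inside the dead-zone are treated as zero, and larger errors
    are shifted so the output stays continuous at the boundary.
    The half_width value is an illustrative parameter."""
    if abs(error) <= half_width:
        return 0.0
    return error - half_width if error > 0 else error + half_width
```

Applied to the range and bearing errors before the control laws, this prevents the controller from reacting to measurement noise from the vision system while leaving large errors essentially unchanged.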
Figure 5.19: Leader-follower manoeuvres experimental setup (labelled: connection to control station, Micro-Hydraulic Toolkit, leader)
We executed the same scenarios as in Section 5.3.1 and, to the extent possible,
attempted to match the conditions of the simulations. As in the simulations, the MHT is
instructed to achieve a desired range and bearing of 1 m and 0º with respect to the leader in
all scenarios. In all the experiments, the posture controller is in the non-stepping mode as the
MHT was able to maintain its stability during the turning manoeuvres. The scenarios
implemented on the physical platform of the MHT are detailed in Table 5.5.
Table 5.5: Leader-follower experimental scenarios on a flat surface

Scenario 9 – Static Leader Experiment 1: Leader positioned about 3.5 m in front of the MHT and remains stationary. After 7 seconds for posture reconfiguration, the MHT moves towards the leader. Scenario similar to simulation Scenario 4. (Setup: Figure 5.20.)

Scenario 10 – Static Leader Experiment 2: Leader positioned at an initial range and bearing of about 3.5 m and −32º from the MHT and remains stationary. After 7 seconds for posture reconfiguration, the MHT moves towards the leader. Scenario similar to simulation Scenario 5. (Setup: Figure 5.21.)

Scenario 11 – Quarter-Circular Trajectory Experiment: Leader initially about 1 m in front of the MHT. At 7 seconds, the leader moves along a quarter-circular trajectory of 3 m radius for 18 seconds. At 25 seconds, the leader stops and remains stationary until the end of the experiment. Scenario similar to simulation Scenario 6. (Setup: Figure 5.22.)

Scenario 12 – Corner Trajectory Experiment: Leader initially about 1 m in front of the MHT. At 7 seconds, the leader moves 2 m forwards for 5 seconds. At 12 seconds, the leader abruptly changes direction 90º to the right and continues forwards 4 m for 12 seconds. At 24 seconds, the leader stops and remains stationary until the end of the experiment. Scenario similar to simulation Scenario 7. (Setup: Figure 5.23.)

Scenario 13 – Perpendicular Trajectory Experiment: Leader initially about 1 m in front of the MHT. At 7 seconds, the leader moves along a linear trajectory perpendicular to the MHT's initial orientation for 3 m over 10 seconds. At 17 seconds, the leader stops and remains stationary until the end of the experiment. Scenario similar to simulation Scenario 8. (Setup: Figure 5.24.)

Figures 5.20 to 5.24: Leader-follower scenarios 9 to 13 – setup (each figure shows the leader's position or trajectory and the MHT's initial position in the X-Y plane).
For each leader-follower scenario implemented on the physical MHT platform, we present
the following plots:
a) Range of the leader with respect to the MHT vs. time
b) Bearing of the leader with respect to the MHT vs. time
The results of Scenarios 9 to 13 are shown in Figures 5.25 to 5.29, respectively.
Figure 5.25: Leader-follower scenario 9 – experimental results ((a) range vs. time; (b) bearing vs. time)
Figure 5.26: Leader-follower scenario 10 – experimental results ((a) range vs. time; (b) bearing vs. time)
Figure 5.27: Leader-follower scenario 11 – experimental results ((a) range vs. time; (b) bearing vs. time)
Figure 5.28: Leader-follower scenario 12 – experimental results ((a) range vs. time; (b) bearing vs. time)
Figure 5.29: Leader-follower scenario 13 – experimental results ((a) range vs. time; (b) bearing vs. time)
In Scenario 9, we observe that the MHT reaches the desired leader-follower
formation faster than in simulation Scenario 4 (see Figures 5.14 and 5.25). This result
was expected, since the physical platform is capable of a higher turning rate than the
MHT exhibits in LMS Virtual.Lab Motion in the non-stepping mode.
In Scenarios 10 to 13, the MHT also achieves the desired leader-follower formation
faster than in the simulated scenarios (compare, for example, Figures 5.16 and 5.27).
This is attributed to two factors: the higher skidding resistance of the wheels in
LMS Virtual.Lab Motion prevented the robot from reaching high turning rates in simulation,
and the gains of the leader-follower controller were optimized for the physical MHT.
In Scenario 11, the range and bearing errors of the robot increase at the beginning of
the leader-follower manoeuvre (see Figure 5.27). At 5 seconds into the manoeuvre, the range
error stabilizes at 0.4 m as the MHT moves towards the leader. At 10 seconds into the
leader-follower behaviour, the bearing error also stabilizes at approximately 20º. Once the
leader stops, 18 seconds into the experiment, the range and bearing errors decrease, reaching
the desired range and bearing at 20 and 24 seconds into the manoeuvre, respectively.
In Scenarios 12 and 13, the bearing error starts to increase at 14
seconds in the corner trajectory experiment, and at 7 seconds in the perpendicular trajectory
experiment (see Figures 5.28(b) and 5.29(b)). The range error also rises at 9 seconds into the
manoeuvre for Scenario 12, and at the beginning of the leader-follower behaviour in Scenario
13 (see Figures 5.28(a) and 5.29(a)). These results are similar to those of Scenarios 7
and 8 (see Figures 5.17 and 5.18). They are caused by the inability of the MHT to follow the
leader as it moves perpendicularly to the orientation of the robot. Once the Toolkit
manages to rotate in the leader's direction, the robot is able to move towards the leader,
stabilizing its positioning errors at about 14 seconds into the leader-follower manoeuvre in
Scenario 12 and 15 seconds in Scenario 13. After the leader stops, the MHT achieves the
desired leader-follower formation at 23 seconds into the manoeuvre for Scenario 12 and
16 seconds for Scenario 13.
Finally, in all the experimental scenarios, we notice that the MHT does not reach
the exact desired range and bearing with respect to the leader. This is due to the
dead-zones implemented in the leader-follower controller to compensate for inaccuracies
in the range and bearing estimates of the vision algorithm.
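The dead-zone idea described above can be sketched in a few lines. This is an illustrative fragment only: the thresholds below are assumed values chosen for the example, not the actual tolerances used on the MHT.

```python
# Illustrative sketch of a dead-zone on the formation errors: errors
# smaller than the estimated noise of the vision measurements are
# ignored, so the robot does not jitter around the desired formation.
# The thresholds are assumptions for illustration only.

RANGE_DEADZONE = 0.1      # m, assumed
BEARING_DEADZONE = 2.0    # deg, assumed

def apply_deadzone(error, threshold):
    """Zero out errors inside the dead-zone; pass larger errors through."""
    return 0.0 if abs(error) < threshold else error

# A 5 cm range error (below the threshold) is ignored, while a 5 degree
# bearing error (above the threshold) is acted upon.
range_err = apply_deadzone(1.05 - 1.0, RANGE_DEADZONE)
bearing_err = apply_deadzone(-5.0 - 0.0, BEARING_DEADZONE)
```

The trade-off is the one noted in the text: the dead-zone suppresses sensor-noise-induced chatter at the cost of a small steady-state offset from the exact desired range and bearing.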
5.4 Leader-Follower Manoeuvre on a Ramp
One of the main motivations behind the development of the controller for the MHT
was to allow the robot to achieve leader-follower behaviour and posture control
simultaneously in order to navigate on uneven terrains. To test the ability of the MHT to
reconfigure its posture while following a leader, we developed a leader-follower manoeuvre
where the robot was required to negotiate a 10º ramp to reach the desired leader-follower
formation. The ramp employed is a 1 m wide platform of 1.42 m length and 0.25 m height
which ends with a flat surface (see Figure 5.32). In the leader-follower scenario presented,
the Toolkit was instructed to reach a desired range and bearing of 1 m and 0º with respect to
the leader. To increase the stability of the MHT, we commanded the robot to maintain the
homing posture during the experiment, defined by a chassis roll angle of 𝜙 = 0°, a chassis
pitch angle of 𝜓 = 0°, a chassis height of 𝑧 = 0.409 𝑚 and wheel 𝑥-positions of 𝑥1 = 𝑥2 =
0.465 𝑚 and 𝑥3 = 𝑥4 = −0.465 𝑚. We also forced the desired turning rate of the
leader-follower controller to zero to prevent the Toolkit from turning while negotiating the
ramp, given its limited width. Thus, we do not expect the vehicle to reach the desired
bearing. In this scenario, the leader is positioned on top of the ramp at approximately 3.5 m
from the MHT and remains stationary. The MHT is required to move up the ramp while
maintaining the homing posture and to reach the desired range with respect to the leader.
Figure 5.30 displays the performance of the leader-follower controller. We remark
that the Toolkit was able to reach and maintain its desired range but, as expected, was
unable to correct its bearing error. Figure 5.31 displays the posture parameters of the MHT
during the experiment, from which we observe that the posture controller maintained the
homing posture properly, as all the posture parameters remained close to their desired
values. Figure 5.32 shows snapshots of the MHT as it negotiates the 10º ramp. Initially,
the MHT is in the homing posture (see Figure 5.32(a)). As the robot moves up the ramp,
the posture controller readjusts the heights of the legs with respect to the chassis to
maintain the desired chassis pose (see Figure 5.32(b)). Once the MHT reaches the top of
the ramp, it returns to the homing posture (see Figure 5.32(c)).
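The leg-height readjustment described above follows simple geometry: to hold the chassis level on a slope, each leg's vertical extension changes with the wheel's x-position along the chassis. The sketch below illustrates this geometry only; the chassis height and wheel x-positions are the homing-posture values quoted above, but the function is an assumption for illustration, since the actual posture controller solves the full inverse kinematics.

```python
import math

# Illustrative geometry for leg-height adjustment on a slope. Uphill
# ground is higher under the front wheels, so the front legs retract and
# the rear legs extend by an amount proportional to tan(slope).
# The function itself is an illustrative assumption, not the MHT's
# posture controller.

Z_HOME = 0.409                                # chassis height (m), homing posture
WHEEL_X = {"front": 0.465, "rear": -0.465}    # wheel x-positions (m)

def leg_extension(x_wheel, slope_deg):
    """Vertical leg extension at a wheel located at x_wheel (m) so that
    the chassis stays level on ground inclined at slope_deg
    (positive = uphill ahead)."""
    return Z_HOME - x_wheel * math.tan(math.radians(slope_deg))

front = leg_extension(WHEEL_X["front"], 10.0)  # shorter than Z_HOME
rear = leg_extension(WHEEL_X["rear"], 10.0)    # longer than Z_HOME
```

On the 10º ramp, the front and rear extensions differ by about 16 cm under this assumption, while their average remains the nominal chassis height.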
Figure 5.30: Leader-follower manoeuvre on a ramp – leader tracking results ((a) range vs. time; (b) bearing vs. time)
Figure 5.31: Leader-follower manoeuvre on a ramp – posture tracking results ((a) chassis pitch and roll angles vs. time; (b) chassis height vs. time; (c) front wheel x-positions vs. time; (d) rear wheel x-positions vs. time)
Figure 5.32: Snapshots of the leader-follower manoeuvre on a ramp, showing the 10° ramp, flat platform, supporting gantry and leader: (a) 𝑡 = 0 𝑠; (b) 𝑡 = 7 𝑠; (c) 𝑡 = 15 𝑠
5.5 Summary
In this chapter, we tested the performance of the leader-follower controller on the
MHT. First, we detailed the results of the turning manoeuvres implemented on the robot in
simulation in order to determine its maximum angular velocity in the steering posture when
the posture controller is executed in the stepping mode. We observed that the robot can turn
at higher turning rates as the velocity of the MHT increases. Then, we tested the leader’s
trajectory constraints presented in Chapter 3 and demonstrated the correspondence between
the satisfaction of these constraints and the MHT’s capability to reach and maintain the
desired leader-follower formation. Next, we evaluated the performance of the leader-follower
controller by implementing a wide range of leader-follower scenarios in LMS Virtual.Lab
Motion and on the physical platform of the MHT. The results of the scenarios showed that
the controller developed in Chapter 3 is able to implement leader-follower behaviours on the
Toolkit. Finally, we presented a leader-follower scenario on a ramp where the MHT achieves
leader-follower behaviour and posture reconfiguration simultaneously.
Chapter 6
Conclusions
In this thesis, we presented a leader-follower controller for the Micro-Hydraulic
Toolkit. The objective of the controller was to manoeuvre the MHT towards a desired range
and bearing with respect to a designated leader. Since the inverse kinematics controller
developed in previous work was unable to execute turning manoeuvres on the Toolkit, we
designed a separate controller capable of steering the robot to achieve leader-follower
formation control. The leader-follower controller was implemented in parallel with the
posture controller to allow the wheel-legged robot to achieve leader-follower behaviour and
posture reconfiguration manoeuvres simultaneously.
6.1 Vision Algorithm
To implement the leader-follower behaviour on the MHT, a vision system consisting
of a monocular camera on a pan-tilt unit was installed on the robot. In Chapter 4, we detailed
the vision algorithm designed to track the leader’s position. The vision algorithm uses the
CamShift algorithm to detect the leader in the field of view of the camera, and a combination
of colour-based tracking and contour detection algorithms to estimate the range and bearing of
the leader with respect to the MHT. The results of the tests of the vision system showed that
the vision algorithm performs adequately in a laboratory environment at ranges lower than
2.5 m.
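The geometry behind a monocular range/bearing estimate such as the one summarized above can be sketched with a pinhole camera model: range follows from the apparent size of a target of known physical size, and bearing from its horizontal pixel offset. The focal length, target height and image values below are assumptions chosen for illustration, not the calibration of the MHT's camera or the thesis's exact algorithm.

```python
import math

# Pinhole-model sketch of monocular range and bearing estimation.
# All constants are illustrative assumptions, not the MHT's calibration.

FOCAL_PX = 800.0        # focal length in pixels (assumed)
TARGET_HEIGHT = 0.30    # known physical height of the tracked target (m, assumed)
IMAGE_CX = 320.0        # principal point x-coordinate (pixels, assumed)

def range_bearing(bbox_height_px, bbox_center_x_px):
    """Estimate range (m) and bearing (deg) of the target from its
    bounding-box height and horizontal position in the image."""
    rng = FOCAL_PX * TARGET_HEIGHT / bbox_height_px          # similar triangles
    brg = math.degrees(math.atan2(bbox_center_x_px - IMAGE_CX, FOCAL_PX))
    return rng, brg

# A 120-pixel-tall target centred in the image: 2 m away, dead ahead.
r, b = range_bearing(120.0, 320.0)
```

The inverse relation between range and apparent size also explains why such estimates degrade at longer ranges, consistent with the 2.5 m working limit reported above: beyond it, a one-pixel error in the measured height corresponds to an increasingly large range error.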
6.2 Leader-Follower Controller
In Chapter 3, the leader-follower controller for the MHT was developed. The
controller uses the leader-follower framework presented by Choi and Choi [12], which
defines the desired position of the follower based on the desired range and bearing of the
leader in follower’s reference frame. The controller uses the actual range and bearing of the
leader with respect to the Toolkit to calculate the desired wheel angular velocities to steer the
robot towards its desired position. We also established trajectory constraints for the leader to
ensure that the MHT is able to reach the desired leader-follower formation.
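The control structure summarized above can be illustrated with a minimal proportional-only sketch: range and bearing errors drive a forward speed and a turning rate, which are then mapped to left/right wheel angular velocities for a skid-steered vehicle. The gains, wheel radius and track width below are assumptions for illustration; the controller actually implemented follows the PID framework of Choi and Choi [12] with gains tuned for the MHT.

```python
import math

# Proportional-only sketch of a leader-follower steering law for a
# skid-steered vehicle. Gains and geometry are illustrative assumptions,
# not the MHT's tuned controller.

K_RANGE = 0.8         # forward-speed gain (1/s), assumed
K_BEARING = 1.2       # turning-rate gain (1/s), assumed
WHEEL_RADIUS = 0.15   # m, assumed
TRACK_WIDTH = 0.9     # lateral wheel separation (m), assumed

def wheel_rates(range_act, bearing_act, range_des=1.0, bearing_des=0.0):
    """Return (left, right) wheel angular velocities (rad/s) that steer
    the follower towards the desired range and bearing of the leader."""
    v = K_RANGE * (range_act - range_des)                     # forward speed (m/s)
    w = K_BEARING * math.radians(bearing_act - bearing_des)   # turning rate (rad/s)
    left = (v - w * TRACK_WIDTH / 2.0) / WHEEL_RADIUS
    right = (v + w * TRACK_WIDTH / 2.0) / WHEEL_RADIUS
    return left, right

# Leader 3.5 m ahead, dead centre: both wheels drive forward equally.
straight = wheel_rates(3.5, 0.0)
# Leader at the desired range but offset in bearing: a pure turn.
turning = wheel_rates(1.0, 10.0)
```

Note how this structure makes the trajectory constraints plausible: when the leader moves perpendicularly to the follower's heading, the bearing error dominates and the commanded motion is almost a pure turn, limited by the platform's maximum turning rate.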
To evaluate the performance of the controller, we implemented various leader-follower scenarios on the MHT in simulation and on the physical robot. First, we
demonstrated that the MHT is able to achieve the desired range and bearing with respect to
the leader when all the leader’s trajectory constraints are respected. Then, we executed
leader-follower scenarios where the leader moves along a linear trajectory, a corner trajectory
and a circular trajectory on a flat surface. In all the leader-follower manoeuvres on a flat
surface, the MHT maintained a constant steering posture. The results of the simulations and
of the experiments successfully demonstrated that the leader-follower controller is able to
manoeuvre the Toolkit to achieve leader-follower formation control. We also implemented a
leader-follower scenario where the MHT must negotiate a 10º ramp to achieve the desired
leader-follower formation while maintaining its desired posture. The results of the ramp
manoeuvre showed that the Toolkit is capable of achieving leader-follower behaviour and
posture control simultaneously.
6.3 Future Work
6.3.1 Model Validation
In Chapter 5, we observed discrepancies in the turning rates of the MHT
between its LMS model and the physical platform. These disagreements are caused by the
inaccuracies of the wheel ground contact model, which significantly limit the turning
capabilities of the Toolkit in simulation compared to the physical robot. Therefore,
improvement of wheel-ground contact modeling and validation of the Toolkit’s model should
be performed to allow the user to accurately predict the response of the robot using
simulations.
6.3.2 Turning Manoeuvres on Sloped Terrains
One of the purposes of the MHT is to navigate in an urban environment on uneven
terrain. Due to the small dimensions of the ramps and the possibility that the MHT becomes
unstable, we did not attempt to have the MHT execute any turning manoeuvres while
negotiating a ramp in the leader-follower scenario presented in Chapter 5. Before executing
turning manoeuvres with the Toolkit on a sloped terrain, it is necessary to revisit the dynamic
stability of the robot to ensure that it does not lose stability when manoeuvering on uneven
surfaces. Furthermore, it would be beneficial to generate an optimized steering posture for
the MHT to improve its stability during turning manoeuvres on a ramp.
6.3.3 Vision System Improvements
The current vision algorithm tracks the position of the leader with respect to the MHT
and achieves satisfactory results in a controlled environment, such as found in a research
laboratory. However, as discussed in Chapter 4, the colour-based tracking methods are not
robust to changes in the illumination of the leader. Since one of the ultimate goals of the
Toolkit is to navigate in an urban environment, the vision system of the robot should be
upgraded to obtain higher quality images of the leader. Then, a more advanced object
tracking algorithm, such as fiducial-based tracking or feature points-based tracking, can be
implemented to track the leader with more robustness to changes in the lighting conditions.
6.3.4 Leader-Follower Controller Improvement
In the development of the leader-follower controller presented in this thesis, we
assumed that only the range and bearing of the leader are available to the MHT to achieve the
desired leader-follower formation. As demonstrated in Section 3.3, the ideal control law for
the follower to maintain the desired formation employs the knowledge of the velocity of the
leader to achieve leader-follower formation control. While the controller implemented on the
Toolkit can manoeuvre the robot to achieve the leader-follower formation control, further
research can be done to improve the performance of the leader-follower controller by using
additional information on the state of the leader.
6.3.5 High-Level Control Development
The work presented in this thesis introduced the Toolkit to high-level control through
the implementation of a leader-follower controller and the expansion of the state machine
framework. Currently, the MHT relies on the user to select the desired mode of operation of
the platform. In future work, further development of the controller can be accomplished to
allow the MHT to autonomously select its mode of operation using exteroceptive sensors,
thus increasing the autonomy of the robot.
References
[1]
D. Quoc Khanh and S. Young-Soo, "Human-following robot using infrared camera,"
in IEEE International Conference on Control, Automation and Systems, 2011, pp.
1054-1058.
[2]
W. Burgard, M. Moors, D. Fox, R. Simmons, and S. Thrun, "Collaborative multi-robot exploration," in IEEE International Conference on Robotics and Automation,
2000, pp. 476-481.
[3]
N. Agmon, S. Kraus, and G. A. Kaminka, "Multi-robot perimeter patrol in adversarial
settings," in IEEE International Conference on Robotics and Automation, 2008, pp.
2339-2345.
[4]
T. Arai, E. Pagello, and L. E. Parker, "Editorial: advances in multi-robot systems,"
IEEE Transactions on Robotics and Automation, vol. 18, pp. 655-661, 2002.
[5]
T. Balch and R. C. Arkin, "Behavior-based formation control for multirobot teams,"
IEEE Transactions on Robotics and Automation, vol. 14, pp. 926-939, 1998.
[6]
L. Consolini, F. Morbidi, D. Prattichizzo, and M. Tosques, "Leader–follower
formation control of nonholonomic mobile robots with input constraints,"
Automatica, vol. 44, pp. 1343-1349, 2008.
[7]
M. A. Lewis and K.-H. Tan, "High precision formation control of mobile robots using
virtual structures," Autonomous Robots, vol. 4, pp. 387-403, 1997.
[8]
G. W. Gamage, G. K. Mann, and R. G. Gosine, "Leader follower based formation
control strategies for nonholonomic mobile robots: design, implementation and
experimental validation," in American Control Conference, 2010, pp. 224-229.
[9]
R. Hogg, A. L. Rankin, M. C. McHenry, D. Helmick, C. Bergh, S. I. Roumeliotis, and
L. H. Matthies, "Sensors and algorithms for small robot leader/follower behavior," in
Aerospace/Defense Sensing, Simulation, and Controls, 2001, pp. 72-85.
[10]
J. Ghommam, H. Mehrjerdi, and M. Saad, "Leader-follower based formation control
of nonholonomic robots using the virtual vehicle approach," in IEEE International
Conference on Mechatronics, 2011, pp. 516-521.
[11]
H. Poonawala, A. C. Satici, N. Gans, and M. W. Spong, "Formation control of
wheeled robots with vision-based position measurement," in American Control
Conference, 2012, pp. 3173-3178.
[12]
I.-S. Choi and J.-S. Choi, "Leader-follower formation control using PID controller,"
in Intelligent Robotics and Applications. vol. 7507, C.-Y. Su, S. Rakheja, and H. Liu,
Eds., ed: Springer Berlin Heidelberg, 2012, pp. 625-634.
[13]
iRobot Corporation. (2013). iRobot Create Programmable Robot. Available:
http://www.irobot.com/us/learn/Educators/Create.aspx
[14]
C. Wong, "Posture reconfiguration and step climbing maneuvers for a wheel-legged
robot," Master Thesis, Department of Mechanical Engineering, McGill University,
Montreal, 2014.
[15]
M. Trentini, B. Beckman, B. Digney, I. Vincent, and B. Ricard, "Intelligent mobility
research for robotic locomotion in complex terrain," in Unmanned Systems
Technology VIII, 2006.
[16]
B. Beckman and M. Trentini, "Kinematic range of motion analysis for a high degree-of-freedom unmanned ground vehicle," DTIC Document, 2009.
[17]
B. Beckman, J. Pieper, D. Mackay, M. Trentini, and D. Erickson, "Two dimensional
dynamic stability for reconfigurable robots designed to traverse rough terrain," in
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp.
2447-2452.
[18]
B. Beckman, M. Trentini, and J. Pieper, "Control algorithms for stable range-of-motion behaviours of a multi degree-of-freedom robot," in International Conference
on Autonomous and Intelligent Systems, 2010, pp. 1-6.
[19]
T. Thomson, "Kinematic control and posture optimization of a redundantly actuated
quadruped robot," Master Thesis, Department of Mechanical Engineering, McGill
University, Montreal, 2011.
[20]
K. Türker, C. Wong, and I. Sharf, "Progress Reports on the Micro-Hydraulic
Toolkit," Unpublished reports for Defense Research and Development Canada,
Suffield, 2012.
[21]
B. Li and I. Sharf, "Report for DRDC Suffield on Completion of Task 2,"
Unpublished report for Defence Research and Development Canada, Suffield, 2014.
[22]
FLIR Systems Inc. (2014). Pan-Tilt Unit-D46-17. Available:
http://www.flir.com/mcs/view/?id=53707&collectionid=581&col=53711
[23]
C. Grand, F. BenAmar, F. Plumet, and P. Bidaud, "Decoupled control of posture and
trajectory of the hybrid wheel-legged robot hylos," in IEEE International Conference
on Robotics and Automation, 2004, pp. 5111-5116.
[24]
K. Kozłowski and D. Pazderski, "Modeling and control of a 4-wheel skid-steering
mobile robot," International Journal of Applied Mathematics and Computer Science,
vol. 14, pp. 477-496, 2004.
[25]
B. Bona and M. Indri, "Friction compensation in robotics: an overview," in 44th IEEE
Conference on Decision and Control and 2005 European Control Conference, 2005,
pp. 4360-4367.
[26]
G. Ellis, Control system design guide: a practical guide. Amsterdam; Boston: Elsevier
Academic Press, 2004.
[27]
J. O'Reilly, "Micro-Hydraulic Toolkit: Manual," Defence Research and Development
Canada, 2014.
[28]
V. Lepetit and P. Fua, "Monocular model-based 3d tracking of rigid objects: a
survey," Foundations and trends in computer graphics and vision, vol. 1, pp. 1-89,
2005.
[29]
M. Fiala, "ARTag, a fiducial marker system using digital techniques," in IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, 2005,
pp. 590-596.
[30]
M. Fiala, "ARTag fiducial marker system applied to vision based spacecraft
docking," in IROS Workshop on Robot Vision for Space Applications, 2005, pp. 35-40.
[31]
J. L. Giesbrecht, H. K. Goi, T. D. Barfoot, and B. A. Francis, "A vision-based robotic
follower vehicle," in Unmanned Systems Technology XI, 2009.
[32]
D. Koller, K. Daniilidis, and H.-H. Nagel, "Model-based object tracking in monocular
image sequences of road traffic scenes," International Journal of Computer Vision,
vol. 10, pp. 257-281, 1993.
[33]
G. R. Bradski, "Real time face and object tracking as a component of a perceptual
user interface," in 4th IEEE Workshop on Applications of Computer Vision, 1998, pp.
214-219.
[34]
J. Sattar, P. Giguere, G. Dudek, and C. Prahacs, "A visual servoing system for an
aquatic swimming robot," in IEEE/RSJ International Conference on Intelligent
Robots and Systems, 2005, pp. 1483-1488.
[35]
S. Benhimane and E. Malis, "Real-time image-based tracking of planes using
efficient second-order minimization," in IEEE/RSJ International Conference on
Intelligent Robots and Systems, 2004, pp. 943-948.
[36]
S. Benhimane and E. Malis, "Homography-based 2d visual tracking and servoing,"
The International Journal of Robotics Research, vol. 26, pp. 661-676, 2007.
[37]
C. Harris and M. Stephens, "A combined corner and edge detector," in Fourth Alvey
Vision Conference, 1988, pp. 147-151.
[38]
C. Tomasi and T. Kanade, Detection and tracking of point features: School of
Computer Science, Carnegie Mellon Univ. Pittsburgh, 1991.
[39]
D. G. Lowe, "Object recognition from local scale-invariant features," in Seventh IEEE
International Conference on Computer Vision, 1999, pp. 1150-1157.
[40]
Itseez. (2014). OpenCV Website. Available: http://opencv.org/
[41]
Nutonian Inc. (2014). Eureqa Desktop. Available:
http://www.nutonian.com/products/eureqa/