Brigham Young University
2006 AUVSI Student Competition
Student Team and Advisors
(alphabetical order)
Students
Neil Johnson - MS Mechanical Engineering
Paul Millett - BS Electrical Engineering
Matt Nokleby - BS Electrical Engineering
Breton Prall - BS Electrical Engineering
Nathan Rasmussen - BS Computer Science
Andrés Rodríguez - BS Electrical Engineering
Faculty
Dr. Randy Beard - Department of Electrical and Computer Engineering
Dr. Tim McLain - Department of Mechanical Engineering
Dr. Clark Taylor - Department of Electrical and Computer Engineering
Abstract
This paper describes the engineering of a mini-unmanned air vehicle (mUAV)
to accomplish the AUVSI mission objectives. The objectives require the mUAV to
autonomously search a potentially dynamic area and geolocate targets. The paper
describes the autopilot architecture, the mUAV airframe, the ground control station,
and the software used in the system. Safety and other features are described. The
mUAV is tested for its ability to take off autonomously, plan paths dynamically, follow
waypoints, geolocate targets, and land autonomously.
1 Introduction
Research to improve the capabilities of autonomous aircraft is of growing interest in the
military, the private sector, and academia. Brigham Young University (BYU) encourages this
research through the work performed at BYU’s Multiple Agent Intelligent Coordination and
Control (MAGICC) Lab. The MAGICC Lab is staffed primarily by graduate students; however,
this year undergraduate students had the opportunity to work with the MAGICC Lab for one of
Figure 1: BYU’s mUAV next to a yardstick.
their senior projects. They gained experience in the design, development, and implementation of mUAVs. Some of these students were given the opportunity to improve their designs
and participate as a team in the Student UAV Competition sponsored by the Association
for Unmanned Vehicle Systems International (AUVSI). “Los Buscadores” (Spanish for “The
Searchers”) is this year’s team representing Brigham Young University.
The AUVSI Student Competition requires a UAV to autonomously traverse a given
terrain and find, count, and geolocate ground-based targets no smaller than 4 feet by 4 feet.
The UAV must avoid no-fly zones, have the capacity to adapt to a new search area while
airborne, and maintain an altitude between 50 ft and 500 ft. Extra points are
awarded for higher image resolution and autonomous takeoff and landing. The mission must
be completed within 40 minutes.
This paper describes the merits of our mUAV system that satisfies the AUVSI mission
guidelines. First, a description of the mUAV hardware system is given. This includes a
description of the airframe, the autopilot and its sensors, the onboard vision hardware and
the communication devices. Second, a description of the mUAV software is given. This
includes a description of the autonomous takeoff/landing, the fail-safes, the ground control
station, the dynamic path planning, and the image processing algorithms.
2 mUAV System Hardware Description
The system hardware is composed of the airframe, the autopilot and its sensors, the onboard
vision hardware, and the communication devices. A description of each component follows:
2.1 Airframe
The airframe used is a flying wing with an expanded payload bay and servo-driven elevons
designed by the MAGICC Lab (Figure 1). It has a wingspan of 152 cm, a length of 58
cm, and a width of 12 cm. It weighs 4.5 lbs with the equipment installed and 2.5 lbs without.
It is propelled by a brushless electric motor driven by an electronic speed control and powered
by four multi-cell lithium polymer batteries. Although the propeller is exposed, we have included
an on/off switch for added safety. Several other features make our mUAV
particularly safe. These include: the use of batteries
Figure 2: Kestrel™ Autopilot version 1.45.
instead of combustible fuel, the inability to cause major damage due to its light weight, and
the fail-safe modes, which are explained in detail later on.
2.2 Autopilot
The aircraft carries a Kestrel™ Autopilot (Figure 2) manufactured by Procerus Technologies
[13]. The Kestrel™ Autopilot system uses a Rabbit Semiconductor microprocessor which can
be programmed using Dynamic C (a variant of ANSI C with special operators
that take advantage of the Rabbit microprocessor’s abilities).
The source code for the Kestrel™ Autopilot was primarily developed by the MAGICC
Lab. A detailed description of the hardware and software architectures and the associated
control algorithms can be found in [6]. We made additional software changes to adapt the
autopilot to our needs, as this paper explains.
At under 40 grams, the Procerus Kestrel™ Autopilot is small, light, and robust. The
autopilot has rate gyros and accelerometers covering the three axes of motion for attitude
estimation. Two other sensors measure altitude and airspeed: an absolute
pressure sensor is used to estimate altitude, and a pitot tube exposed to the airstream
outside the mUAV is connected to a differential pressure sensor, which uses the difference
between total and static pressure to determine airspeed. The autopilot is connected
to an external GPS unit. The GPS used is a Furuno Binary operating at 9600 baud, 3.3
volt TTL [3] (Figure 3). To improve communication with satellites, our GPS unit was
placed away from possible sources of interference such as the modem antenna and the video
transmitter. With GPS reception, the aircraft is able to estimate its position, ground speed,
ground track, and heading.
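For reference, the differential pressure reading $\Delta P$ maps to airspeed through the
standard incompressible pitot relation

$$ v = \sqrt{\frac{2\,\Delta P}{\rho}}, $$

where $\rho$ is the air density; the autopilot's actual calibration may differ slightly from
this textbook form.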
Figure 3: Furuno Binary GPS.
Figure 4: (left) Aerocomm wireless modem. (right) Procerus Commbox.
2.3 Communication Devices
The digital radio modem is a key element of the autopilot system because it creates the data
link between the mUAV and the ground station. The Aerocomm 900 MHz wireless modem
[1] is used (Figure 4). It is attached through a coaxial cable to a dipole communication
antenna. The ground station uses the Procerus Commbox, which contains the same modem
as the mUAV. It receives position and status updates from the aircraft, passes that
information to our ground control station, and transmits commands and waypoints.
2.4 Onboard Vision Hardware
The ability to see from our mUAV is crucial to mission completion, since this is how we
find, count, and localize targets. High resolution is important to allow the user to identify
these targets. A description of the onboard vision system that meets these requirements
follows.
The BYU flying wing airframe is equipped with a MAGICC Lab-designed, servo-driven
gimbaled camera (Figure 5). The gimbal azimuth axis has approximately 135 degrees of
Figure 5: (left) Gimbaled camera as it sits in an upside-down airframe. (right) Closer view
of the camera.
travel while the elevation axis has 100 degrees. Both axes are driven by RC aircraft
servos controlled by an external Pololu micro serial driver board that attaches to a serial port
on the Kestrel™ Autopilot [12]. The camera used is the KX-141 color CCD camera provided
through Black Widow AV [2]. This camera is well suited to the task: coupled with the
transmitter described below, it transmits 30 frames per second of 640x480 interlaced color
video in the NTSC standard.
The video transmitter and receiver set used is a Black Widow AV 2.4 GHz, 600 mW system
[2]. The video is fed into an ImperX VCE frame grabber, which converts the video signal to
digital images for processing in the ground control station.
3 mUAV System Software Description
The system software includes the autopilot and the ground control station. The autopilot
contains the software for the low-level control of the aircraft, including waypoint following,
autonomous takeoff/landing, and fail-safe modes. To accomplish these tasks we chose
the MAGICC Lab software for the Kestrel™ Autopilot, which includes most of the autopilot
functionality. However, we had to configure the autonomous takeoff/landing and fail-safes
to meet AUVSI mission requirements. The ground station software must act as a user
interface to send commands to the mUAV as well as provide path planning and image
processing capabilities.
A description of the software we use for autonomous takeoff/landing, the fail-safe modes,
and the ground control station follows:
3.1 Autonomous Takeoff / Landing
Our mUAV employs mechanisms for autonomous takeoff and landing. Our airframe requires
a hand-thrown takeoff. After the mUAV is launched, the autopilot utilizes the aircraft’s
pitch and throttle to gain speed. When a specified airspeed is reached, the mUAV pitches
Figure 6: The mUAV’s landing procedures.
up and begins to spiral around the takeoff location while gaining altitude. Once the mUAV
reaches its desired altitude, it exits takeoff mode and begins following waypoints.
For autonomous landing, the user specifies a landing point and an approach point. The
mUAV will spiral around the approach point at a specified airspeed and descent rate. After
reaching a specified altitude, the mUAV “breaks” from the spiral and continues descending
towards the landing location [11] (Figure 6).
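The mode transitions described above can be summarized as a small state machine. The
following is a sketch only: the mode names and thresholds are illustrative, not the actual
Kestrel firmware (which is written in Dynamic C).

// Sketch of the takeoff/landing mode logic described above.
// All names and thresholds are illustrative assumptions.
enum FlightMode { TAKEOFF_CLIMB, WAYPOINT_FOLLOW, LAND_SPIRAL, LAND_FINAL };

struct State { double airspeed, altitude; };

FlightMode step(FlightMode mode, const State& s) {
    const double climbAirspeed  = 13.0;  // m/s, assumed threshold
    const double cruiseAltitude = 100.0; // m AGL, assumed
    const double breakAltitude  = 20.0;  // m AGL, assumed
    switch (mode) {
    case TAKEOFF_CLIMB:
        // Throttle and pitch-up after launch; spiral about the launch
        // point until the desired altitude is reached.
        if (s.airspeed >= climbAirspeed && s.altitude >= cruiseAltitude)
            return WAYPOINT_FOLLOW;      // exit takeoff, follow waypoints
        return TAKEOFF_CLIMB;
    case LAND_SPIRAL:
        // Spiral down about the approach point at a set descent rate.
        if (s.altitude <= breakAltitude)
            return LAND_FINAL;           // break toward the landing point
        return LAND_SPIRAL;
    default:
        return mode;
    }
}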
3.2 Fail-safe Modes
Fail-safes refer to modes of aircraft control that act as a safety net in the case of system
failure. If we lose control over the aircraft for any reason, these modes allow the mUAV
to return safely to the ground. The fail-safes prescribed by the competition have a
catastrophic effect upon a flying wing aircraft, causing it to lose control and spiral into a
crash. Upon approval from AUVSI, more suitable fail-safes are implemented on this model.
Our mUAV is programmed with fail-safe modes to handle the following three
failures:
failures:
• lost GPS signal
• lost communication with the ground station
• unexpected or anomalous UAV behavior in flight
This section elaborates on the functionality and effectiveness of these fail-safe modes.
3.2.1 Lost GPS fail-safe mode
The autopilot relies on GPS coordinates to determine the location of the mUAV; losing the
signal from GPS satellites will cause the autopilot to malfunction. If the GPS signal is lost for
more than 5 seconds, the autopilot will enter Lost GPS Fail-safe mode. In this mode, the
aircraft will enter a 15° bank and put itself in a loiter. If the mUAV does not reacquire a GPS
signal after 5 minutes, the autopilot will cut throttle and the mUAV will spiral safely down
to the ground.
3.2.2 Lost Communication fail-safe mode
Communication is essential to control the aircraft; however, if communication is lost the
mUAV will continue to execute the last commands it received, even though no additional
commands may be given. For this reason, the aircraft can safely fly in autonomous mode
without ground station communication for much longer than without a GPS signal. If
communication with the ground station is lost for more than 10 seconds, the mUAV will enter
Lost Communication Fail-safe mode, in which it will return to a predetermined location,
denoted GPS home, and circle it while waiting for communication to be reestablished. Once
communication is reacquired the mUAV will resume the current mission. If communication
is not reestablished within 5 minutes, the mUAV will spiral down to the ground and safely
land.
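The timing logic of these two fail-safes can be expressed compactly as sketched below; the
constants mirror the thresholds above, but the structure and names are our own, not the
Kestrel firmware's.

// Sketch of the lost-GPS / lost-communication timers of Sections
// 3.2.1 and 3.2.2. Names and structure are illustrative assumptions.
#include <cstdint>

struct FailsafeTimers {
    uint32_t lastGpsMs = 0, lastCommMs = 0; // times of last valid packet
};

enum Action { NOMINAL, LOITER_15DEG_BANK, RETURN_TO_GPS_HOME, SPIRAL_DOWN };

Action checkFailsafes(const FailsafeTimers& t, uint32_t nowMs) {
    const uint32_t gpsMs  = nowMs - t.lastGpsMs;
    const uint32_t commMs = nowMs - t.lastCommMs;
    // 5 minutes without GPS or comm: cut throttle, spiral to the ground.
    if (gpsMs > 300000 || commMs > 300000) return SPIRAL_DOWN;
    // More than 5 s without GPS: 15 degree bank, loiter in place.
    if (gpsMs > 5000) return LOITER_15DEG_BANK;
    // More than 10 s without comm: circle GPS home awaiting reconnection.
    if (commMs > 10000) return RETURN_TO_GPS_HOME;
    return NOMINAL;
}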
3.2.3 Manual override
When dealing with any novel or developing technology, unexpected behavior often occurs.
Autonomous mUAV flight is no different. To enable us to handle any anomalous behavior
or autopilot failures other than the two already mentioned, we have added a Manual Override
to our autonomous mUAV system. Unlike the other fail-safe modes, this mode requires user
intervention. At any time during autonomous flight, the Remote Control (RC) pilot may
bias the autopilot commands through the RC controller. This will not completely override
the autopilot; it will only influence the control surface motion. If the RC pilot determines
that the autopilot is no longer doing what it should, she can enter the manual override mode,
denoted Pilot-In-Control (PIC), by flipping a switch on the RC controller. With the system
in PIC mode, the RC pilot has total control over the aircraft.
3.3 Ground Control Station
The ground station must handle all communication with the mUAV and send navigation
commands, as well as provide a useful interface for the user's vision and path planning tasks.
For this task we have chosen the Virtual Cockpit (VC) software (Figure 7) developed in the
MAGICC Lab. VC is Windows-based ground station software for the Kestrel™ Autopilot.
The VC allows operators to configure and monitor the navigation, control, and vision areas of
Figure 7: Screen shot of Virtual Cockpit with 4 waypoints shown.
the system. The navigation functionality includes setting, changing, and uploading waypoints
to the mUAV. The user can also monitor low-level control performance and tune control loop
gains. The software provides real-time mUAV status information such as GPS position,
altitude, and airspeed, and displays the aircraft pose on an artificial horizon. The vision area
displays video streamed from the mUAV and has a built-in framework which allows us to
develop and use our own processing algorithms within the VC. This is where we perform the
tracking and localization described later. The VC does not come with path planning
functionality, but we customized it to run our path planner within its environment.
3.4 Path Planning
The path planning algorithm allows the mUAV to effectively search a given area while
avoiding no-fly zones (Figure 9). The algorithm generates an n×n grid of points (waypoints)
evenly spaced throughout the terrain. Waypoints that fall in no-fly zones are eliminated. The
path is planned by using the smoothed Rapidly-exploring Random Tree (RRT) algorithm
[9] to find a safe path from the current point to all remaining grid points (Figure 8). We
select the grid point with the minimum-distance path. This path is added to the overall
path, and the current point is updated. This process is repeated until all grid locations have
been visited.
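The greedy ordering at the heart of this planner can be sketched as follows. For brevity,
straight-line distance stands in for the smoothed-RRT path cost, and grid points inside
no-fly zones are assumed to be removed already; the real planner runs the RRT from the
current point to every remaining grid point and keeps the cheapest path.

// Sketch of the greedy coverage ordering described above. Euclidean
// distance is a stand-in for the RRT path cost; names are ours.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

std::vector<Pt> planCoverage(Pt start, std::vector<Pt> grid) {
    std::vector<Pt> path{start};
    Pt cur = start;
    while (!grid.empty()) {
        size_t best = 0;                         // nearest unvisited point
        for (size_t i = 1; i < grid.size(); ++i)
            if (dist(cur, grid[i]) < dist(cur, grid[best])) best = i;
        cur = grid[best];
        path.push_back(cur);                     // real planner appends RRT path
        grid.erase(grid.begin() + best);         // mark grid point visited
    }
    return path;
}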
Figure 8: The randomly generated points used in the Rapidly-exploring Random Tree.
Figure 9: (left) Path plan simulation. (right) A path plan to search while avoiding trees.
The AUVSI competition requires that the path planner be able to respond to dynamic
search regions and no-fly zones. Planning a new path based on the new search criteria is a
relatively simple process. However, we desire that the dynamically-planned path does not
search areas that have already been visited. Thus, we keep a record in the VC of locations
that the aircraft has already visited. When planning a new path mid-flight, we feed this
record into the path planner along with the new search criteria. The path planner plans the
new path, ignoring areas which have already been visited.
This algorithm does an excellent job of exhaustively searching a region while avoiding
no-fly zones. Although repeatedly invoking the RRT algorithm is computationally expensive,
for the purposes of the competition the path is planned sufficiently quickly. The algorithm,
however, does not consider the turning radius of the mUAV, often resulting in paths that
contain sharp turns. While the airframe used may only approximate these difficult turns,
the algorithm still provides a satisfactory solution to the path planning problem.
3.5 Image Processing
Our vision system is designed to find targets and calculate GPS locations for those targets
based on image and telemetry data. We designed robust algorithms which do not need much
a priori information. Targets are dynamically selected by the user during flight, and
feature-based tracking algorithms are used to track targets across multiple frames. Given a target’s
pixel coordinates in each frame, a least squares approach is used to find the GPS coordinates
of the target.
3.5.1 Gimbaled Camera Control
The gimbal can be aimed using several modes, including manual adjustment and ground
target aiming. Typically, we use the second mode in which the gimbal points its camera
at specified Universal Transverse Mercator (UTM) coordinates of a point of interest on the
ground. As we refine our geolocation estimate, the gimbal adjusts to point more accurately
at the intended target.
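A sketch of the ground-target aiming geometry follows: with the UAV and target expressed
in a local north-east-down (NED) frame (UTM coordinates minus a shared origin), the
commanded gimbal angles reduce to two arctangents. The names are ours, and attitude
compensation and the servo travel limits (135 and 100 degrees) are omitted.

// Sketch of pointing the gimbal at a ground target; illustrative only.
#include <cmath>

struct Ned { double n, e, d; }; // meters; d positive down

void gimbalAngles(const Ned& uav, const Ned& tgt, double headingRad,
                  double& azimuthRad, double& elevationRad) {
    const double dn = tgt.n - uav.n, de = tgt.e - uav.e;
    const double dd = tgt.d - uav.d;            // positive: target below UAV
    // Azimuth measured relative to the aircraft nose.
    azimuthRad = std::atan2(de, dn) - headingRad;
    // Elevation below the horizon toward the target.
    elevationRad = std::atan2(dd, std::hypot(dn, de));
}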
3.5.2 Tracking
In order to successfully geolocate targets, we must be able to track them through several
frames. Over the course of our preparation we have developed two different tracking methods:
feature correspondence and template matching. Each method has specific strengths and
weaknesses. We have continued to work with both, as we do not know which will best track
the targets we will need to find during this competition.
Feature Correspondence This method takes N features from a frame and finds their
locations in the next frame. An Open Source Computer Vision Library (OpenCV) [4] function
is used to match features using a pyramidal Lucas-Kanade scheme [7]. When corresponding
features (points) on a planar surface are found in two frames or images taken from different
perspectives, a linear mapping known as a homography can be calculated to transform
points in one frame to the other. This relationship is shown as
$$ x_1 = H x_2 \quad (1) $$

where $x_1$ and $x_2$ are homogeneous points of the form $[x\;y\;1]^T$ and $H$ is the $3 \times 3$ homography matrix.
This mapping is only valid if everything in the scene being photographed lies on a single
planar surface [8]. Because we are flying high above the ground, this is a reasonable
assumption. In this system, “good” features in the image are tracked between frames, and
the homography mapping is computed (Figure 10). The coordinates of the target clicked
on by the user can then be computed in each frame. The strength of this system lies in its
ability to track targets through large camera motion. There is some slipping
of the tracking over time due to noise in the image and because the ground is not perfectly
Figure 10: Vectors showing tracking of features in an image.
planar. To compensate for this gradual slipping, we added the capability to manually refine
the target’s estimated image location.
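The shape of this tracker is sketched below using OpenCV's modern C++ API; our 2006
implementation used the original C interface, and the parameter values here are illustrative.

// Sketch: propagate the clicked target via the frame-to-frame homography.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

cv::Point2f trackTarget(const cv::Mat& prevGray, const cv::Mat& currGray,
                        cv::Point2f target) {
    std::vector<cv::Point2f> prevPts, currPts;
    cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10); // "good" features
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts,
                             status, err);                     // pyramidal LK [7]
    std::vector<cv::Point2f> p1, p2;
    for (size_t i = 0; i < status.size(); ++i)                 // keep matched pairs
        if (status[i]) { p1.push_back(prevPts[i]); p2.push_back(currPts[i]); }
    // Planar-scene assumption: a homography maps one frame to the next.
    cv::Mat H = cv::findHomography(p1, p2, cv::RANSAC);
    std::vector<cv::Point2f> in{target}, out;
    cv::perspectiveTransform(in, out, H);   // apply the mapping of eq. (1)
    return out[0];
}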
Template Matching This tracking system builds a template around the point the
user clicks and matches it in the following image. A threshold is calculated to determine if
the match is good enough. A 50x50 pixel template with a 150x150 search area provides
a good balance of tracking ability and speed (Figure 11). To track rotational
motion as well as translational, the template block is updated after every search. To reduce
the possibility of slipping off the target, a copy of the original template is saved. Every other
frame, two searches are performed: one with the previous best-match template and one with
the original template. The better match is then selected using a statistical weighting. This
weighting provides high resilience to image noise and camera jitter. The strongest feature
of this method is its ability to recapture a target that has been temporarily lost due to
quick camera motion or jerk. OpenCV’s functions and its ability to store and process regions
of interest (ROIs) within an image made this method simple and straightforward to
implement. Weaknesses include an inability to track large jumps in camera motion and a
tendency to slide along edges in the image.
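A minimal sketch of one search step, again in OpenCV's modern C++ API with illustrative
parameters, is given below; the template-refresh and original-template bookkeeping described
above would wrap around this routine.

// Sketch: match a template inside a 150x150 window around the last fix.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Point matchInWindow(const cv::Mat& frame, const cv::Mat& tmpl,
                        cv::Point center, double& score) {
    const int half = 75;                                   // 150x150 window
    cv::Rect win(center.x - half, center.y - half, 2 * half, 2 * half);
    win &= cv::Rect(0, 0, frame.cols, frame.rows);         // clip to the image
    cv::Mat result;
    cv::matchTemplate(frame(win), tmpl, result, cv::TM_CCOEFF_NORMED);
    cv::Point best;
    cv::minMaxLoc(result, nullptr, &score, nullptr, &best); // best match score
    // Return the matched template's center in full-frame coordinates.
    return {win.x + best.x + tmpl.cols / 2, win.y + best.y + tmpl.rows / 2};
}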
3.5.3 Target Localization
By tracking the target through multiple frames, as described previously, several line
trajectories can be generated. We would like to find the intersection of all these trajectories.
This, however, is not possible due to noise and other disturbances. Thus we seek to solve the
equation
$$ p = \arg \min_{\lambda_1, \lambda_2, \ldots, \lambda_n} \bigl\| (c_1 + \lambda_1 x_1) - (c_2 + \lambda_2 x_2) - \cdots - (c_n + \lambda_n x_n) \bigr\|^2 \quad (2) $$
where $c + \lambda x$ defines a trajectory, $p$ is the location of the target in 3-space, and $n$ is
the number of trajectories. This equation represents the least squares problem we want to
Figure 11: Block-search-based feature tracking. The small box is the template; the large box
is the search area.
solve, or the minimum error between trajectories. A concise matrix equation can be formed
from this equation and solved [10].
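As a sketch of one concrete form of that matrix equation, following [10] but in our own
notation (with $\hat{x}_i$ the unit direction of trajectory $i$ and $c_i$ its origin): the point
$p$ minimizing the summed squared distance to all $n$ trajectories satisfies a $3 \times 3$
linear system obtained by setting the gradient to zero,

$$ \min_p \sum_{i=1}^{n} \bigl\| (I - \hat{x}_i \hat{x}_i^{\top})(p - c_i) \bigr\|^2 \;\Longrightarrow\; \Bigl( \sum_{i=1}^{n} (I - \hat{x}_i \hat{x}_i^{\top}) \Bigr) p = \sum_{i=1}^{n} (I - \hat{x}_i \hat{x}_i^{\top})\, c_i , $$

which can be solved in closed form.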
4 Conclusion
Meeting the mission requirements required several months of designing, developing,
implementing, and interfacing. With the assistance of MAGICC Lab students, we built an
airframe and trained pilots to fly it under RC control. Autonomous flight was achieved by
choosing and refining control loop gains for the Kestrel™ Autopilot. The vision and path
planning algorithms were implemented and tested in the lab. We used the open source
simulator “Aviones” [5] to test our algorithms. Once we were pleased with the results, we
began the task of testing the algorithms in actual flight. After some refinement, our mUAV
is able to take off, follow waypoints, and land autonomously. We are confident in the ability
of our solution to fully complete all the requirements of the AUVSI Student Competition.
5 Acknowledgments
We are grateful to our sponsors for their support: Procerus Technologies, Raytheon, L-3
Communications, and The International Foundation for Telemetering (IFT). We also express
appreciation to the research students at the MAGICC Lab and the engineers at Procerus
Technologies for their time and patient mentoring.
References
[1] Aerocomm, 2006. http://www.aerocomm.com.
[2] Black Widow AV, 2006. http://www.blackwidowav.com.
[3] Furuno GPS OEM/timing division, 2006. http://www.furunogps.com.
[4] OpenCV, SourceForge, 2006. http://sourceforge.net/projects/opencv.
[5] Aviones, SourceForge, 2006. http://sourceforge.net/projects/aviones.
[6] Randal Beard, Derek Kingston, Morgan Quigley, Deryl Snyder, Reed Christiansen, Walt
Johnson, Timothy McLain, and Mike Goodrich. Autonomous vehicle technologies for
small fixed-wing UAVs. Journal of Aerospace Computing, Information, and Communication,
2(1):92–108, January 2005.
[7] Jean-Yves Bouguet. Pyramidal implementation of the Lucas-Kanade feature tracker:
Description of the algorithm. Technical report, Intel Corporation, Microprocessor Research
Labs, 2000.
[8] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision.
Cambridge University Press, 2004.
[9] Steven M. LaValle. Rapidly-exploring random trees: A new tool for path planning.
Technical report, Iowa State University, 1998.
[10] Yi Ma, Stefano Soatto, Jana Kosecka, and S. Shankar Sastry. An Invitation to 3-D
Vision: From Images to Geometric Models. Springer-Verlag, New York, NY, 2004.
[11] Morgan Quigley, Blake Barber, and Michael A. Goodrich. Towards real-world searching
with fixed-wing mini-UAVs. In IEEE International Conference on Intelligent Robots
and Systems (IROS), Edmonton, Alberta, Aug 2005.
[12] Morgan Quigley, Michael A. Goodrich, Steve Griffiths, and Andrew Eldredge. Target
acquisition, localization, and surveillance using a fixed-wing mini-UAV and gimbaled
camera. In IEEE International Conference on Robotics and Automation, Barcelona,
Spain, April 2005.
[13] Procerus Technologies. Kestrel autopilot system, 2006. http://www.procerusuav.
com/Documents/Developers_Kit.pdf.