
FINAL GRANT REPORT
An Exploratory Framework for Introspective Machine Learning of
Helicopter Flight Dynamics
Dan Tappan
Associate Professor
Computer Science
Introduction
Autonomous aircraft have become an $11 billion-per-year industry worldwide, a good portion of
which is in this region. Of the 1,250+ aviation-related companies throughout the state, Spokane is
home to the second-largest concentration outside of the Seattle area. Companies at all levels of the
supply chain are continually refining and perfecting their contributions to the larger industry.
This work focuses on the flight aspects. Machines can already learn to fly well for many mainstream
purposes, but most approaches are very disconnected from the way human pilots learn to fly, and the
computational models provide little insight into the learning process of either machines or humans. A
better understanding would advance the field of artificial intelligence and autonomous intelligent
systems by extending this knowledge to other environments where machine learning could be
advantageous. The original grant proposal described a plan to build an acquisition system for
recording flight data from a full-sized Robinson R22 helicopter (Figure 1); to collect data from basic
flight maneuvers as teaching examples of how to perform them; to investigate data processing and
fusion techniques to merge data from numerous repetitions of each maneuver, accounting for
variation and errors; to build a rudimentary software flight-dynamics model based on the nature of
the collected data; and to investigate machine-learning techniques, especially genetic programming,
to allow the system to learn and explain how to perform the same actions as the human pilot.
Figure 1
Figure 2
A key limitation of most existing work is that—for performance and efficiency—machines
learn to fly irrespective of how a human would learn, so their approaches offer little insight into the
learning and flying process itself. As any human helicopter pilot will self-deprecatingly admit,
learning to fly is a ridiculously unsuccessful process until it finally works, and even then they cannot
articulate how they made this magical transition. Humans are not introspective and expressive
enough to convey this practical experience and wisdom to one another. The novel goal of this
work was to develop the main components of a proof-of-concept autonomous flight system that
can not only learn how to fly to a reasonable degree but, more importantly, can also perform
self-reflection and meta-analysis to explain how it flies and how it learned to do so.
Work Accomplished / Results
The accomplishments and results are best discussed together, organized by the objective in the
grant proposal that each aligns with most closely.
Objective 1: To build an acquisition system for recording flight data from a full-sized helicopter.
There are two aspects to the acquisition system. The first involves the collection of non-visual data
from a variety of sensors that provide inputs to the system that generally correspond to what a
human pilot perceives (e.g., balance, sense of movement), as shown in Figure 2. These sensors
were investigated in great detail and ultimately became the basis of an entire Master's thesis
defended by Josh Czoski: A Violin Practice Tool Using 9-Axis Sensor Fusion. Figures 3a and 3b
respectively show part of the hardware and the results.
Figure 3a
Figure 3b
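To give a flavor of how the non-visual acquisition works, the sketch below reads a 9-axis inertial sensor and blends the gyroscope and accelerometer with a simple complementary filter to estimate pitch and roll. It is only a minimal illustration: the read_imu() stub, the axis convention, and the blending constant are placeholders, and the thesis work uses considerably more sophisticated fusion.

```python
# Minimal sketch of 9-axis (accelerometer/gyroscope/magnetometer) acquisition
# with a complementary filter for pitch and roll. The read_imu() stub stands in
# for whatever sensor driver the actual hardware uses; the magnetometer (used
# for heading) is omitted here for brevity.
import math
import time

def read_imu():
    """Placeholder: return ((ax, ay, az), (gx, gy, gz), (mx, my, mz))."""
    return (0.0, 0.0, 9.81), (0.0, 0.0, 0.0), (0.2, 0.0, 0.4)

def complementary_filter(pitch, roll, accel, gyro, dt, alpha=0.98):
    """Blend gyro integration (smooth but drifting) with accel tilt (noisy but absolute)."""
    ax, ay, az = accel
    gx, gy, _ = gyro                      # assumed convention: gx = roll rate, gy = pitch rate
    # Tilt angles implied by the gravity vector.
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    accel_roll = math.atan2(ay, az)
    # Weighted blend: mostly integrated gyro, corrected slowly by the accelerometer.
    pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * accel_pitch
    roll = alpha * (roll + gx * dt) + (1.0 - alpha) * accel_roll
    return pitch, roll

if __name__ == "__main__":
    pitch = roll = 0.0
    last = time.monotonic()
    for _ in range(10):                   # acquisition loop (abbreviated)
        accel, gyro, mag = read_imu()
        now = time.monotonic()
        pitch, roll = complementary_filter(pitch, roll, accel, gyro, now - last)
        last = now
        print(f"pitch={math.degrees(pitch):6.2f}  roll={math.degrees(roll):6.2f}")
        time.sleep(0.01)
```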
The second aspect involves the collection of visual data from an array of inexpensive
cameras that provide inputs generally corresponding to what a human pilot sees (Figure 4). This
aspect involved two parts. One records the state of the three primary flight controls: cyclic and
collective pitch controls and anti-torque pedals. The second records the state of the four primary
flight instruments of interest: altimeter, airspeed indicator, vertical speed indicator, and manifold-pressure gauge. The vision processing ultimately became the entire Master's thesis defended by
Matt Hempleman: Image Processing for Machine Learning of Helicopter Flight Dynamics.
Figure 4
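As a rough illustration of the visual side, the sketch below estimates a round-dial instrument reading (e.g., the manifold-pressure gauge) from a single camera frame by finding the needle with a Hough line transform. It assumes OpenCV is available; the crop geometry, thresholds, and angle-to-value calibration are invented placeholders, not the pipeline actually developed in the thesis.

```python
# Minimal sketch of reading a round-dial instrument from one camera frame with
# OpenCV. The gauge center, radius, and calibration constants are placeholders.
import math
import cv2
import numpy as np

def needle_angle(frame_bgr, center, radius):
    """Estimate the needle angle (degrees, image coordinates) of a dial at `center`."""
    x, y = center
    roi = frame_bgr[y - radius:y + radius, x - radius:x + radius]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=radius // 2, maxLineGap=5)
    if lines is None:
        return None
    # Take the longest detected line segment as the needle.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def angle_to_value(angle_deg, zero_deg=-135.0, span_deg=270.0, full_scale=35.0):
    """Map needle angle to an instrument reading (calibration values are placeholders)."""
    return (angle_deg - zero_deg) / span_deg * full_scale

if __name__ == "__main__":
    frame = cv2.imread("gauge_frame.png")          # one captured camera frame
    if frame is None:
        raise SystemExit("no sample frame available")
    angle = needle_angle(frame, center=(320, 240), radius=100)
    if angle is not None:
        print("estimated reading:", angle_to_value(angle))
```

In practice the same idea applies to the control positions as well, with each camera calibrated once against known control deflections or instrument readings.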
Objective 2: To collect data from basic flight maneuvers as teaching examples of how to perform
them under various conditions.
Matt Hempleman's thesis required a significant amount of hands-on engineering experimentation
to find acceptable solutions. By no means was the final solution optimal, but it is usable. Most of
the 18.1 hours of flight time went toward trying different configurations under different flight
conditions (Figure 5). While the acquired data are generally valid (although some experiments were
complete failures), they are not consistent enough to use directly for the machine learning in Objective 5.
Nevertheless, this work did lay a solid foundation for follow-on work.
Figure 5
Objective 3: To investigate data processing and fusion techniques to merge data from numerous
repetitions of each maneuver, accounting for variation and errors.
Josh Czoski's thesis addressed multisensor acquisition and fusion for non-visual data. He
investigated various filtering techniques, especially Kalman filtering, which appear very promising
for this objective. Similarly, Matt Hempleman's thesis addressed the same for visual data. The
image-processing techniques he implemented and refined greatly helped improve the quality of the
results. As mentioned in Objective 2, the lack of repetition in the flight tests limits the immediate
usefulness of these processing methods. However, the work by these graduate students and by the
principal investigator built a viable framework. Figures 6 and 7 respectively show the results
visualized through Google Maps and through a tool developed for this project.
Figure 6
Figure 7
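For readers unfamiliar with the filtering involved, the following is a minimal sketch of a scalar Kalman filter of the kind investigated, here smoothing a noisy altitude trace during a hover. The process- and measurement-noise values are purely illustrative; the actual thesis work addresses multi-sensor, multi-dimensional fusion.

```python
# Minimal sketch of a scalar Kalman filter smoothing a noisy measurement stream.
# The noise variances below are purely illustrative.
class ScalarKalman:
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.5):
        self.x = x0      # state estimate
        self.p = p0      # estimate variance
        self.q = q       # process-noise variance (how fast the true state can drift)
        self.r = r       # measurement-noise variance (how noisy the sensor is)

    def update(self, z):
        # Predict: the state is assumed roughly constant, but uncertainty grows.
        self.p += self.q
        # Correct: blend prediction and measurement by their relative confidence.
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

if __name__ == "__main__":
    import random
    kf = ScalarKalman(x0=500.0)
    true_altitude = 500.0                        # feet, held constant in a hover
    for _ in range(20):
        noisy = true_altitude + random.gauss(0.0, 5.0)
        print(f"measured {noisy:7.2f}  filtered {kf.update(noisy):7.2f}")
```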
Objective 4: To build a rudimentary software flight-dynamics model based on the nature of the
collected data.
A preliminary version of the flight-dynamics model is complete. Its role is to allow the computer
to apply the same control inputs, and consequently observe the same outputs, as the human pilot
would. The model is a major simplification, and it still needs significant refinement
before it can be used for machine learning. For one, the underlying representation (Figure 8) is not
clear or intuitive.
Figure 8
However, a realistic interface (Figure 9) was developed to help make sense of it.
Figure 9
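The sketch below suggests the kind of simplification such a rudimentary model makes: control inputs (collective, cyclic, pedals) drive a handful of state variables that the "pilot," human or machine, observes through the simulated instruments. The coefficients and structure are invented for illustration and do not reflect the actual model or a real R22.

```python
# Rough sketch of a rudimentary flight-dynamics model: control inputs map to
# accelerations, which are integrated into a state observable through the
# simulated instruments. All coefficients here are invented for illustration.
from dataclasses import dataclass

@dataclass
class State:
    altitude: float = 0.0     # ft
    climb_rate: float = 0.0   # ft/s
    heading: float = 0.0      # deg
    airspeed: float = 0.0     # kt

def step(state: State, collective: float, cyclic_fwd: float, pedal: float,
         dt: float = 0.1) -> State:
    """Advance the simplified model one time step for the given control inputs."""
    # Collective (0..1) drives vertical acceleration around a nominal hover setting.
    climb_accel = 40.0 * (collective - 0.5)
    state.climb_rate += climb_accel * dt
    state.altitude = max(0.0, state.altitude + state.climb_rate * dt)
    # Forward cyclic (-1..1) trades into airspeed; pedals (-1..1) yaw the nose.
    state.airspeed = max(0.0, state.airspeed + 10.0 * cyclic_fwd * dt)
    state.heading = (state.heading + 30.0 * pedal * dt) % 360.0
    return state

if __name__ == "__main__":
    s = State(altitude=500.0)
    for _ in range(50):                      # five seconds of a gentle climb
        s = step(s, collective=0.55, cyclic_fwd=0.1, pedal=0.0)
    print(s)
```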
A fully functional version was fielded to a group of 32 test subjects in the PI's CSCD 350
Software Engineering course. This work resulted in the publication of a paper at the Modern
Artificial Intelligence and Computational Science conference, with Matt Hempleman as coauthor:
Toward Introspective Human Versus Machine Learning of Simulated Airplane Flight
Dynamics.
Objective 5: To investigate machine-learning techniques, especially genetic programming, to allow
the system to learn and explain how to perform the same actions as the human pilot.
This objective is still in its infancy. The rudimentary aspects have been investigated in the PI's
CSCD 480 Intelligent Systems course. The architecture for larger-scale applications is far enough
into development to have produced a publication on fly-by-wire flight-control systems (Figure 10) at
the IEEE Frontiers in Education conference: A Quasi-Network-Based Fly-by-Wire Simulation
Architecture for Teaching Software Engineering.
Figure 10
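As a minimal illustration of the genetic-programming direction, the sketch below evolves small expression trees that map an observed state (altitude error and climb rate) to a collective adjustment, scored on a toy hover-hold task. One appeal of this representation is that the winning tree can be printed and inspected, which is what makes the introspective goal plausible. The primitives, toy dynamics, and mutation-only variation (real genetic programming would also use crossover) are illustrative assumptions, not the project's actual learner.

```python
# Minimal genetic-programming sketch: evolve expression trees mapping
# (altitude error, climb rate) to a collective adjustment for a toy hover hold.
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['err', 'rate', 'const']

def random_tree(depth=3):
    """Build a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        t = random.choice(TERMINALS)
        return ('const', random.uniform(-1, 1)) if t == 'const' else t
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, err, rate):
    if tree == 'err':
        return err
    if tree == 'rate':
        return rate
    if isinstance(tree, tuple) and tree[0] == 'const':
        return tree[1]
    op, left, right = tree
    return OPS[op](evaluate(left, err, rate), evaluate(right, err, rate))

def fitness(tree):
    """Lower is better: total altitude error while trying to hold 500 ft."""
    alt, rate, total = 480.0, 0.0, 0.0
    for _ in range(200):
        err = 500.0 - alt
        collective = max(-1.0, min(1.0, evaluate(tree, err / 100.0, rate / 10.0)))
        rate += 4.0 * collective - 0.1 * rate      # toy vertical dynamics
        alt += rate * 0.1
        total += abs(err)
    return total

def mutate(tree):
    """Mutation-only variation for brevity; real GP would also use crossover."""
    if random.random() < 0.2:
        return random_tree()                       # replace this subtree entirely
    if isinstance(tree, tuple) and tree[0] in OPS:
        return (tree[0], mutate(tree[1]), mutate(tree[2]))
    return tree

if __name__ == "__main__":
    population = [random_tree() for _ in range(50)]
    for gen in range(30):
        population.sort(key=fitness)
        best = population[0]
        print(f"gen {gen:2d}  fitness {fitness(best):9.1f}  tree {best}")
        # Keep the best half; refill with mutated copies of survivors.
        survivors = population[:25]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
```

The printed best tree at each generation is, in effect, a human-readable statement of the control strategy the system has learned so far, which is the property this objective ultimately aims to exploit.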
The long-term objectives are to build a highly expressive, introspective learning framework
to investigate aviation-related themes from advanced perspectives, as well as to establish and foster
a collaborative multidisciplinary work environment with regional industry. In addition, this work is
intended to increase opportunities for the EWU engineering and computer science programs to
collaborate with this industry. The helicopter aspects, in particular, strengthened EWU's relationship
with Inland Helicopters, Inc. at Felts Field in Spokane. A similar effort is underway with Northwest
Flight School and Aircraft Solutions at Spokane International Airport. Through another paper related
to this work (Multiagent Test Range: Fostering Disciplined Software Engineering Practices in
Students via Modeling, Simulation, Visualization, and Analysis), EWU now has national
recognition as a sponsor institution for the Alabama Modeling and Simulation Conference in
Huntsville, which is the worldwide center of such activities. Finally, in conjunction with the EWU
Office of Global Initiatives and Career Services, the engineering and computer science programs
are now poised to participate in an ongoing international student internship and exchange
program with the University of Passau in Germany and the multinational corporation ZF
Friedrichshafen. The PI will be coordinating this effort in Germany this summer.
Work to Accomplish
The primary goal of this grant work was to investigate a wide range of valuable activities in order
to seed future opportunities. The current work still needs refinement to achieve an adequate level
of credibility. To this end, it especially needs more flight time. The proposal dedicated half the
flight time to development and the other half to actual data collection. In reality, the development
process was far more complex than expected. It was further complicated by integrating this work
into two Master's theses, which operated on different timelines. Weather and aircraft availability
also played a significant role.
While not strictly part of the original proposal, two parallel efforts have converged to
produce a goldmine of publication opportunities. This work operates at a low level, learning to fly a
single virtual aircraft, and it can also operate at a higher level, handling navigation and coordination
of multiple aircraft. The framework for this higher-level aspect is already in place (Figure 11). It was published
in the Journal of Computing Sciences in Colleges as A Holistic Multidisciplinary Approach to
Teaching Software Engineering Through Air Traffic Control.
Figure 11
The details of this aspect are well beyond the scope of this report. However, they integrate
very nicely with the overall perspective of learning how to fly maneuvers appropriately. Figure 12
provides a variety of examples, which to some degree correspond to procedures that every
automobile driver is familiar with (e.g., stop signs and lights, turns, and parking).
Figure 12
Finally, almost all of this related work eventually finds its way into the courses the PI
teaches. The next series will involve an accident-reenactment simulator (Figure 13) similar to
what the National Transportation Safety Board (NTSB) uses. Specifically, the NTSB uses such a
simulator to understand what happened and then to learn how to prevent such an event from
happening again. This approach aligns well with the ongoing effort in this work.
Figure 13
Deviations
There were no substantial deviations from the original proposal. The total budget was $5,245.00,
of which $5,243.66 was expended, coming in at $1.34 under budget. The expenditures fell into
two categories. Under Services, the budgeted flight time was 20 hours for $4,400.00; the actual
values were, respectively, 18.1 hours and $4,316.00, or $84.00 under budget. Under Parts, the
budgeted amount was $845.00; the actual value was $927.66, or $82.66 over budget.