
Autonomous People Mover: Adding Sensors
Nathan Biviano, Madeleine Daigneau, James Danko, Connor Goss, Austin Hintz, Sam Kuhr,
Benjamin Tarloff, John Kaemmerlen, Raymond Ptucha
Rochester Institute of Technology
ABSTRACT
It is no longer a question of whether self-driving cars will transform
society, but when. By the mid-2020s, most agencies predict
autonomous driving will transform the automobile market. These
cars will make our roadways safer, our environment cleaner, our
roads less congested, and our lifestyles more efficient. Because
of safety, manufacturing costs, and limitations of current
technology, autonomous off-road vehicles, such as people
movers, will probably emerge in large industrial complexes
before autonomous high-speed highway vehicles. A three-year
multidisciplinary capstone project is underway that will
transform a golf cart into an autonomous people mover. In year
one, the cart was converted to remote control. In years two and
three, tightly integrated but independent multidisciplinary senior
design teams will enable the cart to drive autonomously in
controlled and natural conditions, respectively. The cart will
include advanced sensing and vision technologies for navigation,
and use innovative audio and vision technologies to communicate
with passengers. This paper will describe several factors to
consider when forming capstone engineering student design
teams in academia, and then discuss specific issues relative to this
project. Detailed design considerations and safety issues, along
with the necessary steps and parts, are covered. The paper will
conclude with the year three plans to convert the golf cart into a
fully autonomous people mover and beyond.
1. INTRODUCTION
Autonomous automobiles offer passengers the ability to sit back
and watch a movie instead of white-knuckling the steering wheel
during today's hectic commutes. Can you imagine a world where
texting and driving is encouraged, there are no drunken drivers, no worries
about elderly drivers who left their glasses at home, and
a friendly wave to the driver next to you who is shaving, eating
breakfast, and talking with his boss on the way to work?
Automobiles will be able to drive closer to one another, making
commutes faster, our roads less congested, our environment
cleaner, and our lives more efficient [1-4]. Most importantly, car
transportation will be safer. Honda, Nissan, Mercedes-Benz,
Volkswagen, Tesla, and five other top auto manufacturers have
already been given permission to test their autonomous cars in the
state of California.
Information Handling Services (IHS) Automotive, the world’s
top automotive industry forecaster, estimates that in the 2020’s
the autonomous vehicle will begin to take over the market. IHS
Automotive predicts that the number of autonomous cars will
grow from 230,000 in the year 2025 to 11.8 million by the year
2030 to 54 million by the year 2035, to virtually all cars and trucks
by the year 2050 [1]. In 2014, Induct Technology started
experimenting with the world's first commercially available
driverless vehicle, an open-air minibus for college and corporate
campuses that tops out at 12 mph. Google's autonomous cars
have logged over 2 million miles, with a public offering
anticipated by 2020. Self-driving 18-wheelers are already being
tested by German automobile company Daimler on the roads of
Nevada.
The U.S. economy could save up to $40B/year for each 10% of
American cars that are converted to full autonomy [4]. Despite all
the anticipation, there are some big questions that need to be
resolved. Will these cars be able to speed in case of an
emergency? When an object is blocking a single-lane highway,
will they be able to go around it? When an accident is unavoidable,
will a car choose to hit the car on the left or the food truck parked on
the shoulder? Will lawyers sue the family in the car, the auto
manufacturer, the insurance company, or perhaps the software
company that programmed the faulty logic? Despite these questions,
what started with cruise control is now driver assist, will develop
into highway autopilot, and will finally reach full autonomy. From the
U.S. Department of Transportation (USDOT), to the National
Science Foundation (NSF), to large private grants, big money is
exchanging hands to bring about this transformation. In the past
few years, autonomous vehicle research at both the private and
university level has experienced a resurgence. As evidence, the
USDOT Moving Ahead for Progress in the 21st Century Act
provided $72M in each of 2013 and 2014. In 2013, the USDOT
Research and Innovative Technology Administration
appropriated $63M to 33 University Transportation Centers [7].
In 2015, Nokia earmarked $100M for connected vehicles, Toyota
appropriated $50M for intelligent car research, and even Apple
unofficially entered the autonomous car race. Several universities
have instituted autonomous driving projects, most inspired by the
DARPA Grand Challenges from 10 years ago [5,6]. Despite an
initial boom, expensive sensors and the need for large corporate
sponsorship forced most universities to discontinue
research. Sensors suitable for high-speed driving remain very expensive,
but sensor costs for low-speed driving have since decreased dramatically. The
algorithms involved, including localization, obstacle avoidance, and
navigation, are very similar for high- and low-speed
driving. Today, autonomous car research is hotter than ever. The
University of Michigan has created Mcity, an entire simulated
city devoted to autonomous car driving under diverse driving
conditions. Virginia Tech has received millions of dollars for
its connected-vehicle infrastructure research, and Carnegie
Mellon University wants to double the size of its autonomous
robotics program.
To enhance the educational experience, many universities include
a capstone project to integrate engineering theory and processes
in a multi-disciplinary setting. Following a sound engineering
design process, project teams start with customer needs,
determine specifications, evaluate solutions, select methods and
components, and then design, build, and test a prototype which
meets these requirements. The goals of these capstone projects
are to: 1) analyze customer requirements and engineering
specifications; 2) develop creative solutions to tough problems
using theory from a broad range of multidisciplinary courses; 3)
obtain first-hand experience with the engineering design process;
4) document the necessary engineering steps from product
conception to product delivery; 5) learn how to communicate
technical content in both oral and written form; 6) gain practice
working in a team environment; 7) understand the rigors of
developing and following a detailed budget and schedule; 8)
understand how to break complex problems down into
manageable components; and 9) discover how to make effective
design decisions that maximize customer satisfaction.
This paper describes how one university has used the
multidisciplinary capstone project design process to enter the
field of autonomous driving. The engineering student design team
is tasked to convert a low speed golf cart into an autonomous
people mover. This paper describes the year-two efforts to
convert a golf cart into an autonomous vehicle under controlled
conditions using state-of-the-art sensors and algorithms. Year
three efforts will teach the car how to drive autonomously in
natural conditions.
2. BACKGROUND
The Kate Gleason College of Engineering at Rochester Institute
of Technology includes a two-semester multidisciplinary senior
design project. This project-based course requires students from
multiple engineering disciplines to work on teams, each tasked
with building a project that meets customer requirements. The
team must create specifications and address issues and risks to
ultimately deliver a tangible product to the customer. A faculty
guide ensures that the team practices sound engineering
methodologies, and one or more faculty champions with a
vested interest in the project provide technical assistance. The
vested interest in the project provide technical assistance. The
team must identify and recruit other technical consultants as
necessary from both academia and industry. Each project has a
sponsor or customer, who is the ultimate recipient of the final
product. The team must extract meaningful specifications from
the customer, and then ensure customer satisfaction throughout
the process as unforeseen problems arise.
The autonomous people mover project is a three-year
multidisciplinary senior design (MSD) project. Projects from
other universities [8-11] have attempted to tackle similar
problems. The project in this paper leverages learnings from
others along with improvements in technology. Each MSD team
participates for two semesters with one semester of overlap. The
phase I team successfully converted the vehicle to remote control.
The phase II team has been making the electronics that control the
vehicle more robust and has been adding sensors to the vehicle.
A phase III team has just begun. As the project approaches its
1.5-year midpoint, this paper addresses several key issues in
adding sensors and in transitioning knowledge from one team to the
next.
For each team, the first semester is a planning and design phase
and the second semester is a build and demonstration phase. The
design portion of MSD consists of designating team roles,
brainstorming ideas, selecting concepts, and performing detailed system,
subsystem, and component design. The first semester is split into
multiple three-week cycles with documentation and design
checkpoints along the way. For the build phase, the detailed
design created in the first half of MSD is fabricated and evaluated.
During the build phase, there are regularly scheduled reviews
with the customer to show the current status of the product and
demonstrate functionality that has been achieved. The intent of
MSD is to give graduating seniors real-world design experience
using a structured design process, complementing their required
internships and providing valuable insight from seasoned engineers.
The efforts of the phase I team have recently been presented [18].
The phase II senior design team for this project consists of
three electrical engineers, two computer engineers, one
mechanical engineer and one industrial engineer. This team is
working with an additional group of three computer engineers.
This group of seniors is responsible for integrating sensors onto
the cart and for writing algorithms that enable autonomous
driving of the cart under controlled conditions. This includes
writing algorithms that use the cart-mounted sensors to
navigate a pre-planned route while avoiding obstacles, as well as managing the
electrical interfaces, user interfaces, and control systems.
Year three efforts will concentrate on improving localization,
navigation, obstacle avoidance and overall functionality of the
cart. Localization, or the exact determination of vehicle location
and pose, will use a combination of extended Kalman filters and
particle filters applied to GPS measurements and other sensor readings
[12,13]. The year three effort will include object classification
and tracking. Object classification will use advanced machine
learning, borrowing concepts from the fields of manifold
learning, sparse representations [14], as well as deep learning
[15]. Tracking will use locally developed methods that borrow
concepts from state-of-the-art trackers such as Tracking-Learning-Detection [16] and Multiple Instance Learning [17].
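As a rough illustration of the planned localization approach, the following sketch shows a minimal 2D particle filter that fuses dead-reckoned motion with noisy GPS fixes. The state, noise magnitudes, and resampling scheme are assumptions chosen for illustration and do not represent the team's eventual implementation.

// Minimal 2D particle-filter sketch for localization (illustrative only).
// State: (x, y) position in meters in a local frame. Motion: dead-reckoned
// displacement plus noise. Measurement: a GPS fix converted to local meters.
#include <cmath>
#include <random>
#include <vector>

struct Particle { double x, y, w; };

class ParticleFilter {
public:
  ParticleFilter(size_t n, double x0, double y0) : rng_(42) {
    std::normal_distribution<double> init(0.0, 5.0);   // assumed 5 m initial spread
    for (size_t i = 0; i < n; ++i)
      particles_.push_back({x0 + init(rng_), y0 + init(rng_), 1.0 / n});
  }

  // Predict: shift every particle by the odometry estimate plus motion noise.
  void predict(double dx, double dy) {
    std::normal_distribution<double> noise(0.0, 0.2);  // assumed 0.2 m motion noise
    for (auto& p : particles_) { p.x += dx + noise(rng_); p.y += dy + noise(rng_); }
  }

  // Update: re-weight particles by the Gaussian likelihood of the GPS fix,
  // then resample so the cloud concentrates on likely poses.
  void update(double gpsX, double gpsY, double sigma = 3.0) {  // assumed 3 m GPS noise
    double total = 0.0;
    for (auto& p : particles_) {
      double ex = p.x - gpsX, ey = p.y - gpsY;
      p.w *= std::exp(-(ex * ex + ey * ey) / (2.0 * sigma * sigma));
      total += p.w;
    }
    if (total <= 0.0) total = 1e-12;                    // guard against underflow
    for (auto& p : particles_) p.w /= total;
    resample();
  }

  // Pose estimate: weighted mean of the particle cloud.
  void estimate(double& x, double& y) const {
    x = y = 0.0;
    for (const auto& p : particles_) { x += p.w * p.x; y += p.w * p.y; }
  }

private:
  void resample() {                                     // low-variance resampling
    size_t n = particles_.size();
    std::vector<Particle> next;
    std::uniform_real_distribution<double> u(0.0, 1.0 / n);
    double r = u(rng_), c = particles_[0].w;
    size_t i = 0;
    for (size_t m = 0; m < n; ++m) {
      double target = r + m * (1.0 / n);
      while (target > c && i + 1 < n) c += particles_[++i].w;
      Particle p = particles_[i];
      p.w = 1.0 / n;
      next.push_back(p);
    }
    particles_ = std::move(next);
  }

  std::vector<Particle> particles_;
  std::mt19937 rng_;
};

An extended Kalman filter would replace the particle set with a single Gaussian state estimate and covariance, trading robustness to multi-modal ambiguity for lower computational cost.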
3. DESIGN
Once the systems design was completed and agreed upon, a
systems architecture was defined that consisted of the following
subsystems: throttle, braking, steering, path and obstacle
detection, and path planning.
3.1 Controls
The modification of the golf cart to make it remote controlled was
described in [18]. The brake, throttle, and steering systems are
summarized here for completeness.
The cart's braking system was modified so that the brakes work
in both manual and autonomous modes. The modification uses a
linear actuator attached to the brake pedal via a steel cable that
can pull the pedal down. This approach was chosen so that, in
case of an emergency, a passenger can still press the brake pedal
to manually stop the vehicle. To control the brake actuator, a
Sabertooth R/C Regenerative Dual Channel Motor Controller is
driven by an Arduino Due microcontroller via a 1 ms to 2 ms
pulse width modulation signal: a 2 ms pulse retracts the actuator
(applying the brake), a 1.5 ms pulse holds the actuator in place,
and a 1 ms pulse extends the actuator.
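A minimal Arduino sketch along these lines is shown below, assuming the Sabertooth is wired for standard hobby R/C input and driven through the Servo library, which generates the 1-2 ms pulses described above. The pin number is a placeholder, and the team's actual firmware may differ.

// Minimal Arduino Due sketch for driving the brake actuator through the
// Sabertooth motor controller in R/C mode (illustrative; pin choice assumed).
#include <Servo.h>

Servo brakeChannel;                 // Servo library emits the 1-2 ms R/C pulses
const int BRAKE_PIN = 9;            // assumed PWM-capable pin

void setup() {
  brakeChannel.attach(BRAKE_PIN, 1000, 2000);   // limit pulses to 1000-2000 us
  brakeChannel.writeMicroseconds(1500);         // 1.5 ms: hold actuator in place
}

void applyBrake()   { brakeChannel.writeMicroseconds(2000); }  // 2 ms: retract, brake on
void releaseBrake() { brakeChannel.writeMicroseconds(1000); }  // 1 ms: extend, brake off
void holdBrake()    { brakeChannel.writeMicroseconds(1500); }  // 1.5 ms: no motion

void loop() {
  // In the real system the command would come from the autonomy software;
  // here the actuator is simply cycled as a bench smoke test.
  applyBrake();   delay(2000);
  releaseBrake(); delay(2000);
}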
The throttle of the cart is controlled via an analog signal fed
to the stock motor controller box. This analog signal is controlled
by a 5 kΩ potentiometer connected to the gas pedal. The input to
the controller of the cart was redirected from the throttle
potentiometer to the analog-to-digital converter (ADC) on the
Arduino Due. A dummy 5 kΩ resistor was added to emulate the
throttle potentiometer in case the power and
ground to the potentiometer are disconnected. The voltage
required to move the cart ranges from 0 V to 3.3 V, where 3.3 V
corresponds to the cart's top speed of about 12 mph. The voltage applied to
the stock motor controller box is proportional to the speed of the
cart.
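The sketch below illustrates one way such a 0-3.3 V throttle command could be generated from the Arduino Due, assuming a PWM output smoothed by an RC low-pass filter feeds the stock controller. The pin, the filtering circuit, and the linear speed-to-voltage mapping are assumptions, not the documented design.

// Illustrative Arduino Due throttle sketch. Assumes the analog throttle
// voltage for the stock controller is produced by low-pass filtering a PWM
// output; the team's actual interface circuit may differ.
const int THROTTLE_PIN = 8;          // assumed PWM pin feeding an RC filter
const float MAX_SPEED_MPH = 12.0;    // 3.3 V corresponds to ~12 mph per the text

void setup() {
  analogWriteResolution(12);         // use 12-bit duty-cycle resolution on the Due
  pinMode(THROTTLE_PIN, OUTPUT);
  analogWrite(THROTTLE_PIN, 0);      // start with zero throttle
}

// Map a requested speed (mph) to a 0-3.3 V average output voltage.
void setSpeed(float mph) {
  mph = constrain(mph, 0.0, MAX_SPEED_MPH);
  int duty = (int)(mph / MAX_SPEED_MPH * 4095.0);   // 0 counts = 0 V, 4095 = 3.3 V
  analogWrite(THROTTLE_PIN, duty);
}

void loop() {
  setSpeed(5.0);                     // request roughly 5 mph as a smoke test
  delay(100);
}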
The steering system was augmented with an automotive grade
WickedBilt electric power steering system. This system allows
passenger or motor control of the steering column. Differential
signals generated by the Arduino Due microcontroller feed the
WickedBilt controller box for left and right steering. In addition
to controlling the steering, the controller box provides torque feedback,
which indicates whether a passenger is attempting to turn the steering
wheel. A potentiometer is connected to the steering column via a
drive chain to determine steering position for interactive control
code. The steering control code for future teams will rely on two
primary variables from the autonomous drive logic: desired
steering angle and urgency of the request. The urgency of the
request will be directly related to rotational velocity thresholds of
the steering input and the anticipated lateral acceleration
experienced as a result of the steering input.
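As an illustration of how the urgency value could bound steering behavior, the sketch below slew-rate-limits the commanded angle, with the allowed rate scaled by urgency. The rate limit and the interface are assumptions, since the actual control law is deferred to future teams.

// Illustrative steering-rate limiter (not the team's implementation). The
// autonomy logic supplies a desired steering angle (degrees) and an urgency
// value in [0, 1]; urgency scales how quickly the commanded angle may change,
// bounding the lateral acceleration induced by the steering input.
const float MAX_RATE_DEG_PER_S = 90.0;   // assumed full-urgency steering rate

float commandedAngle = 0.0;              // angle currently sent to the steering motor

// Called periodically (dt in seconds) with the latest request.
float updateSteering(float desiredAngle, float urgency, float dt) {
  if (urgency < 0.05) urgency = 0.05;                  // never allow a zero rate
  if (urgency > 1.0)  urgency = 1.0;
  float maxStep = MAX_RATE_DEG_PER_S * urgency * dt;   // largest change this cycle
  float error = desiredAngle - commandedAngle;
  if (error > maxStep)       commandedAngle += maxStep;
  else if (error < -maxStep) commandedAngle -= maxStep;
  else                       commandedAngle = desiredAngle;
  return commandedAngle;     // value that would be sent to the WickedBilt controller
}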
To establish autonomy, the cart will need to be able to navigate
around its surroundings. To navigate, multiple sensors were
added to the cart. Each sensor was chosen to cover a different
range, ensuring that the cart's vision is as reliable as
possible. All of the sensors chosen had to meet IP67 standards
since the cart will be operated outside in all weather
conditions. The main sensor that was chosen was the Velodyne
VLP-16 PUCK. The Velodyne PUCK is a LiDAR that is used
for real-time 3D distance measurement. The PUCK supports 16
channels and can measure 300,000 points per
second. The VLP-16 has a range of 100m with an accuracy of
3cm, a horizontal field of view of 360 degrees, and a vertical field
of view of 30 degrees. This sensor will be primarily used for two
main functions: obstacle detection and path planning.
In order to perform more robust obstacle detection, two
Hikvision DS-2CD2032-I 3 MP high-definition IR cameras
are mounted as a stereo pair on the front of the cart. These cameras augment the
LiDAR, enabling more accurate object identification, and help
distinguish paths, roads, and grass using color information. In low light
or night driving, built-in IR illumination is automatically
activated.
To more reliably detect objects directly in front of the cart, three
MB7001 LV-MaxSonar-WR1 ultrasonic sensors were added to
the front. Ultrasonic sensors use sound instead of light to measure
distance. This is important because LiDAR cannot detect
windows or other non-reflective surfaces. To read the
ultrasonic sensors, code provided by MaxBotix was used to
convert their analog outputs into usable distance measurements on the Arduino.
The MB7001 ultrasonic sensors have a range of 6.45 m and, unlike
most ultrasonic sensors, their field of view is a narrow beam rather than a cone.
This allows the sensors to be placed closer to the ground
to detect objects such as curbs.
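A minimal version of that analog conversion is sketched below, assuming the common MaxSonar scaling of Vcc/1024 per centimeter so that one 10-bit ADC count corresponds to roughly one centimeter. The pin assignment and exact scale factor should be checked against the MB7001 datasheet; the team used MaxBotix-provided code rather than this sketch.

// Illustrative Arduino sketch for reading one MaxSonar ultrasonic sensor's
// analog output. Assumes a (Vcc / 1024)-per-cm analog scaling with a 10-bit
// ADC referenced to the same Vcc, so one ADC count ~ one centimeter.
const int SONAR_PIN = A0;        // assumed analog input for the center sensor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int counts = analogRead(SONAR_PIN);   // 10-bit reading, 0-1023
  int rangeCm = counts;                 // ~1 cm per count under the assumed scaling
  Serial.print("Range: ");
  Serial.print(rangeCm);
  Serial.println(" cm");
  delay(100);                           // sensor updates roughly every 100 ms
}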
To help give the cart a sense of directionality and the ability to
localize, an EM-506 GPS unit was added. The EM-506 communicates
using the National Marine Electronics Association (NMEA)
protocol. NMEA sentences are translated into
decimal coordinates on the Arduino Due microcontroller. Using
these coordinates, it is possible to find the distance between points
and the desired heading of the cart.
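The distance and heading computations reduce to standard great-circle formulas; the sketch below shows the haversine distance and initial bearing between two decimal-degree fixes and is illustrative rather than the team's code.

// Illustrative great-circle distance and initial bearing between two GPS
// fixes in decimal degrees (standard haversine math).
#include <cmath>

const double kEarthRadiusM = 6371000.0;
const double kPi = 3.14159265358979323846;
double toRad(double deg) { return deg * kPi / 180.0; }

// Distance in meters between (lat1, lon1) and (lat2, lon2).
double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
  double dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
             std::cos(toRad(lat1)) * std::cos(toRad(lat2)) *
             std::sin(dLon / 2) * std::sin(dLon / 2);
  return 2.0 * kEarthRadiusM * std::atan2(std::sqrt(a), std::sqrt(1.0 - a));
}

// Initial bearing in degrees (0 = north, 90 = east) from point 1 toward
// point 2, i.e. the desired heading of the cart toward the next waypoint.
double bearingDegrees(double lat1, double lon1, double lat2, double lon2) {
  double dLon = toRad(lon2 - lon1);
  double y = std::sin(dLon) * std::cos(toRad(lat2));
  double x = std::cos(toRad(lat1)) * std::sin(toRad(lat2)) -
             std::sin(toRad(lat1)) * std::cos(toRad(lat2)) * std::cos(dLon);
  double brg = std::atan2(y, x) * 180.0 / kPi;
  return std::fmod(brg + 360.0, 360.0);   // normalize to [0, 360)
}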
A Lilliput FA1011-NP/C 10.1 inch touch screen interface
provides constant feedback to passengers. The interface will
display a map of the cart's surroundings and show the direction
and heading of the cart as it moves about the campus.
Additionally, when used in debug mode, diagnostic information
of all sensors will be displayed continuously. Future teams may
use this touch screen interface along with voice recognition to
interact with passengers.
3.2 Software Design and ROS
The software controls for receiving and interpreting sensor data
and controlling the systems on the People Mover are designed
using Robot Operating System (ROS). While ROS is not an actual
operating system, it is a network of programs and data registers
used for the development of robotic control systems. A ROS
network is built on topics and nodes. A ROS topic is a publicly
accessible, named message channel that nodes publish to and
subscribe to, and a ROS node is a program that executes its
processing when new data arrives on the topics it subscribes to.
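A minimal roscpp node illustrating this publish/subscribe model is sketched below; the topic names, message types, and toy decision logic are placeholders, not the project's actual interfaces.

// Minimal roscpp node illustrating the topic/node model.
#include <ros/ros.h>
#include <std_msgs/Float32.h>

ros::Publisher throttlePub;

// Callback runs whenever a new range message arrives on the subscribed topic.
void rangeCallback(const std_msgs::Float32::ConstPtr& msg) {
  std_msgs::Float32 cmd;
  // Toy logic: cut the throttle when the measured range (meters) drops below 5 m.
  cmd.data = (msg->data < 5.0f) ? 0.0f : 1.0f;
  throttlePub.publish(cmd);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "example_node");
  ros::NodeHandle nh;
  throttlePub = nh.advertise<std_msgs::Float32>("throttle_cmd", 10);
  ros::Subscriber sub = nh.subscribe("front_range", 10, rangeCallback);
  ros::spin();                       // process callbacks until shutdown
  return 0;
}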
In this case, the ROS network receives and processes data from
the different types of sensors, builds an
environment map of the immediate surroundings of the people
mover, plans a safe path to drive using that environment map,
and finally sends the appropriate control signals to the hardware
systems of the People Mover. There is an extensive ROS support
community, and many open-source ROS packages already exist
that provide functionality the control system needs. The
Velodyne ROS package contains the ROS nodes necessary to
receive and format data from the Velodyne LiDAR VLP-16,
which is arguably the most important and the most complex
sensor to be installed on the People Mover. Using this ROS
package significantly cuts down on coding and debugging time
with the integration of the LiDAR. Additional supported
ROS packages are currently being investigated for use in the
People Mover ROS network, including the navigation package
for its widely used map building and path planning nodes.
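As a sketch of how a node in this network might consume the LiDAR data, the example below subscribes to the assembled point cloud that the Velodyne driver typically publishes as a sensor_msgs/PointCloud2 message; the topic name /velodyne_points is the driver's common default but is assumed here rather than taken from the project configuration.

// Sketch of consuming the Velodyne driver's output as a PointCloud2 stream.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr& cloud) {
  // width * height gives the number of returns in this scan.
  ROS_INFO("Received %u points from the VLP-16",
           cloud->width * cloud->height);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "lidar_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, cloudCallback);
  ros::spin();
  return 0;
}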
Figure 1: Complete People Mover Software System Flow Diagram
3.3 Processing
The cart required an additional processor to receive input from
all of the sensors, make control
decisions, and send control signals to the cart's other
subsystems. The processor chosen needed to meet several
requirements including power consumption and processing
power. The processor also had to include Ethernet and TCP/IP
functionality for both the LiDAR and the video cameras, USB
functionality for communication with the other subsystems, as
well as a video interface for future usage. These requirements
restricted the choice to either high-performance microcontrollers
or consumer-level desktop processors. The solution chosen was a
desktop PC due to the ease of integration and wealth of existing
software drivers for the chosen sensors.
The computer that was chosen was generously provided by the
Department of Computer Engineering at RIT. The computer uses
an AMD A10-7850K processor with a speed of 3.7 GHz and four
CPU cores. Graphics processing is included in the CPU using
AMD Radeon R7 Series integrated graphics. Additional hardware
includes a 120GB solid state drive, 16GB of memory, and
built-in Wi-Fi. The motherboard provides USB 3.0 functionality
as well as HDMI. For an operating system, Ubuntu 14.04 LTS
was installed on the computer, since it was the environment most
familiar to the team members and has excellent
support for ROS.
The only modification necessary was to replace the
power supply unit, as the original unit operated off of 120/250 V AC
power, not the 12 V or 48 V available on the cart. Several
solutions were identified, including a 12 V DC
ATX power supply, a 48 V DC ATX power supply, and a DC-AC
inverter. In the end, the 12 V DC ATX power supply was
chosen, as it was the most cost-effective and simplest option.
3.4 Power
The cart is powered by a 48 volt battery bank which is reduced to
12, 5, or 3.3 volts as needed. Between the sensors and the desktop,
202 watts are required at 12 volts, with the desktop requiring 145
watts. This works out to a current draw of 17.12 amps. The CUI
Inc. VFK600 Series 48V to 12V DC-DC converter provides 50
amps at 12 volts, more than enough to run the existing equipment
with room to expand in the future. The cart’s 48 volt battery bank
is estimated to have a battery life of 1.57 hours, or 94 minutes
under normal use with the current selection of sensors and
compute power.
Figure 2: Phase II Overall Power Diagram
3.5 Wiring
Each of the sensors added to the cart influences the need for
additional components. Figures 3-5 illustrate the connections and
wiring for communication as well as the distribution of power.
The 48 V input for the DC-DC converter comes from the existing
battery bank on the cart. The 12 V output of the converter is
distributed using five-connection terminal blocks and powers the
LiDAR interface box, the touch screen display, both HD cameras
at the front of the cart, the desktop at the rear of the cart, and the
Ethernet switch box. The Ethernet switch box relays the data from
the LiDAR's interface box and both HD cameras to the desktop
via Ethernet. The touch screen display has an
HDMI output that splits into HDMI and USB inputs to the
desktop. The three high-quality ultrasonic sensors and the GPS unit
are powered by a remote Arduino Uno's 5 V output, which is
distributed by a five-connection terminal block. The analog outputs
of the three ultrasonic sensors connect to the Arduino Uno's
analog inputs, and the third ultrasonic sensor has an additional
digital output that connects to one of the Arduino Uno's digital inputs.
The GPS unit's TX and RX lines connect to the Arduino Uno's
serial pins.
Figure 3: Phase II Overall Wiring Diagram
The placement of each sensor on the cart is shown in Figure 4.
The three ultrasonic sensors are at the front of the cart and are powered
by, and communicate with, the Arduino Uno at the front of the
cart. The Arduino Uno also communicates with the GPS unit's TX
and RX lines. The ultrasonic signals are collected by the Arduino Uno
at the front of the cart so that the analog signals do not have to travel the
length of the cart to the rear control box, where they could become
distorted during transmission.
(Figure 4 labels: GPS, LiDAR, stereo vision, control box, PC, E-stop, ultrasonic sensors)
Figure 4: Sensor Placement on Cart
The Arduino Uno then relays digital signals directly to the
desktop via USB. The 12 V output from the 48 V to 12 V DC-DC
converter is run to the five-connection terminal blocks at the front of
the cart for distribution, which saves wire and lowers cost. The
terminal block powers the HD cameras, LiDAR, touch screen
display, and Ethernet switch at the front of the cart, as well as the
desktop at the rear of the cart. The power wiring is routed along the right side
of the cart, while the left side of the cart houses the connections for
communication between the sensors and components and the
desktop.
Figure 5: Phase II Overall Wiring Layout
The people mover utilizes three Arduino Due microcontrollers at
the back of the cart. The three Arduinos are the main Arduino
Due, the throttle Arduino Due, and the steering Arduino Due,
each of which is connected to the others via digital IO lines.
All Arduinos communicate with the desktop via USB
connections.
Figure 6: Phase I & II Overlay Wiring Layout
The prototype printed circuit board from the Phase I team was
redesigned by D3 Engineering, an engineering company based in
Rochester, NY. In addition to updating the electronic controls, D3
designed a custom shield board that allowed each of the three
Arduino Dues to be plugged directly into the circuit board.
3.6 Mounting
The three primary mounting challenges for the
autonomous people mover are the Velodyne LiDAR, the PC, and
the stereo cameras. The Velodyne LiDAR is mounted to the roof
of the cart. This mount, shown in Figure 7, is elevated 4.5
inches above the roof so that the lowest of the LiDAR's 16 beams
(pointed 15° below horizontal) clears the edges of the roof.
Figure 7: Preliminary LiDAR Mount
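As a rough geometric check (not stated in the project documentation), a beam angled 15° below horizontal drops h = d·tan(15°) over a horizontal distance d, so the 4.5 inch rise keeps the lowest beam above roof level out to roughly d = 4.5/tan(15°) ≈ 17 inches from the sensor, which must be at least the distance from the LiDAR to the nearest roof edge.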
The PC is mounted in a hanging rectangular frame that lies just
below the primary control box of the cart, as shown in Figure 8.
It is mounted to the bottom of the same crossbars that the control
box sits on top of.
Figure 8: PC Mount
The stereo camera mount, shown in Figure 9, is an angular base
that sits on the bottom section of the roof frame just in front of the
driver and passenger seats. The angular base compensates for the
rake of the roof frame itself and ensures that the stereo cameras
are pointed directly forward. The centers of the two camera bases
are 100 mm apart, a baseline well suited for stereo calibration.
Figure 9: Stereo Camera Mount
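For reference (an illustrative calculation, not from the paper), depth from stereo disparity follows Z = f·B/d, where B is the 100 mm baseline, f the focal length in pixels, and d the disparity in pixels; assuming, say, f ≈ 1400 pixels for these cameras, a 10-pixel disparity corresponds to roughly 14 m of depth, which gives a sense of the stereo pair's useful ranging distance.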
4. DISCUSSION AND FUTURE WORK
This paper summarized key findings from a two-semester
multidisciplinary capstone project, where the particular capstone
project was the second of three consecutive one year long
projects. Each capstone project will build upon the learnings,
successes, and failures of the previous project(s). This second
phase will result in an autonomous driving platform by making
the necessary mechanical and electrical modifications to the cart.
Although sound engineering principles were followed, the
execution of the project to date has had its fair share of problems.
For example, most tasks took longer than expected and many
small time slippages often turned into larger schedule
problems. Further, it proved difficult to understand, and in turn
integrate with, the Phase I systems, which delayed the
implementation of the sensors. The team learned the value of
methodical troubleshooting, noting that even the simplest tasks
can be difficult. In addition, the team learned how to split up the
work amongst one another and work together to get as much done
as possible in the short time frame. Because the team was
building upon the Phase I system while developing a foundation
for autonomy upon which future teams will expand, there
was additional pressure to ensure perfection and thorough
documentation throughout each step in the process.
During the next phase of this multi-year project, the cart will be
able to drive autonomously under natural conditions. This Phase
III team is currently researching optimal algorithms to perform
navigation and obstacle avoidance. This phase will later develop
more complicated control systems to allow the cart to navigate
anywhere on campus, track surrounding objects, and make
decisions as to the quickest route through pedestrian-filled
walkways.
5. CONCLUSION
Multidisciplinary senior design capstone projects provide
students with a unique opportunity to experience all aspects of the
product life cycle including customer interaction, customer
requirements, industry research, product cost, product risk,
schedule management, and product deployment. To provide
sound experiential learning for senior engineering students and to
facilitate future autonomous driving research, an autonomous
senior design project has been created. Self-controlled vehicles
are important to the automotive industry due to the increased
safety benefits of removing the human factor from driving. This
technology will help to make commute times shorter and decrease
the likelihood of accidents. The second step in designing an
unmanned vehicle was to take an electric golf-cart, add sensors
and program it to navigate a preplanned route. The team added a
state-of-the-art LiDAR, three high quality ultrasonic sensors, two
infrared (IR) cameras, a touch screen, and a desktop processing
computer to the cart. The processing computer will take in data
from all the sensors, cameras, and the touch screen to analyze the
data and send signals to the Arduinos, which then control the
brake, throttle, and steering, allowing the cart to drive on its own.
The power system required the generation of 12V, 5V, and 3.3V
to power the electronics, sensors and other systems. The team
gained real-world experience on how to satisfy customer needs
while staying within budget and on schedule. This project laid the
foundation of autonomy for the next senior design team to expand
upon and make a truly autonomous people mover. The final
autonomous cart will also serve as a multidisciplinary platform
for further research into all areas of autonomous vehicles.
6. REFERENCES
[1] IHS Automotive, “Emerging Technologies: Autonomous Cars - Not If, But When,” IHS Automotive study, http://press.ihs.com/press-release/automotive/self-driving-carsmoving-industrys-drivers-seat, Jan 2, 2014.
[2] Tannert, Chuck. “Will You Ever be Able to Afford a Self-Driving Car?,” www.fastcompany.com, 2014.
[3] Petri, Tom, US Chairman of the Subcommittee on Highways
and Transit- Hearing on “How Autonomous Vehicles will Shape
the Future of Surface Transportation,” Nov 19, 2013.
[4] 2nd Annual William P. Eno Paper, “Preparing a Nation for Autonomous Vehicles,” 2013.
[5] Thrun, Sebastian, “Toward Robotic Cars”, Communications
of the ACM, Vol. 53 No. 4, pp. 99-106, 2010.
[6] Levinson, Jesse, et al. "Towards fully autonomous driving:
Systems and algorithms." Intelligent Vehicles Symposium (IV),
2011 IEEE. IEEE, 2011.
[7] U.S. Department of Transportation Awards $63 Million in University Transportation Center Grants, http://www.rita.dot.gov/utc/press_releases/utc01_13, 2013.
[8] Josh Hicks, et al. Senior Engineering Design Report: GPS
Autonomous Drive-By-Wire Go-Kart.
Department of Electrical Engineering, Saginaw Valley State
University, 2006.
[9]"Semi-Autonomy for Unmanned Ground Vehicles." MIT
TechTV – Collection (3 Videos). MIT, 2012. Web. 28 Feb. 2015.
[10]"SMART News - Permanent Secretary Hails 'fantastic'
Driverless Car Ride." SMART News - Permanent Secretary Hails
'fantastic' Driverless Car Ride. SMART Singapore-MIT Alliance
for Research and Technology, 2013. Web. 28 Feb. 2015.
[11]"Team Case : Vehicles." Team Case : Vehicles. Case Western
University, 2007. Web. 28 Feb. 2015.
[12] Urmson, Chris et al., “Autonomous Driving in Urban
Environments: Boss and the Urban Challenge,” Journal of Field
Robotics, Volume 25, Issue 8, 2008.
[13] Ershen, W., Z. Weiping, and C. Ming. “Research on
Improving Accuracy of GPS Positioning Based on Particle
Filter,” in IEEE 8th Conference on Industrial Electronics and
Applications (ICIEA 2013), 2013.
[14] R.W. Ptucha, A. Savakis, “LGE-KSVD: Robust Sparse
Representation Classification”, IEEE Transactions on Image
Processing, Volume 23, Issue 4, 2014.
[15] G. E. Hinton, S. Osindero, and T. Yee-Whye, "A fast
learning algorithm for deep belief nets," Neural Computation,
vol. 18, pp. 1527-54, 07/ 2006.
[16] Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning:
Bootstrapping binary classifiers by structural constraints,”
CVPR, 2010.
[17] B. Babenko, M.-H. Yang and S. Belongie, "Robust Object
Tracking with Online Multiple Instance Learning," IEEE Trans.
PAMI, 2011.
[18] K. Knowles, N. Bovee, P. Gelose, D. Le, K. Martin, M.
Pressman, J. Zimmerman, R. Lux, R. Ptucha, “Autonomous
People Mover,” Proceedings of American Society for
Engineering Education, Syracuse, NY, 2015.