B11 - 193 - University of Pittsburgh

Disclaimer - This paper partially fulfills a writing requirement for first year (freshman) engineering students at the University of
Pittsburgh Swanson School of Engineering. This paper is a student, not a professional, paper. This paper is based on publicly
available information and may not provide complete analyses of all relevant data. If this paper is used for any purpose other than
these authors’ partial fulfillment of a writing requirement for first year (freshman) engineering students at the University of
Pittsburgh Swanson School of Engineering, the user does so at his or her own risk.
THE IMPORTANCE AND FUTURE OF LIDAR TECHNOLOGY IN
AUTONOMOUS VEHICLES
Garrett Hagen ([email protected], Vidic 2:00), Daniel Mingey ([email protected], Vidic 2:00)
Abstract- Once considered a futuristic fantasy, the
autonomous vehicle is on the brink of emerging into
mainstream society and has become a centerpiece for
technological innovation. This world-changing innovation can
be credited to the recent breakthroughs in sensing and
processing technology, specifically LiDAR detection. LiDAR
is an acronym for Light Detection and Ranging, and is a type
of detection used to precisely map out large surrounding
areas. This paper will explore the importance of LiDAR
sensing in autonomous vehicles and why it is essential to the
precise, real-time depiction of the vehicle's surroundings, which is the first step in achieving full autonomy. Due to LiDAR's reliability and proficiency, its implementation on a vehicle is crucial for it to be considered fully autonomous. As
LiDAR technology continues to rapidly develop, these sensors
are becoming much more affordable, making them an
attractive piece of equipment to all vehicle manufacturers.
With driverless cars continuing to remain in the spotlight as a groundbreaking innovation that is sweeping the world, the
implementation of LiDAR systems will become a universal
standard due to their role in creating the safest vehicle
possible. Once society can achieve a fully-autonomous future
with LiDAR-equipped vehicles, quality of life will increase due
to a safer, less strenuous, and more environmentally conscious
transportation system. With this technology being refined every day, many advancements are expected in a short time, such as the transition to completely solid-state LiDAR.
since the pulse traveled to the object and back. LiDAR-UK,
an informative website dedicated to LiDAR research, explains
that LiDAR instruments fire rapid pulses of laser light at a
surface, some at up to 150,000 pulses per second [1]. These
pulses reflect off surrounding objects and are processed to provide detailed information on the x, y, z coordinates and reflectivity of a specific point on the object.
This process is repeated in rapid succession, often using
multiple lasers at the same time to gather enough data points
to create a detailed map of the surrounding environment.
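As an illustration of the timing principle described above, the following short Python sketch computes the one-way distance for a single pulse. The 200-nanosecond round trip is an illustrative value, not a figure from the sources:

```python
# Speed of light in meters per second.
SPEED_OF_LIGHT = 299_792_458.0

def pulse_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface for one laser pulse.

    The pulse travels to the object and back, so the one-way
    distance is half of (elapsed time * speed of light).
    """
    return round_trip_time_s * SPEED_OF_LIGHT / 2.0

# A pulse that returns after 200 nanoseconds reflected off an
# object roughly 30 meters away.
print(round(pulse_distance(200e-9), 2))  # 29.98
```

At 150,000 pulses per second per laser, this calculation is repeated continuously to build up the point data described above.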
THE HISTORY OF LIDAR
Developed in the 1960s, single-laser mechanical LiDAR was used primarily by NASA for
atmospheric research and space exploration. Frost and
Sullivan, a reputable market research firm, recalls that,
“LiDAR developed slowly for 30 years, but by the mid-1990s,
laser scanner manufacturers were delivering LiDAR sensors
capable of 2,000 to 25,000 pulses per second to commercial
customers for topographic mapping applications” [2]. They
also share the story of how the groundbreaking innovation of
Solid-State Hybrid LiDAR (SH LiDAR) was developed in
2006 by the current founder and CEO of Velodyne, David
Hall. The beginnings of SH LiDAR, and consequently the autonomous car industry, happened in 2006 at the Grand Challenge Races of Robotic Vehicles. For the first time,
multiple lasers were bundled into one unit, with an array of
detectors to gather abundant data for navigation purposes,
working in real-time from a moving vehicle [2]. The
implementation of solid-state technology into LiDAR allowed
for multiple lasers, which was previously near-impossible with
an all-mechanical system. The solid-state design allowed the
entire sensor to be compacted into a small part, which can be
spun quickly to retrieve a 360-degree horizontal view and a 30-degree vertical view of the environment, creating 1.2 million
data points per second. The 3D maps processed from these
multi-laser sensors are extremely accurate and can be created
in real time, unlike previous LiDAR systems, whose single
laser mechanisms are incapable of collecting sufficient
amounts of data quickly enough to use as sensory input for a
moving vehicle. This breakthrough innovation was patented by Velodyne and developed into their flagship 64-laser rotating SH LiDAR sensor, which is used in most autonomous vehicles on the road today.
Key Words—Autonomous Driving, Computer Engineering, LiDAR, Self-Driving Vehicles, Solid-State LiDAR, Tesla, UBER
OVERVIEW OF LIDAR
Light Detection and Ranging (LiDAR) is a type of advanced
sensing technology that surveys a surrounding environment
with great precision and resolution. In a general sense, the
principle of LiDAR is to send a laser pulse at a target and
measure the time it takes for the pulse to deflect off of the
target and return to the sensor. The distance between the sensor
and a point on the surrounding object is then calculated by
taking the time elapsed from the pulse’s departure to return and
multiplying it by the speed of light and then dividing by two,
University of Pittsburgh Swanson School of Engineering
3.31.2017
1
Garrett Hagen
Daniel Mingey
[1]. Although the mirror is rotating and tilting rapidly, the
speed of light moves at nearly 300 million meters per second
so each laser pulse (a group of photons) is able to travel to the
surrounding object and back before the mirror has physically
moved. When the return pulse strikes the mirror, it is redirected
to the receiving photodetector that is located right next to the
emitter. The type of photodetector typically used in SH LiDAR
is called a photodiode which is a semiconductor device that
converts the photons within the returning pulse to an electric
current when it absorbs them [2]. The current generated by the
photodiode is then sent to the vehicle's computer to be
interpreted.
What is a Multi-Laser LiDAR System?
When it is stated that a LiDAR sensor is a multi-laser or
multi-channel system, it is indicating that there are multiple
laser emitters and receivers being used simultaneously in the
same unit. For example, Velodyne’s SH LiDAR sensor has 64
lasers, which indicates that there are 64 individual laser
emitters, each firing hundreds of thousands of pulses per
second to be received by one of the 64 optical receivers. Each
individual laser emitter rapidly emits 600-1000 nanometer laser pulses, the wavelength range that is traditionally used in commercial application, as it can be easily absorbed by the human eye at low power.
Processing LiDAR Data
In order to create a detailed map of the surrounding
environment, the LiDAR sensor must detect millions of points
per second through multiple laser emitters and their respective
photodiode receivers. When it is said that the laser pulse
“returns” with information, what is actually meant is that the
photons within each return pulse are converted into an electric
current that can be computationally interpreted by an
algorithm. Using flight time and the speed of light to calculate
the exact distance the pulse traveled, the 3D coordinates of the
specific point on an object can be determined. The return
intensity of the pulse directly correlates to the strength of the
electric current that is generated and is used to determine the
reflectivity of the object's surface. A greater return
correlates to a more reflective surface and a weaker return
intensity indicates a less reflective surface. The reflectivity
measurement is very useful in determining objects such as road
signs and lane markings, or distinguishing the difference
between similarly shaped objects such as a tree and traffic light
pole.
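The conversion described in this subsection can be sketched in Python. The function below is an illustrative simplification, not Velodyne's actual processing pipeline: it assumes each return is tagged with the emitter's azimuth and elevation angles, and converts time of flight into an (x, y, z, reflectivity) point:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def return_to_point(round_trip_time_s, azimuth_deg, elevation_deg, intensity):
    """Convert one laser return into an (x, y, z, reflectivity) tuple.

    Distance comes from time of flight; the emitter's azimuth and
    elevation angles place the point in 3D space. Return intensity
    is kept alongside the coordinates as a reflectivity measure.
    """
    r = round_trip_time_s * SPEED_OF_LIGHT / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z, intensity)

# One return: ~15 m away, 45 degrees to the side, 10 degrees up.
x, y, z, refl = return_to_point(100e-9, 45.0, 10.0, 0.8)
```

Millions of such points per second, stitched together, form the 3D map the vehicle navigates by.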
BREAKDOWN OF A SOLID STATE
HYBRID LIDAR SENSOR
FIGURE 1 [2]
An illustrated diagram of a Solid State Hybrid LiDAR
System
Vehicle Application of Solid State Hybrid LiDAR
The current method of integrating SH LiDAR into an autonomous vehicle is to mount a rotating sensor on the roof of the car. Although many consider this aesthetically displeasing,
it provides minimal obstruction and allows for multiple
viewing points (MPV). Through advanced 3D software, the
vehicle’s computer is able to create vantage points from nearly
anywhere in the surrounding environment. Essentially, the
software acts much like a “change view” feature in a video
game. In a typical racing game, a player can race their car from
different viewpoints, such as overhead, behind the vehicle,
first person, reverse view, or nearly anywhere in the virtual
environment. MPV works in an extremely similar sense,
except the “virtual world” is actually the real world being
scanned in real time. Instead of having to manually change
view like a player would in a video game, the vehicle’s
computer simultaneously analyzes data from different
viewpoints to make more informed decisions.
With the extreme competition among autonomous car companies aiming to be the first to market, the reliable
Distribution of Laser Pulses
Shown in Fig. 1 is an example of the path a single laser pulse
takes in and out of a rotating LiDAR sensor. Each laser emitter
successively fires hundreds of thousands of laser pulses
upwards at a mirror that is simultaneously rotating and tilting
in order to scatter the pulses across the environment. The
tilting aspect of the mirror allows for the lasers to be scattered
in a 30-degree vertical range while the rotating aspect of the
mirror essentially spins this 30-degree range in a full circle to
create complete coverage of the surrounding environment.
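A rough sketch of the resulting scan pattern is shown below. The 16 evenly spaced elevation angles across the 30-degree vertical range are illustrative (the channel count is borrowed from the 16-channel Puck discussed later), and the one-degree azimuth step is an assumption, not a Velodyne specification:

```python
def scan_directions(n_azimuth=360, n_elevation=16, vertical_fov_deg=30.0):
    """Yield (azimuth, elevation) angles for one full revolution.

    The tilting mirror spreads pulses over a fixed vertical range
    (here 30 degrees, centered on the horizon) while rotation sweeps
    that vertical fan through a full 360-degree circle.
    """
    for i in range(n_azimuth):
        azimuth = i * 360.0 / n_azimuth
        for j in range(n_elevation):
            elevation = (-vertical_fov_deg / 2.0
                         + j * vertical_fov_deg / (n_elevation - 1))
            yield azimuth, elevation

dirs = list(scan_directions())
# 360 azimuth steps x 16 elevation steps = 5760 directions per revolution
```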
Harvesting the Returning Laser Pulses
Each laser pulse emitted consists of roughly one billion photons, of which approximately one thousand actually strike an object and reflect back to the receiving lens
overhead orientation of a rotating LiDAR sensor shown in Fig.
1 is most likely what the auto industry will rely on for the
coming years. With the rapidly developing Micro-Electro-Mechanical Systems (MEMS) LiDAR, which is explained in a
later section, it is very possible that the autonomous auto
industry will eventually ditch the overhead rotating sensor for
this cheaper, more aesthetically conscious LiDAR. As of now,
the current range limitations of MEMS LiDAR would impose
the need for multiple sensors to be distributed around the body
of the vehicle, which is highly impractical at this stage of autonomous vehicle development.
input data and the preloaded data sets in order to make decisions.
However, when a camera encounters a pixel orientation that
the computer’s software is not familiar with, the vehicle does
not know what to do. Essentially, this was a major factor in the fatality involving Tesla's Autopilot system.
LiDAR in Snow
One of the biggest challenges autonomous vehicles face is
battling the elements, especially snow. LiDAR plays a crucial
role in navigating through these conditions. Quartz Media, a technology-focused news site, interviewed Jim McBride, a technical leader for autonomous vehicles at Ford, who explained that when a laser pulse travels through rain or snow, part of it will hit a raindrop or snowflake while the rest will likely be diverted towards the ground; by listening to the echoes from the diverted lasers, the algorithm builds up a picture of the “ground plane” as a result.
He also explains to Quartz, “If you record not just the first
thing your laser hits, but subsequent things, including the last
thing, you can reconstruct a whole ground plane behind what
you’re seeing, and you can infer that a snowflake is a
snowflake” [3]. They also explain that the algorithm checks for
the persistence of a particular obstacle because a laser beam is
unlikely to hit a raindrop twice, so it can rule that raindrop out as an actual obstacle [3].
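The persistence check described above can be sketched as a simple filter. This toy Python version keeps only obstacles that reappear across scans; the grid size and scan data are invented for illustration and this is not Ford's actual algorithm:

```python
from collections import Counter

def persistent_obstacles(scans, min_hits=2, cell_size=0.5):
    """Keep only obstacles that appear in multiple successive scans.

    A laser is unlikely to hit the same raindrop or snowflake twice,
    so points that show up in just one scan are discarded as
    precipitation. Points are bucketed into coarse grid cells so
    small measurement jitter still counts as the same obstacle.
    """
    hits = Counter()
    for scan in scans:
        cells = {(round(x / cell_size), round(y / cell_size)) for x, y in scan}
        hits.update(cells)
    return {cell for cell, n in hits.items() if n >= min_hits}

# A wall near (10, 0) persists across scans; a snowflake near (3, 2)
# appears only once and is filtered out.
scans = [[(10.0, 0.0), (3.0, 2.0)], [(10.1, 0.05)], [(9.95, 0.0)]]
print(persistent_obstacles(scans))  # {(20, 0)}
```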
A common challenge faced by autonomous vehicles is
determining road position in heavy winter conditions. Despite
the fact that lane markings and other indications may be
invisible due to the snow, LiDAR and 3D mapping can assist
in generating a course of action by surveying the current
environment and comparing it to a previously generated 3D
map. For example, in heavy snow the vehicle would use
LiDAR to detect its distance from a surrounding object such as
a stop sign. It would then compare that distance measurement
to the distance measurement between said stop sign and the
lane marking in a 3D map that was previously generated during
clear conditions. By doing so, the vehicle knows where the
lane marking is, despite its current invisibility. What if the car fishtails or starts sliding out of control on a snow-covered
road? David Price, a mechanical engineer at UBER, explained
that these types of systems are actually safer and more reliable
for crash avoidance than a human would be in this scenario.
He explained that the system acts similarly to a standard
traction control assistance function in a vehicle, but relies on a
computer instead of a human to make split-second decisions
on turning and braking [4]. By utilizing algorithms similar to
those needed in detecting invisible lane markings, the vehicle
collects distance measurements through LiDAR sensing and
devises an appropriate plan needed to avoid sliding off of the
road or into an obstacle.
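The map-comparison idea behind the stop-sign example can be reduced to a deliberately one-dimensional sketch. Real systems solve this in full 3D; the function below only illustrates the subtraction at the heart of the comparison:

```python
def lane_marking_distance(measured_to_landmark: float,
                          map_landmark_to_lane: float) -> float:
    """Estimate distance to an invisible lane marking (1-D sketch).

    LiDAR measures the current distance to a visible landmark (e.g.
    a stop sign); a 3D map recorded in clear weather stores how far
    that landmark sits from the lane marking. Subtracting the two
    places the snow-covered marking relative to the vehicle.
    """
    return measured_to_landmark - map_landmark_to_lane

# Sign measured 12.5 m away; the map says the lane marking is 9.0 m
# nearer than the sign, so the marking is ~3.5 m from the vehicle.
print(lane_marking_distance(12.5, 9.0))  # 3.5
```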
The Ultimate Sensor
Since the emergence of the autonomous vehicle, companies have
relied heavily on LiDAR as a crucial component to the sensor
suite that is implemented into a vehicle. A standout quality that
separates LiDAR from its companion sensors is its ability to
“see” regardless of external lighting conditions. These types of
sensors that create their own source of light energy are called
active sensors [2]. In other words, the laser that a LiDAR
sensor emits is completely independent of external lighting
whereas humans and cameras need optimal external lighting in
order to perceive a clear, bright image. This offers an
unmatched reliability and consistency in data that is collected
from a LiDAR sensor in comparison to cameras that can easily
be fooled by “optical illusions” and unfamiliar lighting
situations.
LiDAR is Computationally Friendly
Because computer systems in autonomous vehicles are operated from a mobile setting where space and power supply are limited, sensing that requires minimal computation is essential. This is another area in which LiDAR excels. At
first, it may seem that collecting millions of surrounding point
coordinates and reflectivity measurements and stringing them
together to make a 3D map would be extremely
computationally demanding; however, this is not necessarily
the case. The cameras that are currently being used in most
autonomous vehicles produce very high resolution pictures
which contain millions of pixels per image. In order for the
vehicle’s computer to make sense of these images, it must
analyze each pixel and use extremely complex and refined
software to deduce useful information from the data it gathers.
This takes much more computational power than processing
LiDAR’s four measurements of x,y,z location and reflectivity.
In addition to the extreme computational abilities required
to breakdown a rapid stream of multi-million pixel images,
software must account for every situation that could ever arise
when driving in order to be completely camera dependent – a
nearly impossible task. A form of what is known as “machine learning” that is commonly found in driverless vehicles
involves pre-loading data sets that contain thousands of images
similar to what a vehicle might encounter in real life and the
vehicle’s computer tries to analyze pixel patterns between
What is Sensory Overlay and Reliability?
Perhaps the most important concept in regard to the sensor
roles in autonomous vehicles is sensory overlay. Simply put,
sensory overlay is having multiple sensors contributing to the
stream of data that is needed for the vehicle’s computer to
make crucial decisions in the event that one or more sensors
fails. A sensor's position in the priority ranking of data changes depending on the external conditions, and as displayed in Fig. 2, certain sensors are much more reliable than others in certain conditions and situations.
LiDAR is Necessary to Achieve Full Autonomy
The National Highway Traffic Safety Administration (NHTSA) separates autonomous vehicles into five levels, from level 1 (assistive features) to level 5 (completely driverless), as shown in Fig. 3.
FIGURE 3 [2]
NHTSA description for the five levels of autonomous
vehicles
What society has labeled the “driverless car” (e.g., Uber, Tesla, and Google) is currently operating at level three. These
vehicles are classified as level three due to the fact that they
still often require a human driver to intervene when something
goes wrong. Google and Uber believe that LiDAR is an
essential component in order to achieve level 5 autonomy
mainly due to its ability to provide data that is consistent and
is not susceptible to lighting illusions. Frost and Sullivan
elaborates on the topic by declaring, “For a robotic technology
to intervene and make an important decision such as to brake,
it has to be accurately informed by the 3D visualization system
in real-time. False positives, which are abundant in
photographic-based sensor system, are unacceptable. The
technology for level 3 and 4 requires brake functioning,
steering, speed, and all other controls in place that are needed
for autonomy” [2].
FIGURE 2 [2]
Sensor strengths in certain conditions.
LiDAR thrives in many scenarios such as poor lighting,
long-range object detection, sensitivity to light, and 3D object
detection. Because of this wide array of strengths, LiDAR is
the lead sensor in many situations, but it can also serve as a
reliable backup in most scenarios if need be. In the event that
two sensors’ input portray conflicting data, the input from the
lead sensor is usually prioritized. However, if LiDAR detects
something that the lead sensor does not, the vehicle will
usually play it safe and acknowledge the LiDAR data, as it is
not fooled by lighting illusions. It is always better to react to a
false positive and brake when unneeded, rather than to ignore
it and potentially crash.
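That conservative arbitration rule can be stated in one line of code. This is a toy illustration of the “play it safe” policy described above, not any manufacturer's actual fusion logic:

```python
def should_brake(lead_sensor_obstacle: bool, lidar_obstacle: bool) -> bool:
    """Conservative arbitration between the lead sensor and LiDAR.

    If the sensors disagree, the vehicle errs on the side of a
    false positive: braking when LiDAR reports an obstacle is far
    cheaper than ignoring a real one.
    """
    return lead_sensor_obstacle or lidar_obstacle

# Brake if either sensor reports an obstacle, even when they conflict.
assert should_brake(False, True)       # trust LiDAR over a silent lead sensor
assert not should_brake(False, False)  # no obstacle reported anywhere
```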
Why Tesla Needs to Consider LiDAR
In opposition to Google and Uber in the controversial debate on
how to build the autonomous car, Tesla has been notorious for
its stance that LiDAR is an unnecessary component to the
sensor suite in autonomous vehicles. In May of 2016, the first
fatality from an autonomous vehicle occurred when a Tesla
Model S owner was using the Autopilot feature on a Florida
highway. When a tractor trailer made a left turn across the
highway, the Model S failed to recognize the tractor trailer and
drove underneath it, shearing the roof of the car off and killing
the driver. Los Angeles Times noted, “Not long after the crash,
Tesla Motors Inc. Chief Executive Elon Musk speculated that
the Autopilot system might not have functioned properly
because it could not isolate the image of the trailer from the
bright sky behind it. The system's radar, Musk said, ‘tunes out what looks like an overhead road sign to avoid false braking events’” [5].
Tesla placed the fault of the crash on the driver since
Autopilot is still in beta and requires the driver to keep their
hands on the wheel at all times; however, this fatality could
have been avoided in one of two ways: better software or
LiDAR. Because Tesla relies solely on cameras and radar, the
first solution is to build better software that would have
accounted for a situation like this and applied the brakes
instead of assuming the trailer was an overhead sign. It is easy
to say this in hindsight, but the problem with relying on
software to compensate for sensor failure is the number of odd scenarios like this that have not been considered and will never
be considered until they actually happen.
The second and most sensible solution is to implement a
LiDAR sensor. Frost and Sullivan explains that, “Cameras are
high-res only in 2D, but are quite poor in the third dimension,
and traditional radar has a very limited spatial resolution.
LiDAR, by contrast, can not only be used to identify objects
around the car but also classify them, distinguishing
pedestrians, bicycles, and other vehicles. LiDAR creates vivid
maps, versus the abstract spots radar produces” [2]. In this
particular situation, the radar in the Tesla knew there was an
object in front of the car, but could not distinguish it as a trailer
instead of an overhead sign. A LiDAR sensor would have been
able to determine the difference with ease because LiDAR
determines the exact 3D shape of an object whereas radar just
knows there is an object there. Tesla has always resisted SH LiDAR due to its high cost and unattractive aesthetics; however, with the developing future of various LiDAR
technologies, it is very likely that Tesla and other autonomous
vehicle companies will eventually implement LiDAR if their
ultimate goal is to reach level five autonomy.
universities like the Massachusetts Institute of Technology (MIT), are aiming to solve these obstacles in order for self-driving vehicles to have a successful future.
Solid-State and Consumer Friendly LiDAR
Velodyne LiDAR is used by most autonomous car research
teams and companies. Due to an increase in the mass
production of Velodyne’s SH LiDAR, the costs are expected
to drop drastically in a short time. From 2017 to 2020,
Velodyne expects its LiDAR prices to decrease by 90% from their original prices [2]. The recent growth in demand
for LiDAR products has led Velodyne to expand their facilities
and increase their hiring to accommodate larger production.
This will contribute to the future affordable price of LiDAR
for producers and consumers.
FIGURE 5 [6]
A dimensioned image of the Velodyne VLP-16 Puck Hi-Res
One of the more recent developments made by Velodyne
LiDAR is their VLP-16 Puck Hi-Res. As shown in Fig. 5, it
gets its name from its striking resemblance to a hockey puck.
This sensor is the smallest, newest, and most advanced sensor
in Velodyne’s 3D product range. Puck Hi-Res is a 16 channel
real time 3D sensor that weighs 830 grams. Not only does it
retain the key features of Velodyne’s revolutionary LiDAR
technology such as real-time mapping, 360° horizontal viewing
angles, and 3D distance, but it is also more cost effective
amongst similarly priced sensors and it is developed with mass
production in mind [6]. This means that Velodyne has
designed the product’s materials in a certain fashion that
allows for efficient mass production which leads to a
significantly smaller price. Puck Hi-Res has a range of 100 meters, a 360° horizontal field of view, and a 20° vertical field of view [6]. The purpose of the narrower vertical field of view is a tighter channel distribution, which allows for more detailed resolution of the 3D images at longer ranges. This
enhanced technology enables the host system to detect and
perceive specific objects at longer distances. Its sleek and
compact size will make it an attractive piece for consumers and
a viable option for LiDAR self-driving cars in the future.
THE FUTURE OF LIDAR
FIGURE 4 [2]
Decline of pricing for LiDAR sensors
Throughout recent years, a substantial amount of academic
and industry research has gone into solving the obstacles faced
by LiDAR. Size, complexity, and cost all place a burden on
the commercialization of LiDAR. Companies such as
Velodyne and Quanergy, as well as research teams at
Another advancement in LiDAR technology comes from
MIT. The U.S. Defense Advanced Research Projects Agency generously funded MIT researchers to leverage silicon photonics to condense a functional LiDAR system onto a single 0.5-by-6-millimeter chip [7]. This decrease in chip size
will lead to even smaller LiDAR systems which blend in
seamlessly with the aesthetics of a vehicle. However, this
prototype currently has a range of just 10 meters. Despite this
shortcoming, MIT has a clear development path towards a 100-meter range and a per-chip cost of just $10.
Producers oftentimes have difficulty convincing
consumers to switch from their current product of choice to a
newer option. These newer options must have great appeal to
the consumer, and must improve on all the aspects of the old
product. In the case of self-driving cars implemented with SH LiDAR, there are significant challenges engineers face when
tasked with making this technology enticing to consumers. As
demonstrated earlier, at this point in time SH-LiDAR is
necessary to achieve a fully autonomous vehicle, and society
will only receive the perks that come with self-driving vehicles
once LiDAR becomes mainstream. Size, complexity, and cost
are all substantial obstacles to the commercialization of
LiDAR self-driving vehicles. Pleasing aesthetics of a vehicle
are necessary to attract consumers. Although some people
may be intrigued when they see a large, bulky spinning device
on top of a vehicle, most will not want to purchase such a
vehicle. The future is quickly pushing towards a compact,
reliable LiDAR system to seamlessly blend in with a vehicle,
but the technology is not quite there yet. It is the job of
engineers to minimize the size of LiDAR to increase its
commercialization. Another main obstacle for LiDAR is its
steep price. Currently, most autonomous cars rely on the
HDL-64E LiDAR sensor from Velodyne. This sensor scans
2.2 million data points in its field of view per second and can
pinpoint objects up to 120 meters away [7]. While the
technology itself is impressive, it weighs more than 13
kilograms and can cost upwards of $80,000. When developing
new advanced technologies, it is common for the initial prices
to be expensive. Once LiDAR has more testing, research, and
mass production, a more acceptable price could easily be
established.
LiDAR Start Up Companies and Investments
As the market for a small, inexpensive LiDAR sensor
system grows, there is an increasing number of investors and startup companies entering the industry. Quanergy, a Silicon Valley LiDAR company, was valued at $1.59 billion in August 2016, and closed a funding round of $90 million [7]. At the
Consumer Electronics Show in 2016, Quanergy showed off
their new solid-state LiDAR prototype designed for self-driving cars. According to Quanergy, once this product begins
production in mass volumes, it will cost $250 and will be
available to automotive equipment manufacturers in 2017.
This LiDAR system uses optical phased array laser pulses
rather than a rotating system of mirrors, lenses, and lasers [7].
Similar to Quanergy, the startup companies Innoviz and
Innoluce are working on $100 LiDAR systems that they claim
will be released in 2018. Innoviz, a company based in Israel,
is promising a high definition solid-state LiDAR with an
improved resolution and a larger field of view than existing
sensors. Innoluce is a Dutch company that develops micro-electro-mechanical-system (MEMS) mirrors. The device developed by
Innoluce consists of an oval shaped mirror mounted on a bed
of silicon. The mirror is then connected to actuators that make
it oscillate from side to side, which changes the direction of the
laser beam it is reflecting [8]. This LiDAR system uses MEMS
to scan and steer the laser beam as opposed to the solid-state
method. It is important to note that while these two companies
show promising LiDAR self-driving technology for the future,
their products are prototypes. This technology contains
significant flaws that will need to be fixed before autonomous
vehicles can use this technology on the road.
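The beam steering performed by such an oscillating mirror follows directly from the law of reflection: tilting the mirror by an angle swings the reflected beam by twice that angle. A minimal sketch (the ±15-degree oscillation is an illustrative value, not an Innoluce specification):

```python
def beam_direction_deg(mirror_tilt_deg: float) -> float:
    """Reflected-beam angle for a laser-steering mirror.

    By the law of reflection, rotating the mirror by theta rotates
    the reflected beam by 2 * theta, so a small mirror oscillation
    sweeps the laser across a field of view twice as wide.
    """
    return 2.0 * mirror_tilt_deg

# A mirror oscillating +/-15 degrees steers the beam across a
# 60-degree field of view.
sweep = beam_direction_deg(15.0) - beam_direction_deg(-15.0)
print(sweep)  # 60.0
```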
Other investments in LiDAR technology are coming from
Ford and Baidu, a Chinese Internet company. Combined, they
invested $150 million in Velodyne. The ample investments in this industry all come with the same goal: to achieve an autonomous vehicle LiDAR sensor priced close to $100 within the next few years. With Velodyne being
the company paving the way for companies producing LiDAR,
these investments are crucial for the future of self-driving cars.
MEMS, and Phased Array Technology
As mentioned in the Future of LiDAR section, certain
companies have begun developing Micro-Electro-Mechanical Systems (MEMS) and Phased Array LiDAR
technology to compete with SH-LiDAR. While these
alternatives to SH LiDAR come with improvements such as a
decrease in size, weight, and cost, they face other serious
obstacles. These technologies send out smaller beams than SH
LiDAR, which increases the divergence of the beams and leads
to a limited range and field of view. This limited actuation
affects both the horizontal range and vertical range of the
sensors [2]. The horizontal range could be fixed by placing
multiple sensors surrounding the vehicle, but in order to solve
the issue of limited vertical range, the sensors would need to
be stacked on top of one another. This is not only an unfeasible
solution to the problem, but the software work required to
stack the sensors is extremely complex and expensive.
ENGINEERING CHALLENGES INVOLVING
LIDAR
need to be completely driverless, meaning they do not require a driver to be present when operating. In order for a vehicle to reach level 5 (fully driverless) autonomy, it must be able to
detect surroundings 300 meters away and from all sides, and
never require driver intervention, regardless of the external
situation. Currently, this is only achievable with the
implementation of LiDAR sensors, thus it is probable that the
industry will continue to utilize this technology in the
foreseeable future. Once society can achieve a fully
autonomous future, there will be an increase in quality of life
through a safer, less strenuous, and more environmentally
friendly transportation system.
Autonomous vehicles remove human error from the
equation, and would drastically decrease the astonishing
number of 1.3 million people who die each year from car
accidents [9]. While accident reduction and the elimination of
drunk driving is considered the claim to fame of fully
autonomous vehicles, they will benefit society in an abundance
of other ways. Fully-autonomous vehicles are expected to
decrease accident rates by an estimated 90% which will result
in monumental economic effects. According to the Department
of Transportation, the official statistical value of a human life
is $9.2 million [10]. If the 1.3 million driving fatalities per year were reduced by 90%, to roughly 130,000 fatalities, the 1.17 million lives saved annually would equate to roughly $10.8 trillion saved by the implementation of autonomous vehicles. With this drastic improvement in safety, the cost of
vehicle and life insurance policies will also decrease substantially, making insurance attainable for many who could not previously afford it.
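The arithmetic behind these figures can be checked in a few lines, using the fatality count from [9], the statistical value of a life from [10], and the 90% reduction estimate above:

```python
FATALITIES_PER_YEAR = 1_300_000  # worldwide traffic deaths per year [9]
REDUCTION = 0.90                 # estimated accident reduction
VALUE_OF_LIFE = 9_200_000        # DOT statistical value of a life, USD [10]

# Lives saved per year, and their statistical dollar value.
lives_saved = FATALITIES_PER_YEAR * REDUCTION
value_saved = lives_saved * VALUE_OF_LIFE
print(f"{lives_saved:,.0f} lives, ${value_saved / 1e12:.2f} trillion per year")
# 1,170,000 lives, $10.76 trillion per year
```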
Enhanced human productivity is another perk that comes with LiDAR-equipped autonomous vehicles. Without the
burden of being responsible for operating a vehicle, drivers can
focus their attention elsewhere. Similar to how passengers on
a bus, train, or airplane can get work done on their laptops or
take phone calls safely, passengers in autonomous vehicles
will be able to be productive, without worrying about causing
an accident. Even simply doing nothing at all while in a self-driving vehicle increases human productivity. Several studies
have shown that taking short breaks to relax your mind
improves your productivity. Therefore, doing nothing while
travelling to somewhere that requires your immediate
attention, such as work, will increase your proficiency upon
arrival. LiDAR equipped vehicles will turn burdensome
commuting time, into a time for productivity.
In regard to environmental impact, fully autonomous vehicles could drastically reduce pollution and fuel consumption by easing traffic congestion and maximizing route efficiency. Traffic congestion is, in most cases, caused by some form of human error. LiDAR-equipped autonomous vehicles can address this problem and play a significant role in preventing the estimated 2.9 billion gallons of fuel wasted each year [11]. However, none of this is possible without LiDAR. Light Detection and Ranging systems will play a crucial role in the near future of automotive safety and autonomous driving, thanks to their reliable perception through real-time 3D data collection.
FIGURE 6 [7]
A model of MEMS LiDAR
MEMS and Phased Array LiDAR emit a frequency-modulated continuous wave, which improves image resolution and lowers power usage [2]. However, the continuous detection across the LiDAR's field of view can compromise the safety, reliability, and performance of the sensor. Sun noise, darkness, foul play, and interference with other sensors could all jeopardize these sensors and lead to possible vehicle stoppages and accidents.
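The frequency-modulated continuous-wave principle mentioned above can be sketched briefly: a linear chirp of bandwidth B over duration T returns from a target delayed by the round-trip time, producing a beat frequency proportional to range. The chirp parameters below are invented for illustration and do not describe any actual sensor:

```python
# Illustrative FMCW ranging: an echo delayed by tau = 2R/c mixed
# with the outgoing chirp produces a beat frequency
#     f_b = (B / T) * tau,  so  R = c * f_b * T / (2 * B).
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Range (m) implied by a measured beat frequency."""
    return C * beat_hz * chirp_s / (2 * bandwidth_hz)

# Hypothetical 1 GHz chirp over 10 microseconds; a target at 50 m
# beats at f_b = (B/T) * (2R/c), roughly 33.4 MHz.
B, T = 1e9, 10e-6
tau = 2 * 50.0 / C
print(fmcw_range(B / T * tau, B, T))  # recovers ~50.0 m
```

The ranging itself is simple; the engineering difficulty discussed in this section lies in steering the beam and protecting the detector, not in the arithmetic.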
Another issue with MEMS and Phased Array LiDAR concerns the detector system and power source. These technologies promise a small form factor, but the problems of power supply storage, the laser, and the detector system have not been solved. Due to the limited actuation, it is unknown whether the system will require a more powerful laser to compensate; such a laser would raise the power requirement and could cause the system to overheat.
All of these obstacles show that, to achieve full autonomy at this point in time, SH-LiDAR is required. The experimental MEMS and Phased Array LiDAR systems only address the NHTSA applications of levels 1 and 2, which cover assisted driving and partially automated vehicles [2]. They have yet to overcome the obstacles necessary to be considered practical alternatives to SH-LiDAR for autonomous driving or autonomous intelligence.
SUSTAINABILITY
The concept of sustainability can be interpreted in numerous ways, but in engineering it refers to the design of products and innovations that will have a lasting positive effect on the well-being of society. Unlike most industries, "groundbreaking advancements" in the rapidly progressing field of technology are usually short-lived before they are outdone by something bigger and better. While predicting the future is impossible, it is undeniable that self-driving vehicles have enormous potential to improve society. In
order for autonomous vehicles to be 100% effective in accident
prevention, especially in the elimination of drunk driving, they
[8] “A Breakthrough in Miniaturizing Lidars for Autonomous Driving.” The Economist. 12.24.16. Accessed 2.27.2017. http://www.economist.com/news/science-and-technology/21712103-new-chips-will-cut-cost-laser-scanning-breakthrough-miniaturising
[9] “Pittsburgh, Your Self-Driving Uber is Arriving Now.” Uber Newsroom. 9.14.2016. Accessed 2.8.2017. https://newsroom.uber.com/pittsburgh-self-driving-uber/
[10] “Statistical Value of Life and Industries.” US Department of Transportation. 12.21.16. Accessed 3.31.2017. https://www.transportation.gov/regulations/economic-values-used-in-analysis
[11] L. Bell. “10 Benefits of Self Driving Cars: Lower Fuel Consumption.” Autobytel. Accessed 3.29.17. http://www.autobytel.com/car-ownership/advice/10-benefits-of-self-driving-cars-121032/
This real-time 3D data collection acts as the eyes and ears of the navigation system, allowing the vehicle to navigate the streets without relying on human control.
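The 3D data collection described here rests on a simple time-of-flight relation: distance is half the round-trip time multiplied by the speed of light, and each return, combined with the beam's direction, yields one point of the 3D map. A minimal sketch of that conversion (the timing and angle values are invented for illustration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_point(round_trip_s: float, azimuth_deg: float, elevation_deg: float):
    """Convert one LiDAR return (round-trip time plus beam direction)
    into an (x, y, z) point in the sensor's own frame."""
    r = C * round_trip_s / 2  # one-way distance, m
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# A return after roughly 66.7 ns straight ahead lies about 10 m away.
x, y, z = tof_point(2 * 10.0 / C, 0.0, 0.0)
print(round(x, 3), round(y, 3), round(z, 3))  # 10.0 0.0 0.0
```

Repeating this for every pulse across the sensor's field of view is what produces the real-time point cloud the navigation system consumes.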
AUTONOMOUS VEHICLES ARE ON THE RISE
Within the past 10 years, LiDAR technology has been tested over millions of miles of road by Google, Caterpillar, universities, and other organizations. Today, the technology is ready for massive expansion so that it can be applied more widely and sold on the market for autonomous driving and safety. Uber, the ride-hailing company, provides a clear picture of the possibilities for autonomous cars. In late 2016, self-driving Ubers became a reality thanks to the company's LiDAR-equipped taxis. Uber's Advanced Technologies Center in Pittsburgh, Pennsylvania began developing these vehicles almost two years ago, and they have finally arrived. Because of Pittsburgh's difficult driving conditions involving weather, traffic, and other factors, the city was Uber's first choice for experimenting with this technology. So far, it has been a great success: Uber and its customers are extremely satisfied, and expansion of LiDAR-equipped taxis has begun in other areas across the country, such as San Francisco and Arizona.
As technology rapidly advances, LiDAR systems are
constantly improving. Decreases in price and improved
functionality make LiDAR an attractive tool for vehicle
manufacturers across the globe. LiDAR technology is paving
the road to an autonomous future which will consist of safer,
more reliable travel.
ACKNOWLEDGEMENTS
We would like to sincerely thank our neighbor, David, for taking time out of his extremely busy work schedule to provide us with an abundance of knowledge on LiDAR and vehicle autonomy. We would also like to thank Professor Kovacs and Patrick Lyons for providing insightful feedback on our paper.
SOURCES
[1] “What is LiDAR?” LiDAR-UK. Accessed 1.10.2017. http://www.lidar-uk.com/how-lidar-works/
[2] “LiDAR: Driving the Future of Autonomous Navigation.” Frost and Sullivan. 2016. Accessed 2.5.17.
[3] J. Wong. “Driverless cars have a new way to navigate in rain or snow.” 3.14.2016. Accessed 2.27.2017. https://qz.com/637509/driverless-cars-have-a-new-way-to-navigate-in-rain-or-snow/
[4] D. Rice. Conversation on LiDAR. Uber Advanced Technologies Center. 2.25.17.
[5] C. Flemming. “Tesla car mangled in fatal crash was on Autopilot and speeding, NTSB says.” 1.7.2016. Accessed 1.10.2017. http://www.latimes.com/business/autos/la-fi-hy-autopilot-photo-20160726-snap-story.html
[6] “Puck Hi-Res – High Resolution Real Time 3D LiDAR Sensor.” Velodyne LiDAR. Accessed 2.27.2017. http://velodynelidar.com/docs/datasheet/63-9318_RevB_Puck%20Hi-Res_Web.pdf
[7] E. Ackerman. “Cheap Lidar: The Key to Making Self-Driving Cars Affordable.” IEEE Spectrum. 7.2016. Accessed 1.6.2017. http://spectrum.ieee.org/transportation/advanced-cars/cheap-lidar-the-key-to-making-selfdriving-cars-affordable