a quarterly journal of KPIT Technologies Limited
TechTalk@KPIT
VOL. 6, ISSUE 4 OCT - DEC 2013
Autonomous Vehicles
• Journey Without Driver
• To Be or Not To Be... A Driver
• Seeing Through Sensors
• Bringing Vision to Life
• Drive-By-Wire: A Case Study
• Wired Through Wireless
• Inside Connected Vehicle
• Gazing Through a Crystal Ball
Colophon
TechTalk@KPIT is a quarterly journal of
Science and Technology published by
KPIT Technologies Limited, Pune, India.
Guest Editorial
Dr. K. P. Soman
Centre for Excellence in Computational
Engineering and Networking,
Amrita Vishwa Vidyapeetham,
Coimbatore, India.
Chief Editor
Dr. Vinay G. Vaidya
CTO,
KPIT Technologies Limited,
Pune, India
[email protected]
Editorial and Review Committee
Ankita Jain
Suresh Yerva
Reena Kumari Behera
Pranjali Modak
Pramit Mehta
Mayurika Chatterjee
Prasad Pawar
Vinuchackravarthy Senthamilarasu
Designed and Published by
Mind’sye Communication, Pune, India
Contact : 9673005089
Suggestions and Feedback
[email protected]
Disclaimer
The individual authors are solely responsible
for infringement, if any.
All views expressed in the articles are those
of the individual authors; the company and the
editorial board neither agree nor disagree with them.
The information presented here is only for giving an
overview of the topic.
For Private Circulation Only
Contents

Guest Editorial
Dr. K. P. Soman

Editorial
Dr. Vinay Vaidya

Profile of a Scientist
Sebastian Thrun
Prasad Pawar

Book Review
Autonomous Intelligent Vehicles: Hong Cheng
Naveen Boggarapu

Articles

Journey Without Driver
Smitha K P

To Be or Not To Be... A Driver
Priti Ranadive and Pranjali Modak

Seeing Through Sensors
Vinuchackravarthy Senthamilarasu

Bringing Vision to Life
Jitendra Deshpande

Drive-By-Wire: A KPIT Case Study
Vinod Singh Ujlain

Wired Through Wireless
Arun S. Nair

Inside Connected Vehicle
Mushabbar Hussain

Gazing Through a Crystal Ball
Krishnan Kutty and Charudatta B. Sinnarkar
Guest Editorial

Dr. K. P. Soman
Centre for Excellence in Computational Engineering and Networking,
Amrita Vishwa Vidyapeetham, India

In 1953, the US Air Force plotted, on a piece of paper, the rate of change of the US aerospace industry starting from the Wright brothers. The curve rose exponentially from the thirties, and as per the curve a trip to the moon would be possible within the next two decades. But nobody then knew how to achieve the feat with the prevailing technology. The curve proved to be right, though not politically, as the USSR put the first satellite into space in 1957. It was followed by a series of manned and unmanned moon missions by both the USSR and the US, culminating in the historic landing of man on the moon in 1969, four years ahead of the predicted date. Peter H. Diamandis and Steven Kotler, in their book "Abundance: The Future Is Better Than You Think", draw similar curves for us to peek into the future. As per the book, mankind is going to enter a new phase of civilization in which many resources that we assumed would be scarce forever are going to be abundant. Aluminum, the third most abundant element in the earth's crust, was once a costly metal, but the emergence of the technology of electrolysis enabled us to "access" it almost freely. An array of exponentially growing and enabling technologies is now converging to make energy, drinking water, health care and education "accessible" to all at (almost) no cost. A similar picture of the future can also be seen in the book "Makers: The New Industrial Revolution" by Chris Anderson. Solar power and synthetic algae (producing hydrocarbons) are likely to be the main motive forces of vehicles in the near future. Space exploration – on land, under water and interplanetary – will be one of the major occupations of the future generation.
Though there are several driving forces behind this revolution, the ever-spreading Internet, open-source culture, the DIY (Do It Yourself) movement and crowdfunding (like Kickstarter) are the main drivers. The web democratized the tools both of invention and of production. It brought people with similar ideas together. It also brought together people with ideas for products or services and people with money for those products or services, producing the so-called "network effect". One such network effect is the ever-spreading crowdfunding movement, through which a few individuals can beat multinational organizations at innovative product development, at costs unimaginable to multinationals. The "rising billions", the common people with internet access whose ideas were never tapped for innovation, are going to be the main source of future innovations in this new world order, not the top universities and giant R&D organizations. Ideas are biological in nature: they meet, mate and mutate, producing new innovations. The more the interactions, the greater the pace of innovation. The Internet is enabling such interactions on a global scale.
How is this going to impact the automotive industry? The ability to go where we want, whenever we want, is an eternal dream of mankind. Self-driving vehicles will make that dream a reality; mankind may achieve this target by 2020. The implications of such systems would be profoundly disruptive for almost every stakeholder in the automotive ecosystem. Companies are already developing sensor-based, driver-assistance solutions that use stereo cameras, lasers, LIDARs, software and complex algorithms "to compute the three-dimensional geometry of any situation in front of a vehicle in real time from the images it sees". Connectivity-based systems that use wireless technologies to communicate in real time from vehicle to vehicle (V2V) and from vehicle to infrastructure (V2I), and vice versa, are the other path to knowing the environment. What is needed further is the convergence of these solutions, together with a human-like cognitive system for learning and taking decisions on the spot. Current artificial intelligence systems cannot yet provide that level of inferential thinking, though Google has achieved it partially, at exorbitant cost.
The hurdles on the path to convergence are 1) improved positioning technology, 2) high-resolution mapping, 3) a reliable and intuitive man-machine interface, and 4) standardization. The system shown in the YouTube video titled "Spy drone can see what you are wearing from 17,500 feet" is already a viable solution to the first two hurdles, especially in crowded cities. Further innovation by the "rising billions", probably through balloon-based multi-camera surveillance or similar systems, can provide ultra-low-cost solutions. Research in "deep learning networks", together with the ever-increasing computational capability of modern computers, is forging ahead to solve the problems of inferential thinking. At present the computational capability of our laptop computers is of the order of 10^11, whereas the human brain has a capability of the order of 10^26. Moore's law then indicates that it will take only about 30 years for computers to become more powerful than human beings, putting an upper limit of 30 years on the arrival of such an autonomous vehicle. I wish all of you a happy ride in that type of car to your dreamland.
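For readers who want to trace the arithmetic of this extrapolation, the crossover time follows from the capability ratio and an assumed doubling period $T$:

\[
\frac{10^{26}}{10^{11}} = 10^{15} \approx 2^{50},
\qquad
t_{\text{crossover}} = T \cdot \log_2\!\left(10^{15}\right) \approx 50\,T .
\]

A 30-year crossover therefore corresponds to an effective doubling period of roughly seven months, considerably faster than the classic two-year Moore's-law figure, under which the same arithmetic would give about a century.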
Editorial

Dr. Vinay G. Vaidya
CTO, KPIT Technologies Limited, Pune, India
Some time ago I was talking with an engineer working with an automotive OEM in France. We started talking about the complexities of airplane design versus automotive design. Having worked in the aerospace industry on autopilot design for many years, I was of the opinion that it is the addition of one more spatial dimension that makes a world of difference in complexity. My automotive engineer friend, on the other hand, has spent his entire engineering career in automotive engineering, and he was not ready to agree with me. He said aerospace engineers do have to deal with that third dimension, but it is a lot easier to deal with than randomly walking pedestrians on the street.
It got me thinking about real scenes on a street. The very reason why one would like to have a driverless vehicle is to hand over the complexity of maneuvering through traffic to computers. The problem statement for designing an autonomous vehicle is fairly simple. It can be written as follows: design a driverless vehicle that goes from place A to place B, does not bump into anything, follows all traffic rules, and ensures proper functioning of the vehicle. Let's take each one of these sub-requirements.
To go from place A to place B, one needs to know where the vehicle is, where place B is, and the directions to go from A to B. All this is possible with the help of a GPS. Thus, the first major component of this system would be a GPS.
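As a toy illustration of this first component (not any production navigation stack; the road graph, place names and distances below are invented), a shortest-route query can be sketched with Dijkstra's algorithm:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a road graph: node -> [(neighbor, km)]."""
    queue = [(0.0, start, [start])]      # (distance so far, node, path)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, km in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + km, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road graph; a real map/GPS unit would supply this.
roads = {
    "A": [("junction", 2.0), ("bypass", 5.0)],
    "junction": [("B", 3.0)],
    "bypass": [("B", 1.0)],
}
print(shortest_route(roads, "A", "B"))   # (5.0, ['A', 'junction', 'B'])
```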
What does 'do not bump into anything' mean? The vehicle needs to know what is around it. Which of its actions could lead to bumping? Every action the vehicle takes has to be well thought out, to ensure that none of them leads to an adverse reaction. Now we slowly start seeing the complexity. One needs a fairly complex sensor system in place. The sensors could be optical (cameras), ultrasound, radar, LIDAR (laser-based radar), or infrared. Thus, the sensor system is the second major component in our design.
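As a crude sketch of how such sensor readings might feed the 'do not bump' requirement (assuming a radar that reports range and closing speed; the thresholds are invented), a time-to-collision check looks like this:

```python
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:           # the gap is opening: no collision course
        return float("inf")
    return range_m / closing_speed_mps

def should_brake(range_m: float, closing_speed_mps: float,
                 reaction_margin_s: float = 2.0) -> bool:
    """Brake when impact would come sooner than the safety margin."""
    return time_to_collision(range_m, closing_speed_mps) < reaction_margin_s

print(should_brake(30.0, 20.0))   # 1.5 s to impact -> True, brake now
print(should_brake(30.0, 5.0))    # 6.0 s to impact -> False
```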
Following all traffic rules requires one to know a few things. First of all, we need to know where the signals and traffic signs are; having identified them, we need to understand what they mean. One also needs to know other rules and regulations regarding driving in general and speed limits in particular. Regulations form the third system component.
While doing all this, if the car is not working properly, we will not go anywhere. When we go out on a long-distance trip, what do we check? Fuel, oil, water, and tires; and while driving we keep an eye on the engine temperature. Now, while we sit back and relax in an autonomous vehicle, someone else has to keep that eye, and again the way to do it is through sensors. Our system should check all of this at all times. Therefore, the fourth component is the health monitoring system.
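A minimal sketch of such a health monitor (the signal names and safe ranges are invented; a real vehicle would take them from the OEM) polls each reading against its range:

```python
# Hypothetical safe operating ranges for a few vehicle signals.
SAFE_RANGES = {
    "fuel_percent":   (10.0, 100.0),
    "oil_pressure":   (20.0, 80.0),    # psi
    "coolant_temp_c": (-10.0, 105.0),
    "tire_pressure":  (30.0, 36.0),    # psi
}

def check_health(readings: dict) -> list:
    """Return a warning for every reading that is missing or out of range."""
    warnings = []
    for signal, (low, high) in SAFE_RANGES.items():
        value = readings.get(signal)
        if value is None:
            warnings.append(f"{signal}: no data (sensor fault?)")
        elif not low <= value <= high:
            warnings.append(f"{signal}: {value} outside [{low}, {high}]")
    return warnings

print(check_health({"fuel_percent": 8.0, "oil_pressure": 45.0,
                    "coolant_temp_c": 90.0, "tire_pressure": 33.0}))
# ['fuel_percent: 8.0 outside [10.0, 100.0]']
```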
With more and more systems getting added, one cannot forget that we still have to move the car. All the electronics should work in conjunction with the mechanical systems; that constitutes our fifth component, mechatronics.
All these system components would not be able to do anything unless someone gathered all the information, analyzed it, and came up with a concrete action to be taken by some subsystem. This requires a command, control, and communication unit; this unit is our sixth component.
Although we have all the systems in place, there is no guarantee that we can acquire information, assimilate it, and give out proper guidance in a timely manner. The command-and-control system may take a long time to make a decision: it may tell you to stop, but in the meantime you have already hit a pole on the street! This necessitates fast processing using multicore processors.
We now have all the components ready, but there is no life in them yet; by itself, our system will never be able to "think". We need to develop algorithms to make the system mimic thinking. These algorithms should be able to capture signals, take in live streaming video, and find the cars in front of, behind, and beside your car, calculating the speeds of all these vehicles. If you want to increase speed, it should tell you not to when the car in front is too close. If you want to change lanes, it should tell you when there are cars alongside. It should watch for pedestrians by day and by night, watch for any other obstacles, observe lane markings, read traffic signs and check speed regulations, and monitor the health of the car. In case of any issue within the command-and-control system, it should fall back on a redundant system. Let's keep our fingers crossed that no intruder gets into the system to cause havoc; one should have a tight cyber-security system in place. Of course, it cannot lose track of where we are headed! The system requires a good scheduling algorithm to distribute tasks across different processors, to ensure real-time performance from the multicore processors.
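One classic heuristic for this kind of scheduling is longest-processing-time-first; a minimal sketch (the task names and millisecond costs are invented) assigns each task to the least-loaded core:

```python
import heapq

def schedule_lpt(tasks: dict, num_cores: int) -> dict:
    """Greedy longest-processing-time-first assignment of tasks to cores."""
    cores = [(0.0, core) for core in range(num_cores)]   # (load ms, core id)
    heapq.heapify(cores)
    assignment = {core: [] for core in range(num_cores)}
    for name, cost_ms in sorted(tasks.items(), key=lambda t: -t[1]):
        load, core = heapq.heappop(cores)    # currently least-loaded core
        assignment[core].append(name)
        heapq.heappush(cores, (load + cost_ms, core))
    return assignment

perception_tasks = {"lane_detect": 18.0, "vehicle_detect": 25.0,
                    "sign_detect": 12.0, "pedestrian_detect": 30.0,
                    "health_check": 3.0}
print(schedule_lpt(perception_tasks, num_cores=2))
# e.g. {0: ['pedestrian_detect', 'sign_detect', 'health_check'],
#       1: ['vehicle_detect', 'lane_detect']}
```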
Yeah, I understand the complexity better now. Anyone out there with any doubts? My automotive engineer friend would be very glad
to read this confession.
Please send your feedback to: [email protected]
Journey Without Driver
About the Author
Smitha K P
Areas of Interest
Multicore Programming,
Embedded Programming
I. Let's Begin
People always dream of newer and better machines that can complete tasks faster than humans. When radio was introduced, people wondered whether they would be able to see the people speaking on the radio. When the first-generation computers were introduced, which were as big as a room, people did not visualize the possibility of small computers in the future. People started thinking about flying and self-driving vehicles after seeing the first-generation automobiles on the street. Researchers from all over the world are conducting research to bring autonomous vehicles onto the road, and we hope manufacturers will introduce autonomous vehicles in the next few years. The technology that evolved during this research needs to be refined for commercial use. In 2010, Intel's CTO predicted that driverless cars might be available within 10 years [14]. Many companies carry out projects to implement intelligent autonomous vehicles and transportation networks. Imagine the scenario of roads filled with intelligent self-driving vehicles. It is really an exciting thought, isn't it? Will it come true in the near future?
Autonomous vehicles will be the best option for physically challenged people, who cannot drive a vehicle on their own. More importantly, they could travel from one place to another without depending on others. People aged 45 and above will benefit even more from autonomous cars: about 42% of the total population is aged 45 or above, as shown in Fig. 1. In the modern age, we do not get enough time to spend with our parents or grandparents. If they can travel to meet their relatives and friends on their own, their retirement life will be great, and of course that will keep them relaxed.
II. Why do we need Autonomous Vehicles?
Most of us use vehicles to go to work, to shopping malls, to visit friends and to many other places. In addition, the economy of a place largely depends on the goods delivered by trucks. People hardly realize that transportation forms the basis of our civilization. Because of the growing population, traffic is also increasing day by day, with many adverse effects on today's busy life. Additionally, accidents due to careless or inattentive driving, sudden illness of the driver, consumption of alcohol, vehicle failure, etc. could be prevented by the introduction of autonomous vehicles.
Because of their faster reaction times, automated systems will help people avoid accidents and reduce traffic congestion. Future vehicles will be capable of determining the best route and warning other vehicles about the traffic conditions ahead. Also, the robotic properties of autonomous cars will help people reach their destinations on time, while they enjoy music, videos, games and so on during the trip. Autonomous trucks will save money and time on long hauls, since trucks can run 24 hours a day with no driver needing rest.
Demographic                      Population   Percentage of Total
Digital Natives (0-14 years)     49 million   16%
Gen Now (15-34 years)            84 million   28%
Gen X (35-44 years)              43 million   14%
Baby Boomers (45-65 years)       80 million   26%
Older Adults (66+ years)         47 million   16%

Figure 1: Demographic breakdown of population [18]
III. Autonomous Evolution – Walkthrough
Watching a kid learning to stand and walk is a delightful sight, and improvement and evolution feel good regardless of the technology involved. We feel the same happiness seeing the different stages of progress in on-road autonomous vehicle research. It is not possible to cover all the research that has happened around the world; this walkthrough presents some of the outstanding projects and the people's efforts behind them.
A. Stanford Cart - The First Smart Car
The first milestone in on-road autonomous vehicle research is the introduction of the Stanford Cart, shown in Fig. 2.
The story behind the Stanford Cart is quite interesting. Mechanical Engineering (ME) graduate student James L. Adams originally constructed the cart in 1960-61, to support his research on controlling a remote vehicle using video information [1]. After that research, the cart lay unused in a Mechanical Engineering laboratory until Les Earnest joined the Stanford Artificial Intelligence Lab (SAIL) as Executive Officer in 1966. He found the cart and decided to use it to make a robot road vehicle guided by visual intelligence. However, the radio channels and other electronic equipment in the cart were no longer working properly, so he recruited Rodney Schmidt, a PhD student in Electrical Engineering, to build a low-power television transmitter and radio control link for the visual guidance project. SAIL was granted a TV license for experimentation by the Federal Communications Commission, which helped the first smart car evolve.
The first experimentation on the smart car began with a human operator controlling the cart via a computer, based on television images; research students drove the cart around the neighbourhood this way, with no human on board. Seeing the first smart car at work, Prof. John McCarthy, Director of SAIL, became interested in the project and took over its supervision [1]. The cart was rebuilt around a KA10 processor running at about 0.65 MIPS (Million Instructions Per Second), and the team was able to make it automatically follow a high-contrast white line under controlled lighting conditions at a speed of about 0.8 mph. Later, the cart was rebuilt again, with greater intelligence and image-processing capabilities, by Hans Moravec; it successfully travelled through a room with obstacles in about 5 hours. The Stanford Cart ranked 10th on Wired's list of the 50 best robots ever [7].
Figure 2: Stanford Cart configured as an
autonomous road vehicle at SAIL [1]
In 1977, Tsugawa and his colleagues at Japan's Tsukuba Mechanical Engineering Laboratory introduced the first truly autonomous car, which could process images of the road ahead. The car was equipped with two cameras with analog signal processing. Guided by an elevated rail, it was able to run at a speed of 30 km/h (18.6 mph) [7].
B. Test Vehicles for Autonomous Mobility and Computer Vision
In the 1980s, the German aerospace engineer Ernst Dickmanns, at Bundeswehr University Munich, inaugurated a series of projects called VaMoRs (Versuchsfahrzeug fuer autonome Mobilitaet und Rechnersehen, German for "test vehicle for autonomous mobility and computer vision"). The vehicle used in the projects had two sets of cameras, placed relative to each other at both the front and the rear of the windshield, to get better vision, plus two miniature CCD cameras to exploit multifocal vision. The car carried 16-bit Intel microprocessors and many other sensors and software. It drove at speeds above 90 km/h (56 mph) for roughly 20 km, and Dickmanns earned the sobriquet "the pioneer of the autonomous car" for his efforts [7].
Another project, VaMP (a passenger-car successor to VaMoRs), was introduced seven years later, with four cameras, two of which could process 320-by-240-pixel images at a range of 100 meters. The car was able to recognize road markings, its relative position on the road, and the presence of other vehicles. Later, in a test drive with simulated traffic near Paris, the car drove at a speed of 130 km/h (81 mph), even judging whether it was safe to change lanes. This was considered an important milestone in the evolution of autonomous vehicles.
Dickmanns' team later drove a Mercedes S-Class car from Munich to Odense, a 1,600 km trip completed at speeds of up to 180 km/h (112 mph). The car travelled about 95% of the total distance fully automatically [7].

The main attraction of these two projects is the camera configuration used, MarVEye (Multi-focal active / reactive Vehicle Eye) [6]. In both projects a pan-tilt camera head (TaCC) is mounted in the car, carrying the MarVEye cameras. The viewing direction of the TaCC can be panned through plus/minus 70°, giving good horizontal coverage. Figures 3 and 4 show the TaCCs of the test vehicles VaMoRs and VaMP [6].

Figure 3: TaCC of VaMoRs [6]
Figure 4: TaCC of VaMP [6]
C. EUREKA Prometheus Project

One of the largest research efforts toward driverless cars is the Prometheus (PROgraMme for a European Traffic of Highest Efficiency and Unprecedented Safety) project, run under the European research initiative EUREKA [16]. It offered more than 1 billion dollars to the participants, and Ernst Dickmanns and his team were among the key participants. The demonstration culminated with the twin robot vehicles VaMP and VITA-2 travelling around 600 miles on a multi-lane highway near Paris in heavy traffic. This project greatly encouraged further research in the development of driverless cars. Figure 5 shows the inner view of the car used for the project.

Figure 5: Inner view of vehicle used for Prometheus project [16]

D. NavLab 5
Navlab 5, shown in Fig. 6, used for on-road navigation experiments, was introduced in 1990 [2]. Navlab 5 is a Pontiac Trans Sport fitted with the PANS (Portable Advanced Navigation Support) platform, which provides a computing base and an input/output environment for users. The PANS platform draws its power from the vehicle's cigarette lighter, making it completely portable. In addition, it supports steering-wheel control, position estimation and safety monitoring.
In 1995, researchers from Carnegie Mellon University drove NavLab 5 from Pittsburgh to Los Angeles [7]. Thanks to the PANS platform, the car kept its lane for more than 600 miles at a time, completing almost 98% of the total distance fully automatically, with only negligible help for obstacle avoidance.
Figure 6: NavLab 5 [2]
Figure 7: PANS inside the Navlab 5 [2]
E. Grand Challenges by DARPA (Defense Advanced Research Projects Agency)

A great milestone in the evolution of autonomous vehicles was the first long-distance competition for autonomous vehicles, organized by DARPA in 2004. The inaugural Grand Challenge race was a 150-mile course through the Mojave Desert. Of the 15 vehicles that competed, Carnegie Mellon Red Team Racing's 'Sandstorm', shown in Fig. 8, completed 7.3 miles [7].
Sandstorm was built on a 1986-model M998 HMMWV (High Mobility Multi-purpose Wheeled Vehicle) [8], with a fresh engine and suspension, plus shock isolation to soften the ride for its computers and sensors. Acceleration, braking and gear shifting are handled by drive-by-wire modifications, and inter-module communication runs over TTP (Time-Triggered Protocol). Sandstorm's vital sensors are vision, radar, laser, and GPS sensors.
In 2005, DARPA organized a Grand Challenge with doubled prize money ($2 million). Around 23 teams participated in the 132-mile race through the Mojave. Five of them reached the finish. The route included three tunnels and more than 100 turns, and the vehicles had to navigate a steep pass with sharp drop-offs; it was a genuinely challenging run. An autonomous Volkswagen Touareg named 'Stanley', shown in Fig. 9, built by Stanford University, won first place, completing the course in 6 hours and 54 minutes [7].
The special sensors that help Stanley see the road include radar, lasers, and a camera system; an advanced computer system and artificial intelligence helped Stanley sense its environment and avoid obstacles [9].
DARPA made things a little tougher in the Grand Challenge of 2007, organized as a 60-mile race in an urban environment. Eighty-nine teams entered, and 11 made it to the start. The course included 4 miles of k-rail-enclosed streets, where entrants had to handle manned-vehicle traffic at the former George Air Force Base. The Tartan Racing team of Carnegie Mellon University completed the race in 4 hours and 10 minutes with a Chevrolet Tahoe named 'Boss', shown in Fig. 10 [7].
'Boss' was developed by Tartan Racing with the help of General Motors and other partners. Computer controls with radar and GPS systems help Boss map the surrounding environment and detect potential obstacles, and it determines safe driving routes using lasers, intelligent algorithms, and computer software. Larry Burns, the GM Vice President of Research and Development, explains: "Not only can we use electricity in place of gasoline to propel the next generation of vehicles, the electronic technology in vehicles such as Boss can provide society with a world in which there are no car crashes, more productive commutes and very little traffic congestion" [12].
Figure 8: Sandstorm [7]
Figure 9: Stanley [15]
Figure 10: Boss [7]
Let us also look at some of the other research that happened around the world during the same decade. In 'AHS (Automated Highway System) Demo '97', held in San Diego, California, more than 20 fully automated vehicles travelled on a San Diego highway. Another outstanding project was CARSENSE, which concentrated on slow driving in hectic situations such as traffic jams. Japan organized a demonstration named 'AHSRA (Advanced Cruise-Assist Highway System Research Association) Demo 2000' around the same time, showing how autonomous vehicles with limited driver intervention can reduce road accidents. During 2001-2004, France ran a project called ARCOS (Research Action for Secure Driving), which aimed to reduce road accidents by 30%. INVENT (Intelligent Traffic and User-Oriented Technology), from Germany, is another important research project on bringing self-driving cars to the road; its main goal was to reduce traffic congestion and improve people's safety using the intelligent systems available in the autonomous vehicle.
F. Intercontinental Autonomous Challenge
The most challenging autonomous car journey was conducted by Parma's VisLab (Artificial Vision and Intelligent Systems Laboratory) in 2010. The journey ran from Parma to Shanghai, taking 100 days to cover 16,000 km through nine countries. In Russia, the team collected a record of sorts: 'the first autonomous vehicle to be ticketed by a traffic cop' [7].
Let us look at some key characteristics of the vehicle used for the road trip, shown in Fig. 11. Its sensing system was based on cameras and laser scanners: five forward-looking and two backward-looking cameras were installed, and four laser scanners with different characteristics were placed around the vehicle. The forward and backward vision systems located obstacles and lane markings on the road, while the laser scanners were used to detect vehicles in front and other obstacles [3].
Full control of the vehicle's speed and steering is handled via CAN messages, through the x-by-wire systems fitted in the vehicle. Fig. 12 shows the TopCon steering unit, configured to capture commands from a CAN bus and control the steering, and Fig. 13 shows the VisLab board that interfaces the CAN bus with the gas control [3].
Figure 11: One of the vehicles used during the road trip [3]
Figure 12: The drive-by-wire steering system [3]
Figure 13: The custom board controlling the engine [3]
G. Shelley – An Audi Climbs the Mountain
An Audi named Shelley, shown in Fig. 14, reached the summit of Pikes Peak in 27 minutes. The course up the mountain is almost 12.42 miles long. The human record for the Pikes Peak climb was 17 minutes, so Shelley took 10 minutes more; but compared with a human-driven steam-powered car, which took more than 9 hours, Shelley's record is outstanding [7].
The Audi is named Shelley in honor of Michèle Mouton, the Audi rally driver who was the first woman to conquer Pikes Peak. Shelley is a 2010 TTS, featuring a fly-by-wire throttle, adaptive cruise control, a semi-automatic DSG gearbox and other gadgetry [13]. The car is made fully autonomous using GPS together with software platforms such as Oracle Java Real-Time System and Oracle Solaris [13]. Shelley uses differential GPS to track its location, even though the error margin was larger on the mountain; wheel-speed sensors and an accelerometer measure its velocity, while a gyroscope tracks balance and direction [13].
Figure 14: Shelley [13]
H. Google Car – The Wonder
Google's autonomous car, a modified Toyota Prius hybrid shown in Fig. 15, has successfully covered 140,000 miles with only occasional human intervention since hitting the road in 2010 [8]. Sebastian Thrun led the Google driverless car program. The car successfully navigated San Francisco's Lombard Street, which has eight hairpin turns in one block. Google believes the technology will keep improving until such cars are safe, congestion-free, and lower in emissions.
The heart of Google's car system is a laser range finder mounted on the roof of the car. In addition, the car carries other sensors: four radars on the front and rear bumpers, and a camera near the rear-view mirror that detects traffic lights. With the help of GPS, a wheel encoder and an inertial measurement unit, the Google car can determine the vehicle's location and keep track of its movements [11].
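As a rough sketch of the odometry part of this idea (the speeds, yaw rate and 100 Hz update rate are invented; a real system fuses the result with GPS and the laser map), dead reckoning from a wheel encoder and a gyro looks like this:

```python
import math

def dead_reckon(x, y, heading_rad, wheel_speed_mps, yaw_rate_rps, dt):
    """Advance a 2-D pose estimate one time step from odometry and gyro."""
    heading_rad += yaw_rate_rps * dt                 # gyro updates heading
    x += wheel_speed_mps * math.cos(heading_rad) * dt
    y += wheel_speed_mps * math.sin(heading_rad) * dt
    return x, y, heading_rad

pose = (0.0, 0.0, 0.0)                   # start at the origin, facing east
for _ in range(100):                     # 1 s of driving at 100 Hz
    pose = dead_reckon(*pose, wheel_speed_mps=10.0,
                       yaw_rate_rps=0.1, dt=0.01)
print(pose)   # roughly 10 m travelled along a gentle left curve
```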
The laser range finder produces a detailed 3D map of the entire environment. The car's position is determined using data from Google Street View combined with data from the cameras, LIDAR and radar. The car fuses the laser measurements with high-resolution maps of the world to produce data models that allow it to drive itself, avoiding obstacles and respecting traffic laws [11].
Figure 15: Toyota Prius hybrid [11]
IV. Let's Wrap Up
The walkthrough above has taken us through different experiments in on-road autonomous vehicle research, and it is really interesting to see the step-by-step advances. Researchers from all over the world have put in great effort and obtained good results. Stanford, together with Audi, introduced Shelley, a car that can race up the Pikes Peak course. DARPA's Grand Challenges gave us Sandstorm, Stanley, and Boss, three precious stones in the research of autonomous vehicles on the road. The vehicles used in the VisLab Intercontinental Autonomous Challenge travelled almost 16,000 kilometers automatically, and Google has introduced a car that drives by itself. Looking at all these advances, we can say that we will reach the destination of putting autonomous vehicles on the road in the near future, and we can look forward to a world with intelligent and effective autonomous vehicles running on the roads soon. In my opinion, automatically ensuring the safety of the people sitting in the vehicle is the most critical task: the vehicles must be intelligent enough to avoid accidents on the road, and I expect research in the near future to address these critical points. Some interesting questions come to mind. Who will become the pioneer in the field of autonomous vehicle exploration? What can we expect in this area in the coming 10-20 years? How will government laws handle accidents caused by autonomous vehicles? One thing is sure: people will no longer need to worry about obtaining driving licences!
References
[1] Les Earnest, “Stanford Cart”, December 2012.
Available: http://www.stanford.edu/~learnest/cart.htm
[2] Todd Jochem, Dean Pomerleau, Bala Kumar, and Jeremy
Armstrong, “PANS: A Portable Navigation Platform”, IEEE
Symposium on Intelligent vehicle, September 25-26, 1995,
Detroit, Michigan.
[3] M. Bertozzi, et al., “The VisLab Intercontinental Autonomous
Challenge”,2010. Available: http://www.ce.unipr.it/people/
cattani/publications-pdf/itswc2010.pdf
[4] Byron Spice, Anne Watzman, “Carnegie Mellon Tartan
Racing Wins $2 Million DARPA Urban Challenge”, 2007.
Available: http://www.cmu.edu/news/archive/2007/November
/nov4_tartanracingwins.shtml
[5] Chuck Squatriglia, “Audi's Robotic Car Climbs Pikes
Peak”,2010. Available: http://www.wired.com/autopia/2010/11
/audis-robotic-car-climbs-pikes-peak/
[6] Dennis Fassbender, “MarVEye and its control system”, 2007.
Available: http://www.unibw.de/lrt8/forschung/
geschichte/marveye
[7] Tom Vanderbilt, “Autonomous Cars through the Ages”, 2012.
Available: http://www.wired.com/autopia/2012/02/
autonomous-vehicle-history/
[8] Red Team, “Sandstorm”, 2004.
Available: http://www.cs.cmu.edu/~red/Red/sandstorm.html
[9] San Jose, “Autonomous Volkswagen Touareg ''Stanley,''
First-Ever Winner of the DARPA Grand Challenge”, June 17,
2008. Available: http://www.bloomberg.com/apps/news
[10] General Motors, “See the Tahoe Boss, A car that literally
drives itself”,2008, Available: http://www.gm.ca/gm/english/
corporate/chevrolet/ton/ne_may08
[11] Erico Guizzo, “How Google's Self-Driving Car Works”,
2011. Available: http://spectrum.ieee.org/automaton
/robotics/artificial-intelligence/how-google-self-drivingcar-works/
[12] General Motors, “General Motors demonstrates
self-driving Chevrolet Tahoe 'Boss' at consumer electronics
show”, 2008. Available: http://www.domain-b.com/companies/
companies_g/General_Motors/20080109_chevrolet_tahoe.html
[13] Chuck Squatriglia, “Audi's Robotic Car Drives better
than you do”, 2010. Available: http://www.wired.com/autopia/
2010/03/audi-autonomous-tts-pikes-peak/
[14] Robokingdom LLC, “Autonomous Cars”, 2010.
Available: http://www.autonomouscars.com/
[15] Sebastian Thrun, et al., “Stanley: The Robot that Won
the DARPA Grand Challenge”, 2006. Available:
http://www-robotics.usc.edu/~maja/teaching/cs584/
papers/thrun-stanley05.pdf
[16] Daimler, “EUREKA Prometheus Project”, 1987.
Available: http://www.fastcompany.com/3010645/
here-come-the-autonomous-cars#2
[17] Alex Forrest and Mustafa Konca, “Autonomous Cars
and Society”, 2007. Available: http://www.wpi.edu/Pubs/
E-project/Available/E-project-043007-205701/
unrestricted/IQPOVP06B1.pdf
[18] “Self-Driving Cars: The next Revolution”, KMPG report,
2012.
To Be or Not To Be... A Driver
About the Author
Priti Ranadive
Areas of interest
Parallel computing,
OS & RTOS,
Embedded Systems and TRIZ
Pranjali Modak
Areas of Interest
IPR, Patents
I. Background
Human beings have been driving cars since as early as the seventeenth century. Most of us have driven a car from one place to another, and cars have become a convenient and preferred mode of transport for people all over the world. However, the increase in the number of vehicles has also increased the number of accidents and casualties. Some statistics on road accidents from the Association for Safe International Road Travel are listed below:
• Nearly 1.3 million people die in road crashes each year, on average 3,287 deaths a day. An additional 20-50 million are injured or disabled.
• Road traffic crashes rank as the 9th leading cause of death and account for 2.2% of all deaths globally.
• Over 90% of all road fatalities occur in low- and middle-income countries, which have less than half of the world's vehicles.
• Road crashes cost USD 518 billion globally, costing individual countries from 1-2% of their annual GDP.
• Unless action is taken, road traffic injuries are predicted to become the fifth leading cause of death by 2030.
Table 1 shows the percentage breakdown of the different causes of accidents and of the people injured in them.

Table 1: Percentage Break-up
Based on these statistics, it has become very important to come up with solutions for road and vehicle safety. One proposed solution is the use of autonomous vehicles. The current century has produced cars that drive human beings from one place to another. Autonomous vehicles are a hot topic of research, and everyone is trying their best to move closer to bringing fully autonomous vehicles onto the road. In the next couple of decades, we will see autonomous vehicles on the road everywhere.
II. Introduction
Everyone who has driven a car knows what it takes to drive one. Various functions and aspects of the human body are utilized when we drive: we use our brain, our nervous system, our senses, our reflexes, our thoughts and our intuitions. The brain is the CPU that receives messages from all over the body and transmits commands through the nervous system to the proper body parts, which act upon them. The thoughts in our brain are only ours and completely secure, the instructions given by the brain to different body parts are fail-safe, and their execution is instantaneous. Using all of these, we are able to drive well in various traffic conditions and on various terrains.
Human beings are able to multitask while driving a car: we can drive and talk on the phone, drive and listen to music, drive and look around, drive and surf the internet, and so on. While doing all this, we are still able to concentrate on our driving, as our senses and reflexes stay tuned to the vehicle speed, the traffic, the surroundings, the road signs, etc.
For an autonomous vehicle to be successful, it needs to emulate the human system. The autonomous vehicle needs a brain, a nervous system, various senses, thoughts and reflexes, just like the human body, to process information and data from various sources and undertake multiple tasks. It needs an ECU as efficient and intelligent as the human brain, a processing system as strong as the human nervous system, sensors as sharp as the human senses, and data and information as secure as human thoughts, which no one can hack.
When the autonomous vehicle completely emulates the human system, it will begin to think and drive like a human being, without technical limitations, glitches or drawbacks. Moreover, it can even go beyond and overcome certain limitations of the human body, and hence avoid the human errors that currently lead to vehicle accidents.

Fig. 1 illustrates the overall basic system components required for an autonomous vehicle.
Figure 1: Basic System Components
III. Multi-tasking and Faster Processing

As mentioned above, let us look at the multitasking capability of the human brain. Consider a situation the brain handles while driving: it captures the image of what lies in front of the vehicle, then distinguishes pedestrians, vehicles, types of vehicles, their distances from our own vehicle, traffic signs and signals, and so on, all in a very short time. For an autonomous vehicle to do the same, it must capture images of the surroundings and process them as fast as the human brain does.
However, current Advanced Driver Assistance Systems (ADAS) applications involve complex algorithms. The complexity arises from the need to distinguish pedestrians, vehicles, types of vehicles, distances, etc. in a short time. The images captured from a moving vehicle must be processed in real time: before the next frame is captured, the current frame should already have been processed to locate the various objects mentioned above. The real-time processing rate expected of an ADAS application is 20 frames per second (FPS), i.e. the system should process 20 frames every second to be dependable enough to take decisions in real time. With the current complexity of some algorithms, the achieved rate is no more than 5 to 10 FPS. This means there is a need either to change the algorithms to reduce complexity, or to find ways to implement them faster.
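To see the arithmetic of this requirement, the sketch below uses a stand-in processing function (the 120 ms cost is invented) and measures it against the 50 ms per-frame budget that 20 FPS implies:

```python
import time

TARGET_FPS = 20
FRAME_BUDGET_S = 1.0 / TARGET_FPS        # 50 ms available per frame

def process_frame(frame):
    """Stand-in for an ADAS pipeline (lane, vehicle, sign detection)."""
    time.sleep(0.120)                    # pretend the algorithm takes 120 ms

start = time.perf_counter()
process_frame(frame=None)
elapsed = time.perf_counter() - start
print(f"frame took {elapsed*1000:.0f} ms, budget {FRAME_BUDGET_S*1000:.0f} ms,"
      f" ~{1.0/elapsed:.0f} FPS")        # ~8 FPS: too slow, must speed up
```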
ADAS systems have used Digital Signal Processing (DSP) based embedded platforms over the past few years. However, these applications are implemented and optimized for particular hardware to achieve real-time performance, and there is still a need to improve the embedded hardware used for ADAS applications. Very recently, automotive chip manufacturers have brought multicore processors to market. Multicore processors can solve the real-time performance issues in ADAS applications: it becomes possible to implement multiple applications on a single embedded platform, and those applications can process video frames in parallel to achieve real-time performance.
Let us look at a case study undertaken at KPIT Technologies Ltd. that implemented ADAS applications on multicore embedded platforms. In this case study, we implemented the Lane Departure Warning System (LDWS), the Forward Collision Warning System (FCWS) and the Traffic Sign Recognition System (TSRS) on the Renesas R-Car series and the Freescale i.MX6 board. The project used a proprietary tool called YUCCA, a fully automatic code parallelization tool, to parallelize the LDWS application.
The YUCCA tool converts sequential application source code into parallel application source code. It is a static analysis tool that performs dependency analysis on functions, pointers, variables, loops, control statements, etc. All this information is used to partition the code into sections that can execute in parallel, and the tool inserts synchronization for detected dependencies at appropriate places in the source code. The tool can parallelize tasks as well as loops, i.e. data. In the current case study, since the application is based on video and image processing, data parallelization is preferred. Fig. 2 shows the block diagram and the basic features supported by the tool.
Figure 2: YUCCA Automatic Parallelization Tool by KPIT (block diagram: sequential source code passes through fully automated dependency analysis, static analysis with profiling, and task and loop parallelization, and is converted source-to-source into parallelized code that makes optimum use of multicore hardware, with no manual intervention)
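To make the data-parallelization idea concrete, here is a hand-written sketch (not YUCCA's actual output; the strip-based detector is a stand-in) of how a frame loop can be split across worker processes:

```python
from concurrent.futures import ProcessPoolExecutor

def detect_in_strip(strip):
    """Stand-in for per-strip image processing, e.g. lane-marking search."""
    return [pixel for pixel in strip if pixel > 200]   # keep bright pixels

def process_frame_parallel(frame, workers=4):
    """Split a frame into horizontal strips and process them in parallel."""
    step = max(1, len(frame) // workers)
    strips = [frame[i:i + step] for i in range(0, len(frame), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(detect_in_strip, strips)
    return [hit for strip_hits in results for hit in strip_hits]

if __name__ == "__main__":              # required when using process pools
    fake_frame = list(range(256)) * 4   # toy stand-in for image rows
    print(len(process_frame_parallel(fake_frame)))   # 220 bright pixels
```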
Both the embedded platforms mentioned above have quad-core processors. The results achieved are shown in Table 2.
Table 2: Results of parallelizing the LDWS application on embedded platforms

Platform               Before Parallelization (FPS)   After Parallelization (FPS)
Freescale i.MX6        14                             42
Renesas R-Car H1       14                             43
From the results achieved in the case study project, it can be seen that the performance of ADAS applications can be substantially improved and real-time performance achieved. Our next experiments, currently in progress, aim to port multiple ADAS applications onto a single embedded platform. Not all ADAS applications can be ported to a single platform; however, we can choose applications that use the same set of video frames for processing. For example, the LDWS and FCWS applications both need a forward-looking camera, so they can be clubbed together on a single platform. Additionally, the analyses of a frame that conclude whether there is a vehicle or a lane can be done in parallel, since there are no dependencies among these algorithms.

From the above case study, it can be seen that multicore processors and parallel processing of applications can have a huge positive impact on the performance of an autonomous vehicle. Just as the brain processes and analyzes multiple facts in parallel, autonomous vehicles will be able to process and analyze in parallel.

If your autonomous car sees an accident and knows that a person is trapped in the car, would it stop to help like a human being would?
IV. Optimized and Redundant Sensors
A human being uses multiple senses while driving a car. With our eyes we see the surrounding vehicles, road, buildings, pedestrians, objects, lanes, traffic signs, etc. With our ears, we hear vehicle sounds, music, conversations, and various random noises in the surroundings. Touch is used for identification and authentication in biometrics. With our nose, we can smell different fragrances and odors in and out of the vehicle: we can detect if our car's emissions are high, notice fragrances in the car, detect a leakage, and so on. With our intuition, we can at times predict whether someone is going to brake suddenly, take a turn, or overtake.
An autonomous car needs sensors as sharp as the human senses. The various sensors used in the car, such as cameras, LIDAR, RADAR, ultrasonic sensors, etc., provide vision to the car. These sensors should emulate the human eye, providing a panoramic view along with the distances of objects from the vehicle, and should adapt to provide vision in varied conditions: day, night, low light, bright light, low visibility, and so on. The sound sensors in the car should operate over multiple ranges and detect sounds from multiple sources at different distances from the vehicle. The vehicle should correctly identify and authenticate a person using biometric features, irrespective of any transformation in the person's physical appearance. The smell sensors in the car should identify various fragrances and odors like the human nose. An autonomous car with reflexes and intuition like the human mind can react instantly in emergencies, and hence improve safety.
The human brain selectively processes all the data received from the various senses and takes the necessary action. For example, while driving in the city, our brain tells us to focus on pedestrians and nearby vehicles rather than other objects, and to tune in to important sounds, such as vehicle horns or unusual noises from the vehicle, rather than random background noise. Similarly, an autonomous vehicle should have the intelligence to selectively process the data gathered from its various sensors and take decisions.
Consider a situation where the driver is engaged in conversation with a co-passenger and his eyes are off the road. While talking, he hears a screeching braking sound nearby, and even though he is not looking at the scene, he reflexively applies the brakes. This is an interesting feat of the human brain: if one sense is diverted, another sense takes over and helps the brain make a decision. In a similar situation, an autonomous vehicle should have a redundancy feature, wherein if one sensor fails, the data from the other sensors is used for decision making.
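A toy sketch of such a fallback rule (the sensor names, priority order, and failure model are invented for illustration) could look like this:

```python
def best_available_distance(readings: dict) -> tuple:
    """Pick the obstacle distance from the highest-priority healthy sensor."""
    priority = ["camera", "lidar", "radar", "ultrasonic"]
    for sensor in priority:
        value = readings.get(sensor)
        if value is not None:            # None models a failed/blinded sensor
            return sensor, value
    raise RuntimeError("all sensors failed: enter safe stop")

# Camera blinded by glare; the system falls back to LIDAR.
print(best_available_distance({"camera": None, "lidar": 14.2, "radar": 14.5}))
# ('lidar', 14.2)
```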
Cartoon: "Thank God! I don't have to tolerate any more driving instructions from my wife." / "Oh no! I cannot give driving instructions to this car."
In one case study taken up at KPIT Technologies, we developed an automatic parking assist system on a miniature car ("Lab on Wheels") based on ultrasonic sensors [5]. Automatic parking assist is quickly moving from being just an add-on to becoming a necessity. The motivation for automating parking is to help the user park the car smartly with less effort: finding a safe gap to park in, detecting obstacles, and executing the parking maneuver itself are complicated and tedious jobs for a driver. This system helps the user park in an available slot with minimum effort, and enhances safety while parking the car.
We have developed a working solution of such a parking application, which guides the user in parking a car in a real-time environment. For this we used a scaled car with a fully working steering system and an electric drive train. Ultrasonic sensors and a position encoder were installed on this prototype, and an embedded system was designed to acquire signals from these sensors and issue the necessary commands to the actuators.
Figure 3: Automatic Parking – KPIT Case Study
The system collects data from the sensors mounted on the chassis of the car and accordingly finds a proper slot in the parking area in which to park the car efficiently.
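A simplified version of the slot-search logic (the gap-length threshold, clearance value, and sensor interface are invented for illustration) scans the lateral ultrasonic range against the travelled distance:

```python
MIN_SLOT_LENGTH_M = 5.0     # gap needed for this car (invented value)
MIN_SLOT_DEPTH_M = 1.8      # lateral clearance that counts as "empty"

def find_parking_slot(samples):
    """samples: list of (odometer_m, lateral_range_m) while driving past.
    Returns (slot_start_m, slot_end_m) of the first big-enough gap."""
    gap_start = None
    for odo, lateral in samples:
        if lateral >= MIN_SLOT_DEPTH_M:          # open space beside us
            if gap_start is None:
                gap_start = odo
            elif odo - gap_start >= MIN_SLOT_LENGTH_M:
                return gap_start, odo
        else:                                    # parked car beside us
            gap_start = None
    return None

drive_by = [(0.0, 0.4), (2.0, 0.5), (3.0, 2.1), (5.0, 2.2),
            (8.5, 2.3), (9.0, 0.5)]
print(find_parking_slot(drive_by))   # (3.0, 8.5): a 5.5 m gap
```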
V. Conclusion
In this article, we have tried to analyze the features required for an autonomous vehicle to emulate the human brain. This emulation should include multitasking, faster and selective processing, and redundancy. Once the autonomous vehicle is advanced enough to emulate the human system, it could be improved further to overcome certain human limitations, and hence avoid the errors that currently cause vehicle accidents. Such an advanced autonomous car will let you take a break from driving and, for a change, it will drive you to your work.

It would be interesting to find out how many of us would trust an advanced autonomous car to drive us to work, and how many would simply trust a friend to drop us there.
References:
[1] Vinay Vaidya, Priti Ranadive, Sudhakar Sah,
“Method and System for Speeding Execution of
Software Code”, PCT/IN2009/000697
2513/MUM/2008, December 2008.
[2] Aditi Athavale, Priti Ranadive, M.N. Babu, Prasad
Pawar, Sudhakar Sah, Vinay Vaidya and Chaitanya
Rajguru, “Automatic sequential to parallel code
conversion: the S2P tool and performance analysis”,
Journal of Computing, Vol. 1, No.4, 2012.
[3] Association of Safe International Road Travel,
Annual global road crash statistics. Available:
http://www.asirt.org/KnowBeforeYouGo/RoadSafety
Facts/RoadCrashStatistics/tabid/213/Default.aspx
[4] Statistic Brain, Car crash fatality statistics.
Available: http://www.
statisticbrain.com/car-crash-fatalitystatistics-2/
[5] Krishnan Kutty Kongasary, Vijay Soni, Vinay
Govind Vaidya, “Sensor System for Vehicle Safety”,
EP2319031 A1, 2008.
CROSSWORD

ACROSS
4. He led the development of Google's self-driving car
6. Process of unauthorized modification
7. Ability to see/ perceive through eyes
8. The act of making or enacting laws
9. She opened the box filled with evils
10. Without Driver
DOWN
1. Winner of the DARPA Grand Challenge 2005
2. The process of accurately ascertaining one's position and planning and following a route
3. Combination of mechanical engineering, electrical engineering, control engineering and computer engineering
5. Winner of the DARPA Urban Challenge 2007

Please send your answers to [email protected]
Scientist Profile
Sebastian Thrun
"Build it. Break it. Improve it." This Universal Law of Invention comes from an ALVA Award winner and the lead inventor behind the Google Self-Driving Car, Google Glass and the education start-up Udacity: Sebastian Thrun. He is currently a VP and Fellow at Google, and a part-time Research Professor of Computer Science at Stanford University.
Sebastian Thrun was born on 14th May, 1967 in Solingen, Germany, the son of Winfried and Kristin Thrun. In 1988, he received his bachelor's degree (B.Sc.) in computer science, economics, and medicine from the University of Hildesheim. He received his master's degree (M.Sc.) and PhD, in 1993 and 1995 respectively, in computer science and statistics from the University of Bonn.

In 1994, he started the University of Bonn's Rhino project together with his doctoral thesis advisor, Armin B. Cremers. His Ph.D. thesis was titled "Explanation-Based Neural Network (EBNN) Learning: A Lifelong Learning Approach"; the EBNN learning algorithm approaches meta-level problems by learning a theory of the domain.

In 1995, he started his career in the Computer Science Department at Carnegie Mellon University (CMU) as a research computer scientist. In 1997, Thrun developed the world's first robotic tour guide, along with his colleagues Wolfram Burgard and Dieter Fox. He became an assistant professor and co-director of computer science, robotics, and automated learning and discovery at CMU in 1998. Later that year, the successor robot named "Minerva" was set up in the Smithsonian's National Museum of American History in Washington, where it guided tens of thousands of visitors over a few weeks of deployment. While at CMU, he introduced a new Master's Program in Automated Learning and Discovery, which later became a Ph.D. program. In 2001, he was promoted to associate professor at CMU, and in 2002 he contributed to a project to develop mine-mapping robots, along with William L. Whittaker and Scott Thayer, two research professors at CMU.

In 2003, Thrun joined Stanford University as an associate professor of computer science and electrical engineering, and in 2004 he was appointed director of the Stanford Artificial Intelligence Laboratory (SAIL). At Stanford, he got involved in the development of the robot "Stanley" and led the Stanford Racing Team, which in 2005 won the DARPA Grand Challenge and the US$2 million prize sponsored by the United States Department of Defense to support the development of technologies needed to create the first fully autonomous vehicles. In the 2007 DARPA Urban Challenge, Thrun's team's robot "Junior" received the runner-up prize. From 2007 to 2011, Thrun worked as a professor of computer science and electrical engineering at Stanford, and from 2011 as a research professor of computer science.

From 2007 to 2011, he was also linked with Google during vacations, along with a few Stanford students. At Google, Thrun co-invented Google Street View, which won a "Best 100 Products of 2008" award. This technology, included in Google Maps and Google Earth, provides panoramic views. It was launched on May 25, 2007, in a number of cities in the United States, and the service is being extended to remaining areas, covering cities and rural regions worldwide.

On 1st April, 2011, Thrun resigned from Stanford to join Google as a Google Fellow. At Google, he started working on the development of the Google driverless car system, a project initiated by Google to develop technology for autonomous cars. These driverless cars use video cameras, radar sensors and a laser range finder to gather traffic information and navigate the road ahead. About the driverless car, Thrun says: "This is an opportunity to fix a really colossal, big problem for society. Robot drivers don't drink, get distracted, or fall asleep behind the wheel." According to Thrun, these driverless smart vehicles will drastically reduce road accidents, fuel consumption, and emissions.

On January 23, 2012, Thrun founded an online private educational organization, Udacity, along with David Stavens and Mike Sokolsky, offering massive open online courses. According to Thrun, the name Udacity comes from the company's objective to be "audacious for you, the student".

Sebastian Thrun has received well-known awards and recognitions for his work in various fields. In 2013, he received the ALVA Award from 99U, given to the next great inventor who will not only imagine incredible ideas but also implement them. In 2012, he was named "Global Thinker #4" in Foreign Policy's list of the top 100 Global Thinkers, appeared in Vanity Fair's The Next Establishment and the Top 100 Scientists on Twitter, and received the Initiative of the Year Award from Chip. In 2011, Fast Company called him the fifth most creative person in business in the world. In 2010, Time Magazine included his inventions in its list of the 50 best inventions, and in 2008 his robot was titled the best robot of all time by Wired Magazine. In 2005, he was named one of the "Brilliant 10" by Popular Science. He has also received an NSF CAREER award from the National Science Foundation, among many other awards and recognitions.

Currently, he is also working on a Google X project known as Google Glass. According to him, Google Glass is a wearable computer with an optical head-mounted display; the glasses can overlay the wearer's vision with digital images, called heads-up displays, and are a supreme solution. "Google X is here to do moonshot-type projects," Thrun said. "Not just shooting to the moon but bringing the moon back to Earth."
Prasad Pawar
Areas of Interest
Parallel computing, OS,
Algorithms, Storage and Network Security
Photo Credit - Vinay Vaidya
[Cover image: vehicle sensors – LIDAR, video camera, ultrasonic sensor, radar]
Seeing Through Sensors
About the Author
Vinuchackravarthy S
Areas of Interest
Machine Vision, Image Processing,
Experimental Solid Mechanics,
and Production Engineering
I. Introduction
Are you still pondering over a car that has an autopilot option to drive and an independent braking system with zero tolerance for road accidents?
Are you thinking of a car like the Batmobile in Batman, which drives itself out of the parking lot on receiving a message like "Please come and pick me up at Gate-4"?
And are you thinking of a system that keeps your car in a particular lane, handles your navigation, and helps you park your car without troubling you?
If so, then please wake up from the dream and see the reality. Artificial intelligence laboratories all over the world have been working on such systems, and some have even demonstrated the dreamed-of autonomous operation in cars. In May 2012, the Google driverless car became the first autonomous car to get a licence, registering the first success story of its kind. Google's unmanned car has proved to drive as well as a skilled driver and had travelled around 50,000 km from its inception to August 2012 without any accident [1]. Similarly, the Chinese military has also claimed to have an autonomous car that was tested for over 100 km [14]. Despite disadvantages like high purchase and maintenance costs, the autonomous car has advantages significant enough to shape the future car market and create tough competition for conventional cars. This raises further questions, like "How does an unmanned car work?" and "Can an autonomous system make driving as reliable as driving by a human?" This article is intended to provide some clarity on the different types of sensors deployed on unmanned vehicles, along with a brief introduction to the autonomous car.
An unmanned vehicle is a vehicle controlled remotely or capable of sensing its environment and navigating on its own. The world's first modern driverless car (63 km/hr) was developed by Mercedes-Benz and
Bundeswehr University Munich in the 1980s, and since then many more advances have been made in robotic car technologies [2]. As of 2013, major companies such as Mercedes-Benz, General Motors, Google, Continental Automotive Systems, Autoliv Inc., Bosch, Nissan, Toyota, and Audi have developed working prototypes of autonomous vehicles and are currently competing to commercialise their models of fully autonomous vehicles. Currently, systems such as Autonomous Cruise Control, In-vehicle Navigation, Blind Spot Monitoring, Automatic Parking and Traffic Sign Recognition have been incorporated into robotic cars and are used significantly. Each of these systems has its own application and utilises different types of sensors to reduce the need for human intervention in autonomous cars.
Autonomous Cruise Control (ACC) generally uses LIDAR or radar to determine the distance to vehicles ahead, and automatically adjusts the speed or enables brake support to maintain a safe distance. The in-vehicle navigation system utilises the Global Positioning System (GPS) to provide up-to-date traffic information and automatically finds the optimum route. The Blind Spot Monitoring (BSM) system uses cameras to check for any impending collision in the blind spots while changing lanes. The Automatic Parking System (APS) uses sensors installed on the front and back bumpers to automatically park the car within the available space. The Traffic Sign Recognition (TSR) system utilises cameras to identify traffic signs on the road and helps the car adjust its speed accordingly. Thus, these sensors act analogous to the eyes and ears of the driver, while the control system of the autonomous car acts like the driver's brain, operating the car's different sub-systems. In other words, there can be no unmanned vehicle without sensors and a control system. Given their importance, a brief introduction to various sensors like camera, LIDAR, radar, ultrasonic and infrared sensors is presented below, along with a very short description of GPS and the carputer, a mobile computer designed to run in cars.
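To make the ACC behaviour just described concrete, here is a minimal sketch in Python of a time-gap controller that turns a measured range into a speed command. It is purely illustrative: the function name, the gain and the 2-second gap are our own assumptions, not any production ACC algorithm.

# Illustrative sketch only (not from the article): a proportional
# controller that adjusts speed to hold a safe time gap, as ACC does
# with LIDAR/radar range measurements. All names are hypothetical.

def acc_speed_command(own_speed_mps: float,
                      range_to_lead_m: float,
                      time_gap_s: float = 2.0,
                      kp: float = 0.5) -> float:
    """Return an adjusted speed command (m/s) from the measured range."""
    desired_gap_m = own_speed_mps * time_gap_s   # safe distance grows with speed
    error_m = range_to_lead_m - desired_gap_m    # positive: too far, may speed up
    return max(0.0, own_speed_mps + kp * error_m / time_gap_s)

if __name__ == "__main__":
    # Travelling at 20 m/s with the lead vehicle only 30 m ahead:
    print(round(acc_speed_command(20.0, 30.0), 1))  # commands a lower speed, 17.5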
Figure 1: A robotic Volkswagen Passat with cameras and other sensors [3]
II. Cameras and stereo vision
A camera is an optoelectronic device used to capture a real 3D scene onto a 2D image plane. While acquiring an image, the continuous real scene is discretised into pixels, and hence the camera provides information about the scene in a discretised format. Cameras are widely used in unmanned vehicles to acquire information about the scene, and the captured data is processed to help the self-driven robotic car. Cameras can replicate the driver's eyes in autonomous cars. With additional cameras, objects in the blind-spot region can be identified while changing lanes (handled by Blind Spot Monitoring) as well as while parking (handled by the Automatic Parking System). Some of the applications of cameras in autonomous cars include detecting lanes, obstacles, neighbouring vehicles and traffic signals. Cameras give autonomous vehicles flexibility because they can be used even in adverse weather: images acquired in a hazy environment can be converted into images similar to those taken in normal conditions using a post-processing technique called "de-weathering". Similarly, there are many post-processing techniques that can convert a corrupted image into one resembling an image taken under normal conditions, such as fog, smoke and raindrop removal, image de-blurring and low-light image enhancement. This ability to post-process the acquired image makes cameras indispensable for unmanned vehicles. A low-cost scientific camera with 400 frames/second and 1.5-megapixel resolution is available for 350 USD in the market [12].
Stereo vision is a computer vision technique that utilises two cameras facing the same direction to produce a 3D view of the real scene just by acquiring images. This technique helps recreate the 3D view of the scene and facilitates the system in recognising objects and analysing motion [4]. Hence, computer vision techniques, along with image processing techniques, are utilised in the autonomous car to generate a 360° view of the actual scene. This recreation of the 3D view enables the car to see everything around it and make decisions about every aspect of driving. As the cost of cameras comes down, most companies interested in manufacturing autonomous cars are trying to adopt vision-based systems for sensing the surroundings. As mentioned, a stereo vision setup requires two cameras mounted on a flat plate or fixture, and hence its cost depends on the cameras chosen.
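The geometry behind stereo vision can be stated in one line: for a rectified camera pair, depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of a matched point. The sketch below illustrates this relation; all numbers in it are invented.

# Hedged illustration of the stereo-vision principle described above:
# depth follows from the disparity between matched pixels in the two
# cameras. Focal length, baseline and disparity values are made up.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulation: Z = f * B / d, valid for rectified image pairs."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# A point shifted 25 px between cameras 0.3 m apart (f = 700 px):
print(round(depth_from_disparity(700.0, 0.3, 25.0), 2))  # 8.4 m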
III. Light Detection and Ranging technology (LIDAR)
LIDAR uses spinning lasers and photoelectric diodes to create a virtual model of its surroundings. It works by illuminating the scene with a laser and detecting the reflected ray using a photoelectric diode. The round-trip time taken by the laser pulse is measured and used to compute the distance of the object from the laser. The same principle is followed while using the spinning lasers and diodes to recreate the 3-D surface of the scene (see Fig. 2). The resolution of the reconstructed scene can be improved by increasing the spinning frequency and the number of lasers. The laser's ability to reflect back from a wide range of objects is due to its high energy and short wavelength. But due to its harmful effects on the human eye and its high cost, this technique is less preferred than camera-based vision. In the market, a LIDAR capable of producing 6,000 points per second can be purchased for $6,000, while one producing 1.3 million data points per second costs about $75,000 [8].
Figure 2: 3-D view of a terrain recreated using Aerial LIDAR [5]
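The time-of-flight principle described above reduces to range = c·t/2, since the pulse travels to the object and back. A minimal illustration, with an invented echo time:

# A minimal sketch of the time-of-flight principle the section describes:
# distance is half the round-trip time multiplied by the speed of light.

C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Laser fires, photodiode catches the echo; range = c * t / 2."""
    return C * round_trip_s / 2.0

# An echo arriving 200 ns after the pulse left:
print(round(lidar_range_m(200e-9), 2))  # ~29.98 m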
IV. Radar
The principle of radar is similar to that of LIDAR, except that it uses another part of the electromagnetic spectrum, i.e., radio waves, and uses the frequency change in the reflected wave caused by the Doppler effect (see Fig. 3). It can be used to determine the position of an object. Radar is effective at long range – ranges at which other electromagnetic wavelengths are strongly attenuated. For example, it can be used in adaptive cruise control to detect obstacles up to 200 m in front of the car. The disadvantage of radar is that the scattering of radio waves depends strongly on the size, shape and material of the target. Smaller objects reflect the original wave at a similar frequency, resulting in an inability to identify the object's position. Similarly, objects made of radar-absorbing material or magnetic substances sometimes hinder radar's efficiency in finding the position of the object. The ability of radar to sense the range, altitude, direction or speed of objects helps the unmanned vehicle visualise the real scene and drive safely. Radars are available in the market at around $30 to $300, depending on the accuracy and additional features [13].
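For a reflecting target, the two-way Doppler shift is Δf = 2·v·f0/c, so a radar can recover relative speed as v = Δf·c/(2·f0). The sketch below works through this relation with assumed numbers for a 77 GHz automotive radar:

# Illustrative only: the Doppler relation behind radar speed measurement.
# For a reflecting target, the two-way shift is df = 2 * v * f0 / c,
# so relative speed can be recovered as v = df * c / (2 * f0).

C = 299_792_458.0  # speed of light, m/s

def relative_speed_mps(carrier_hz: float, doppler_shift_hz: float) -> float:
    """Positive result: target approaching; negative: receding."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A 77 GHz automotive radar observing a 5.1 kHz shift:
print(round(relative_speed_mps(77e9, 5100.0), 2))  # ~9.93 m/s, roughly 36 km/h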
V. Ultrasonic sensor
Unmanned vehicles using ultrasonic sensors navigate much as bats do, using ultrasound at frequencies above the human hearing range. The principle is similar to radar, but it can only provide proximity information for low-speed events, i.e., the sensors are effectively blind if the car moves fast. This kind of sensor can be used for APS, low-speed ACC and automatic door openers (which open the door when the sensor detects a person approaching). Although ultrasonic-sensor technology is more mature and less expensive than radar, car manufacturers are reluctant to have too many ultrasound sensor apertures visible on the car's exterior. At present, ultrasonic sensors are used in conventional cars for assisting reverse parking, and the whole system costs around $70 [9].
Figure 3: Doppler Effect – the frequency of the radio wave increases when the object approaches the receiver and decreases when it moves away [6]
VI. Infrared Sensor
Most objects near room temperature emit thermal radiation, which corresponds to electromagnetic radiation of longer wavelength. This radiation is not visible to the normal human eye, and hence this sensor helps autonomous vehicles avoid distracting neighbouring drivers with visible light. Unlike radar, LIDAR and ultrasonic sensors, the long-wavelength infrared, or Far Infra-Red (FIR), sensor does not radiate any energy for detection; instead, it detects the infrared radiation emitted by an object. Near Infra-Red (NIR) sensors, however, require infrared headlights on the car to illuminate the road ahead. Detection is possible even at night, and hence these sensors are used in unmanned vehicles to provide night vision (see Fig. 4) for smart driving. An infrared camera for automobiles costs around $150 [11]. Thermal cameras use infrared sensors.
Figure 4: Infrared image of a street showing various temperature fields [7]
VII. GPS Navigation device
GPS is a satellite-based navigation system that helps find the position of, and information regarding, locations anywhere on the earth where there is a clear line of sight to four or more GPS satellites. This navigation system provides directions and traffic congestion maps to the autonomous car for deciding the best route to travel in the shortest time. It also helps keep track of your robotic car, find the best place to park, and determine the speed of travel in real time. GPS navigation systems are available from $60 to $250 [10].
VIII. Carputer
The above-mentioned sensors cannot be called "smart sensors" without interfacing them with embedded systems or computers. Autonomous vehicles produce a lot of data and require real-time processing to make travel safe, which demands additional processing capability from the computer used with the sensors. Specially designed mobile computers with high processing capability for use in robotic cars are categorised as carputers. The carputer also provides touch-screen interfaces to take inputs for the autonomous car from the passengers and to display information regarding the journey.
IX. Conclusion
The principle of each of the described sensors might seem simple, but in unmanned vehicles they are used smartly to provide a flawless autonomous system. Among the briefed sensors, optical cameras and infrared cameras have been experimented with extensively on autonomous vehicles because of their low cost, size and versatility. LIDAR and radar have also been experimented with by companies that give more importance to reliability than to the aesthetics of the car. Even though ultrasonic sensors provide reliable measurements, they have been utilised only for small modules in unmanned vehicles. Presently, the advantages of individual sensors are combined by building a hybrid system of different sensors that cross-checks the measurements of each sensor. This hybrid system helps robotic cars provide hassle-free travel. In the future, robotic cars would be designed in such a way that they can interact with fellow vehicles and make prior decisions against the future movement of neighbouring cars. This kind of interactive environment between unmanned vehicles would result in a reduction in the number of accidents. Even though autonomous cars are years away from commercial use, we believe that they can transform society as profoundly as the Internet and mobile phones have.
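As a toy illustration of the cross-checking idea, the sketch below fuses two independent range readings, say from LIDAR and radar, only when they agree, and flags the measurement otherwise. The fusion rule and threshold are our own simplifications, not any production scheme.

# A hedged sketch of the cross-checking described above: accept and
# average two independent range readings only when they are consistent,
# otherwise be conservative and mark the measurement as suspect.

def fuse_ranges(lidar_m: float, radar_m: float,
                max_disagreement_m: float = 1.0):
    """Return (fused_range, trusted) for one obstacle."""
    if abs(lidar_m - radar_m) <= max_disagreement_m:
        return (lidar_m + radar_m) / 2.0, True   # simple average when consistent
    return min(lidar_m, radar_m), False          # nearer reading, flagged suspect

print(fuse_ranges(24.8, 25.2))  # (25.0, True)
print(fuse_ranges(24.8, 40.0))  # (24.8, False): sensors disagree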
References
[1] http://en.wikipedia.org/wiki/Google_driverless_car
[2] http://en.wikipedia.org/wiki/Autonomous_car
[3] http://flickr.com/
[4] D. B. Gennery, "A Stereo Vision System for an Autonomous Vehicle", International Joint Conference on Artificial Intelligence, 1977.
[5] http://www.groupeinfoconsult.com/lidar
[6] http://www.vebidoo.de/
[7] http://www.gizmag.com/
[8] T. Deyle, "Velodyne HDL-64E Laser Rangefinder (LIDAR) Pseudo Disassemble", 2009. Available: http://www.hizook.com/
[9] http://shopping.rediff.com/
[10] http://www.snapdeal.com/
[11] C. Vieider et al., "Low-cost far infrared bolometer camera for automotive use", Proceedings of the International Society for Optics and Photonics, 2010.
[12] http://www.citizensinspace.org/2012/11/low-cost-high-speed-imaging-options/
[13] http://www.amazon.com/
[14] http://www.indianexpress.com/news/chinese-military-tests-unmanned-smart-car/1064353/
[Cover image: ADAS features – blind spot detection, parking assistance, lane departure warning, driver status monitoring, collision warning, pedestrian detection, traffic sign recognition]
Bringing Vision to Life
About the Author
Jitendra Deshpande
Areas of Interest
Image processing,
Algorithm Development,
Machine Vision,
Driver Assist Systems
I. Introduction
In this era of computers, we are engulfed by digital cameras, smart phones, tablets and gaming gadgets. Today, almost every one of us is living with a processor and a bundle of software in our hands. We are connected regardless of geography and get jobs done with a few clicks on our smart devices. In the last 20 years, there has been a significant change in people's routine. People are able to multitask; they can spawn multiple threads and track them very easily. Technology has taken a giant leap and has succeeded in bringing the world closer and making life simpler. Importantly, technology has been able to separate out the tasks that just need our authentication from the tasks that need our personal presence. We are able to save a lot of our time and effort; rather, we are able to utilize our time and effort in a better way. Hence, the new lifestyle has found great acceptance with all of us.
The advancement in technology is helping us
organize better and reduce human errors and
delays. We all want to stay connected almost
all the time. No matter whether we are in office,
at home, away from home, in a public
transport or even while driving our own car, we
don't want to stay away from the network.
Today even our cars are equipped to support connectivity along with infotainment devices. Whether emails, texts, calls or navigation, we can have all of those, with user-friendly interfaces (UIs), while we drive our smart car. Now the question is whether we still want to drive our own car, or whether we just leave it to an automatic pilot to take control and drive us to the destination. Yes, the technology available today is ready to realize a driverless car, and there have been multiple successful attempts to prove the concept. Looking at the spread and speed of the technology, one cannot deny that we are going to witness another revolutionary change, this time in the automotive industry: a driverless car, where no one will be in the driver's seat!
II. Importance of ADAS
ADAS (Advanced Driver Assist Systems) have played a significant role in realizing the birth of an autonomous vehicle. ADAS is meant for alerting and assisting the driver during hazardous situations. In recent years, ADAS has grown rapidly, bringing in multiple sensors such as camera, FIR, NIR, LIDAR, RADAR, ultrasonic etc., and combinations of these sensors, for continuously monitoring the motion of the vehicle as well as the movement of the surrounding objects. ADAS provides important information about the surroundings and alerts the driver to make the right decisions. The safety systems that were once part of premium-segment cars are now becoming part of regular passenger cars. There is a strong push from the NCAPs (New Car Assessment Programs) all over the world to bring safety features to cars. They are playing an important role in encouraging significant safety improvements in new car design. On one hand, they are raising safety awareness among consumers, and on the other hand, they are organizing crash-tests and issuing safety ratings for vehicles. The safety ratings are easily understood by consumers, who can choose a car based on the level of safety it provides. This safety awareness in the consumer's mind has brought OEMs' focus to this area: OEMs have to provide safer vehicles in order to get consumers' attention. Thus, car manufacturing no longer involves only traditional automotive part makers; it has opened the auto world to many silicon makers, sensor manufacturers and technology providers. Everyone has sensed a big opportunity and a large market; whether in an emerging country or a developed one, sooner or later safety will be required by all, and at an affordable price too.
Figure 1: Companies working towards the development of autonomous vehicles (Google, BMW, GM, Mercedes, Audi, Volkswagen)
Figure 2: Safety Performance Assessment
Programs by different countries
ADAS started by emulating the driver's behavior – what a driver would see as danger, during day and night, on turns and slopes, at high and low speeds, while changing lanes and while parking the car. For virtually every condition in which the driver may need attention, there exists an ADAS feature to assist. Many such systems are available as aftermarket solutions or in new cars.
Figure 3: Commonly used sensors in ADAS (radar, LIDAR/laser, NIR/FIR, optical, ultrasonic)
Figure 5: Typical flow of operation for ADAS (sensing → detection → decision → control signal)
Figure 4: Advanced Driver Assist Systems – Adaptive Front Light System, Automatic High Beam, High Beam Assist, Night Vision Enhancement, Adaptive Cruise Control, Lane Keep Assist, Automatic Parking, 3-D Surround View, Blind Spot Monitoring, Driver Status Monitoring System, Forward Collision Warning, Pedestrian Detection System, Intersection Collision Warning, Lane Departure Warning, Reversing Collision Avoidance, Traffic Sign Recognition
Existing ADAS use different sensors, and each one has its pros and cons. Of these, cameras are perceived as the most popular and reasonably reliable sensors for building ADAS features, the reason being that cameras can see and recognize objects. There are some specific tasks that only a camera can do, such as detecting lanes, reading traffic signs, and classifying a vehicle versus a pedestrian. Whether it is a vehicle, pedestrian, traffic sign or lane, a camera can detect and classify it. Other sensors can detect some of these objects, often more precisely in terms of distance and consistency, but cannot recognize them as a particular type. Thus, with the fusion of the camera with other sensors, most driving scenarios can be sensed and relevant information provided to the driver.
So far, what we have talked about in ADAS was 'sensing'; the second important thing in driving is 'control'. Based on the alerts received from the sensors, the driver controls the longitudinal and lateral movement of the car. He performs different operations such as decelerating the engine, applying the brakes and controlling the steering. However, the control depends entirely on his personal judgment of how much deceleration is required, how much braking to apply and how far the steering wheel should be turned. This does not save the driver from accidents every time, as the decision made by the driver to control the vehicle is based on his experience, reflexes, mental state and the type of vehicle, brakes and engine. The reaction from the driver may not be appropriate, and it varies from time to time and from driver to driver.
ADAS extends its support to controlling the vehicle by reducing human intervention in case of an emergency. Systems like AEB (Autonomous Emergency Braking), ACC (Adaptive Cruise Control), and LKA (Lane Keep Assist) are safety-critical systems that take control of the vehicle. This extension of ADAS has, to a certain extent, been successful in reducing fatalities. Control is crucial and is designed to avoid collisions or reduce the impact of inevitable collisions.
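As an illustration of how such a safety-critical control decision can be made, the sketch below applies the classic time-to-collision (TTC) test often associated with AEB-style systems: brake automatically when range divided by closing speed falls below a threshold. The threshold and names are invented for this example, not taken from any production system.

# Not from the article: a time-to-collision (TTC) check of the kind
# safety-critical systems such as AEB rely on. Threshold is invented.

def should_auto_brake(range_m: float, closing_speed_mps: float,
                      ttc_threshold_s: float = 1.5) -> bool:
    """Trigger autonomous braking when predicted impact is imminent."""
    if closing_speed_mps <= 0:          # opening gap: no collision course
        return False
    return (range_m / closing_speed_mps) < ttc_threshold_s

print(should_auto_brake(20.0, 15.0))  # True: impact in ~1.3 s
print(should_auto_brake(60.0, 15.0))  # False: 4 s to react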
III. Towards an Autonomous Vehicle
ADAS, with the coordination of sensors and the control mechanism, has contributed to providing eyes and a brain to the car: an aid that can sense the situation and react by controlling brakes, powertrain, chassis and infotainment. This amazing coordination has paved the way to realizing a car without a driver, and gives us a strong belief that not only in hazardous situations but also under normal circumstances a car can be operated automatically and does not need a driver. However, making a fully autonomous vehicle poses more challenges than providing assistance to the driver.
Nissan has recently announced a plan to launch its autonomous vehicle by 2020. Steve Yaeger, a Nissan spokesman, also stated that "providing assistance when a driver fails to react is a technical challenge, but developing a foolproof artificial intelligence system that can make all driving decisions is far more complex" [1]. This clearly indicates that the transition from ADAS to an autonomous car is not going to be an easy one. However, ADAS has given the automotive world a technology that is leading us to the autonomous vehicle.
A report from Navigant Research has predicted "the first autonomous car sales to take place in 2020 and growing to over 95 million vehicles some fifteen years later, representing around three quarters of all light vehicle sales in 2035" [2].
An autonomous vehicle uses information coming from cameras, infrared, LIDAR, RADAR, other vehicle sensors and global positioning sensors to maneuver the vehicle on the road. The technology developed in making ADAS is, directly or indirectly, contributing to the development of the driverless car, and importantly, further development of ADAS will create an autonomous environment.
IV. Work done at KPIT Technologies
Vision systems under development:
• FCW – Forward Collision Warning
• LDWS – Lane Departure Warning System
• TSR – Traffic / Road Sign Recognition
Successfully integrated complex systems:
• ACC – Adaptive Cruise Control
• LKA – Lane Keep Assist System
Successfully developed and delivered complex vision systems:
• NVPD – Night Vision with Pedestrian Detection
• ABC – Advanced Beam Control
KPIT, as a technology provider, is making efforts to develop the technology required for ADAS. In the last 5 years, KPIT has developed multiple ADAS features, published 28 technical papers and filed for 9 patents. Research at KPIT has advanced to deal with multiple hurdles such as sensor variability, real-life challenges, and performance issues. It has led to the development of new technology, sensor know-how, object detection algorithms, hardware optimization, control algorithms, multicore expertise, critical test methodologies and vehicle integration.
Figure 7: ADAS development at KPIT – camera-based solutions
KPIT has set up a vehicle that is used to collect data in India, and has also collected test data in other countries. This data is being used to carry out lab tests of the algorithms and to understand and handle peculiar use cases. A dedicated team of engineers has successfully handled multiple challenges in implementing these systems on a passenger car in a real-time environment, and is working towards making these systems operational under different lighting, weather, road and traffic conditions. KPIT's advanced research team is also keeping an eye on future challenges and has developed technologies for raindrop removal, video stabilization, noise handling, image enhancement and a multicore migration tool.
Figure 6: Major Challenges while developing
Camera Based Driver Assist System
Features such as Pedestrian Detection, Forward Collision Warning, Lane Departure Warning, Blind Spot Monitoring, Traffic Sign Recognition, Advanced Beam Control and Driver Status Monitoring constitute a major portion of a complete ADAS system. Our focus has been on the development of all these features. These algorithms are configurable for different vision sensors and are independent of the hardware platform. Different countries have different traffic patterns, road and lighting conditions. Therefore, rigorous testing under multiple scenarios and tuning of the algorithms is very important to ensure their stable behavior.
KPIT has also developed configurable software algorithms for vehicle control. These algorithms are compliant with AUTOSAR and are used in ACC (Adaptive Cruise Control), LKA (Lane Keep Assist) and BSD (Blind Spot Detection). The architecture and verification strategies of these algorithms comply with the ISO 26262 Functional Safety standard and also support future sensor fusion applications. Leveraging its spread and success in automotive, KPIT is also providing integrated solutions of ADAS with infotainment, chassis and brakes.
V. Patents filed by KPIT
Keeping its focus on research, innovation and frugal engineering, KPIT has filed a number of patents and publications in this area.
Table 1: List of patents filed by KPIT on ADAS
1. Method and System for Pedestrian Detection using Wigner Distribution
2. Method and System for Image Enhancement using Wigner Distribution
3. A System for Real-Time Image Correction for Curvilinear Surfaces
4. Pedestrian Detection and Tracking System
5. An Image Enhancement and Pedestrian Detection System
6. A System for Detecting, Locating and Tracking a Vehicle
7. System and Method for Depth Imaging
8. A System and Method for Performance Characterization
9. Straight Line Detection Apparatus and Method
KPIT has gained valuable experience while developing these systems from concept level to production level, and has proved to be the right partner for many OEMs and Tier-1s in the area of ADAS.
VI. Into the future
The journey into the world of the autonomous vehicle through ADAS will continue to challenge the industry, transport and safety administrative bodies, researchers and engineers in many ways. It will be interesting to see how it attracts consumers and shareholders. From the fusion of sensors to the control of multiple ECUs, from the development of platforms to the optimization of various critical resources, the technology is advancing very fast. Many technology providers are in the process of developing and expanding their ADAS technologies for a driverless car. The media is also upbeat about the technology. Whether it is news from Google about the launch of robo-taxis [3] or the announcement of Singapore's first electric autonomous vehicle, NAVIA [4], we have seen glimpses of future transportation. ADAS will continue to grow and will act as a stepping stone for future cars. Very soon we will see Advanced 'Driver' Assist Systems no longer assisting drivers, but driverless cars!
References
[1] Paul Stenquist, "Nissan Announces Plans to Release Driverless Cars by 2020", August 29, 2013. Available: http://wheels.blogs.nytimes.com/2013/08/29/nissan-announces-plans-to-release-driverless-cars-by-2020/?_r=0
[2] http://www.navigantresearch.com/newsroom/autonomous-vehicles-will-surpass-95-million-in-annual-sales-by-2035
[3] http://www.indiatimes.com/boyz-toyz/cars-and-bikes/a-fleet-of-robotaxis-to-drive-you-into-the-future-97607.html
[4] http://www.asianscientist.com/tech-pharma/ntu-jtc-navia-first-electric-autonomous-vehicle-2013/
[Cover image: drive-by-wire components – control motor and sensor for steering, brake and parking brake systems, automated gear shifter, isolation valves (brake hydraulics)]
Drive-By-Wire : A KPIT Case Study
About the Author
Cdr (retd) Vinode Singh Ujlain
Areas of Interest
Systems Engineering,
Applied Systems R&D,
Open Source (Php / MySql)
I. Introduction
Future cars may feel like some kind of video game: if we have to accelerate, apply brakes, or steer the car, all of it could be done through a joystick. Drive-By-Wire (DBW) will make this a reality. DBW is a technology that depends on electronics to perform steering, braking and acceleration. It is similar to the Fly-By-Wire (FBW) technology [1] that has been used in airplanes since the 1960s. This article is based on work undertaken by KPIT for a client, where the requirement was to convert an existing ground vehicle into a remotely controlled vehicle. In any remotely controlled ground vehicle, there are two independent but tightly coupled systems: embedded logic and Drive-By-Wire (DBW) electronics. The embedded logic delves into the realm of Artificial Intelligence (AI), which is used for sensing obstructions and terrain and for identifying and marking waypoints. The DBW electronics accepts commands from the embedded logic and interacts with various actuators, which in turn drive the peripherals within the vehicle and render positional/status feedback for closed-loop control. This article pertains specifically to the drive-by-wire electronics. The work can be considered a precursor of future technology, where DBW [2] may replace the mechanical and hydraulic systems in today's cars with electro-mechanical and electro-hydraulic actuators.
II. Why DBW?
Components in cars such as the brake booster, steering column, steering shaft, rack-and-pinion gear, and various hydraulic lines ensure good driving conditions. However, these components increase the weight of the car significantly, and they can degrade with time as well. That is why we need DBW!
III. Types of DBW
DBW systems are of three types: throttle-by-wire (or accelerate-by-wire), brake-by-wire, and steer-by-wire [3].
Throttle-by-wire: This was the first DBW system. It exploits the pedal unit and the engine management system. Sensors in the pedal unit measure the extent to which the accelerator is pressed and send this to the engine management system. The engine management system, in turn, computes the amount of fuel required to achieve the desired acceleration and sends that information to an actuator, which realizes the desired mechanical motion.
Brake-by-wire: Today's cars use hydraulic and mechanical linkages to transfer braking force. By contrast, brake-by-wire systems use electric motors to apply the braking force.
Steer-by-wire: In current cars, the motion of the steering wheel is transferred to the wheels of the vehicle through several hydraulic and mechanical linkages. In steer-by-wire systems, sensors detect the motion of the steering wheel and send the information to a microprocessor. The microprocessor, in turn, sends commands to actuators to turn the wheels accordingly.
Figure 1: Types of DBW systems [4]
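As a minimal illustration of the steer-by-wire signal path, the sketch below maps a steering-wheel sensor reading to a bounded road-wheel command of the kind a microprocessor would send to the actuators. The steering ratio and limit are assumptions, not figures from the project.

# Illustrative sketch of the steer-by-wire path described above: a
# sensor reading of the steering wheel is converted by software into
# an actuator command. Scaling factors are hypothetical.

def wheel_angle_command(steering_wheel_deg: float,
                        steering_ratio: float = 15.0,
                        max_road_wheel_deg: float = 35.0) -> float:
    """Map steering-wheel rotation to a bounded road-wheel angle."""
    target = steering_wheel_deg / steering_ratio
    return max(-max_road_wheel_deg, min(max_road_wheel_deg, target))

# A 90-degree turn of the wheel asks for a 6-degree road-wheel angle:
print(wheel_angle_command(90.0))  # 6.0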
IV. Existing DBW systems
DBW has been quoted as the "technology of the future" for over a decade now, and Nissan has announced that it will launch a DBW application in its Infiniti Q50 sedan. TRW Automotive built its first steer-by-wire concept car 11 years ago, and General Motors manufactured a drive-by-wire concept vehicle in 2003. BMW, Mercedes-Benz, Land Rover, Toyota and Volkswagen have also implemented DBW [5, 6], and Renault has implemented steer-by-wire. DBW systems currently exist in equipment such as tractors and forklifts. However, there is a lot of redundancy [7] in current DBW systems, and they are expensive too. Though DBW systems can decrease the weight of the car and increase operational accuracy, it is hard to convince the driver that the car is safe, because software can fail irrespective of how many times it has been tested [8]. Thus, current DBW faces a lot of challenges.
V. Flow and architecture of the system
In the case under reference, we undertook the development of a remotely controlled ground vehicle in the following steps:
(a) Automate all actuators inside the vehicle, incorporate the requisite electronics for positional/status feedback, and provide minimal embedded software inside the DBW electronics.
(b) Control the DBW hardware through a human operator who relies on a camera feed presenting situational awareness around the vehicle.
(c) Replace the human operator with the requisite artificial intelligence to take over complete vehicle operations.
Figure 2 depicts the block layout of the overall system architecture. The system consisted of the following:
(a) The vehicle, with two cameras (a front and a rear camera) and a wireless link of ~10 km range mounted on it; the DBW electronics is housed inside the vehicle.
(b) A remote cockpit with identical vehicle controls and a display that presented remote situational awareness as a live video feed from the controlled vehicle.
VI. Drive-by-Wire Electronics
As mentioned earlier, this article pertains specifically to the drive-by-wire electronics. Here, all vehicle actuators were augmented with electro-mechanical or electro-hydraulic actuators in order to replace the in-vehicle driver actions, and sensors were incorporated to relay feedback. Adequate design care was taken, with minimal intrusion into the vehicle, to ensure that it could continue to operate in both situations: with the driver in control, and under remote control. The design uses DBW electronics controlling all actuators and relaying feedback to the remote control. Towards this, the following features were added to the existing vehicle.
Table 1: Features added to the existing ground vehicle
Ignition – Remote start/stop ignition
Throttle – Remote speed control
Wheel direction – Left/right wheel
Wheel magnitude – Steering control up to 2.8 turns (~1000 degrees) either side
Gear – Gear change possible based on remote command
Brake – Full range of brake control
Local/Remote – Command to tell the resident electronics inside the vehicle to take over control
Emergency – In an emergency, gradually reduce throttle and apply brake
Parking brake – Apply parking brake
Remote Horn – Pedestrians aren't used to a UGV
The complete drive-by-wire system is mounted on a single 3U rack [9], which comprises seven PCBs. Communication between the wireless link and this electronics is over RS-232 [10], a widely used serial communication standard. The wireless system provides a 15-byte frame in which the various commands for the individual subsystems are incorporated.
Figure 2: Architecture of Overall system
Figure 3: Remote control electronics subsystems
In order to prevent any false command attributable to data corruption, this frame has a built-in checksum error-detection feature. The electronics has two separate controllers that share the task of controlling the actuators: the data packet containing the wireless commands is fed simultaneously to both controllers, and each controller reacts to specific byte sequences. Figure 3 depicts all the subsystems.
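The sketch below is a hedged reconstruction, not the project's actual frame layout: a 15-byte frame whose final byte carries a simple additive checksum, with corrupt frames rejected, which is exactly the kind of guard the paragraph above describes.

# A hypothetical 15-byte command frame with an additive checksum in the
# last byte; the field layout is invented, only the frame length and the
# checksum idea come from the article.

FRAME_LEN = 15

def checksum(payload: bytes) -> int:
    """Simple 8-bit additive checksum over the command bytes."""
    return sum(payload) & 0xFF

def parse_frame(frame: bytes):
    """Return the 14 command bytes, or None if the frame is corrupt."""
    if len(frame) != FRAME_LEN:
        return None
    payload, received = frame[:-1], frame[-1]
    return payload if checksum(payload) == received else None

good = bytes(range(14)) + bytes([checksum(bytes(range(14)))])
print(parse_frame(good) is not None)              # True: valid frame accepted
print(parse_frame(good[:-1] + b"\x00") is None)   # True: corrupt frame rejected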
A. Remote Ignition/Vehicle Control: This subsystem automates the ignition controls in the vehicle. To crank the engine, the crank signal needs to be held logical high (~5 V) and then brought to logical low (~0 V). This was achieved using a suitable electromechanical relay, with timing control provided by the embedded controller; it serves as a parallel ignition option in addition to normal cranking through turning of the key.
B. Remote Throttle: This subsystem automates throttle operation using an electronic drive circuit. The vehicle implements an electric throttle in which operation of the throttle lever generates two proportional analog electric signals; these signals are input to the engine ECU within the vehicle. The remote throttle circuit generates two similar proportional analog signals, via digital-to-analog conversion, in response to a remote throttle command of 0 to 255 steps.
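As an illustration of this command-to-signal conversion, the sketch below maps the 0-255 remote throttle command to two proportional analog voltages. The voltage ranges and the half-scale second channel are invented placeholders, not the vehicle's real calibration.

# Hypothetical mapping of the 0-255 throttle command to the two
# proportional analog signals the engine ECU expects. The voltage
# ranges here are invented for illustration.

def throttle_signals(command: int):
    """Return (signal1_v, signal2_v) for a command in 0..255."""
    if not 0 <= command <= 255:
        raise ValueError("command out of range")
    fraction = command / 255.0
    sig1 = 0.5 + fraction * (4.5 - 0.5)   # e.g. 0.5 V to 4.5 V full scale
    sig2 = sig1 / 2.0                     # second channel tracks at half scale
    return sig1, sig2

print(throttle_signals(128))  # roughly (2.51, 1.25)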
C. Remote Steering System: This subsystem automates steering operation using a DC motor mounted within the steering column; a section of the shaft was cut to accommodate this axially aligned motor. The DC motor is controlled through an H-bridge (implemented with solid-state relays of suitable capacity), which enables direction control. The desired steering position is achieved through the following sequence:
(a) Based on the current steering position and the command, the corresponding H-bridge arm is enabled to turn the motor right/left for 10 msec.
(b) After the 10 msec run, positional feedback is sampled using the 4-20 mA loop shaft position sensor. Since the sensor is noisy, a number of samples are taken to eliminate the noise. This is repeated, with the motor running, until the steering shaft is within the acceptable positional accuracy defined in the embedded code.
Figure 4: Remote steering system (absolute angle encoder, remote/local selection, motor control circuit, mechanical connection to the existing steering column)
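The pulse-sample-repeat sequence above lends itself to a small simulation. In the sketch below, the plant model, the degrees moved per 10 ms pulse, the sensor noise and the tolerance are all invented; only the control structure follows the sequence just described.

# A toy simulation of the steering sequence: pulse the motor for 10 ms,
# average several noisy position samples, repeat until within tolerance.

import random

DEG_PER_PULSE = 2.0      # assumed motion per 10 ms H-bridge pulse
TOLERANCE_DEG = 1.5      # assumed acceptable positional accuracy

def read_position(true_deg: float, samples: int = 8) -> float:
    """Average several noisy sensor samples to suppress the noise."""
    return sum(true_deg + random.gauss(0, 0.5) for _ in range(samples)) / samples

def steer_to(target_deg: float, position_deg: float = 0.0) -> float:
    while abs(read_position(position_deg) - target_deg) > TOLERANCE_DEG:
        direction = 1.0 if target_deg > position_deg else -1.0
        position_deg += direction * DEG_PER_PULSE   # one 10 ms pulse
    return position_deg

print(round(steer_to(25.0), 1))  # settles within tolerance of 25 degrees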
D. Remote Parking Brake: This subsystem
automates parking brake operation using a
motorized screw linear actuator. The
extension/retraction of this actuator pulls or
releases parking brake cables. The actuator
electric motor is electronically controlled to
achieve park/release operations. A closed
loop control system has been implemented
using limit switches to achieve remotely
commanded parking brake operation.
E. Remote Service Brake: This subsystem automates brake operation using a motorized screw linear actuator, a linear displacement sensor, an additional master cylinder and a vacuum booster. The extension/retraction of this actuator activates the additional master cylinder, which is connected to the existing brake hydraulic lines from the vehicle's master cylinder. The actuator's electric motor is electronically controlled to achieve brake operation, and a closed-loop control system has been implemented using the linear displacement sensor to achieve remotely commanded braking. The system operates the brake in proportion to commands in the form of digital inputs from 0 to 255 (no brake to full brake). The additional vacuum booster receives engine vacuum, and four valves are used to switch the hydraulic circuits between remote and local operation.
Figure 5: Remote service brake system – driver's brake and remote brake hydraulic circuits, with manual brake fluid valves feeding the car brake system
F. Remote Gear Shift: This subsystem automates gear shift operation using a rotary servo actuator tightly coupled with the gear shift lever, so that the gear can be operated both by the driver and remotely. The servo actuator activates the vehicle's gear shift knob directly and has been housed inside the shift lever. The servo actuator is electronically controlled to achieve gear shifts: depending upon the desired gear, a suitable PWM signal is generated to control the position of the DC servo motor.
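A gear-to-pulse-width lookup is one simple way such a servo command could be realized; the sketch below is hypothetical, with invented pulse widths rather than the project's calibration.

# Hypothetical mapping, not the project's calibration: each gear
# position corresponds to a servo PWM pulse width, as in the rotary
# servo actuator described above.

GEAR_PULSE_US = {         # pulse widths are invented placeholders
    "P": 1000, "R": 1250, "N": 1500, "D": 1750, "L": 2000,
}

def gear_to_pwm_us(gear: str) -> int:
    """Return the servo pulse width (microseconds) for a gear command."""
    try:
        return GEAR_PULSE_US[gear]
    except KeyError:
        raise ValueError(f"unknown gear: {gear!r}") from None

print(gear_to_pwm_us("D"))  # 1750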
Figure 6: Remote Gear Shifter
VII. Testing of DBW electronics in the lab
Physically interfacing the DBW electronics with the wireless link and vehicle actuators would have meant a time-consuming interface exercise at the client site. To shorten this exercise, a data packet emulator was designed in-house to simulate inputs to the DBW electronics and monitor the corresponding output signals. This emulator not only helped prove the electronics under lab conditions but also allowed fine-tuning of the embedded algorithms prior to the actual interface with the vehicle. The emulator was also used extensively during integration of the DBW electronics with the vehicle at the client site.
Figure 7: Emulator connected to DBW electronics
VIII. Conclusion
This work aims to replace the mechanical and hydraulic actuators of current cars with corresponding electronic control systems, thereby reducing human intervention. For that, features such as ignition, throttle, wheel direction, wheel magnitude, remote horn and parking brake were added. A checksum error-detection feature was also included to avoid false commands resulting from data corruption. All these features are implemented through the different subsystems mentioned earlier, and the complete system is mounted in a single 3U rack consisting of seven PCBs. From the autonomous vehicle perspective, each type of DBW system has its own advantages: throttle-by-wire can be used for vehicle propulsion, brake-by-wire provides much better stopping distances than current systems, and steer-by-wire frees up a lot of space by eliminating the need for a steering column. Overall, it has been a very challenging and therefore rewarding project.
References
[1] Steve Rousseau, "Nissan Will Put Drive-By-Wire in 2013 Cars", October 17, 2012. Available: http://www.popularmechanics.com/cars/news/auto-blog/nissan-will-put-drive-by-wire-in-2013-cars-13818193
[2] "Drive by Wire", 2013. Available: http://en.wikipedia.org/wiki/Drive_by_wire
[3] John Fuller, "How Drive-by-wire Technology Works". Available: http://auto.howstuffworks.com/car-driving-safety/safety-regulatory-devices/drive-by-wire.htm
[4] Sohel Anwar, Bing Zheng, "Fault Tolerant Control of Drive-By-Wire Systems in Automotive / Combat Ground Vehicles for Improved Performance and Efficiency". Available: http://groups.engin.umd.umich.edu/vi/w5_workshops/sohel.pdf
[5] "Drive by Wire Technology". Available: http://www.team-bhp.com/forum/technical-stuff/63437-drive-wire-technology.html
[6] "What does drive by wire mean with regards to cars?" Available: http://uk.answers.yahoo.com/question/index?qid=20070703090813AAKcICZ
[7] Ian Austen, "Drive by Wire, an Aerospace Solution", March 29, 2013. Available: http://www.nytimes.com/2013/03/31/automobiles/drive-by-wire-an-aerospace-solution.html?_r=0
[8] Jeremy Laukkonen, "What is Drive-By-Wire Technology?" Available: http://cartech.about.com/od/Safety/a/What-Is-Drive-By-Wire-Technology.htm
[9] IBM 3000 VA LCD 3U Rack UPS. Available: http://www-03.ibm.com/systems/x/options/rackandpower/ups3000va/index.html
[10] The RS-232 Standard. Available: http://www.omega.com/techref/pdf/RS-232.pdf
Wired Through Wireless
About the Author
Arun S. Nair
Areas of Interest
In-vehicle Networking
& Embedded Systems
I. Introduction
Traffic accidents are major killers, claiming more lives than many deadly diseases or natural disasters. With efficient traffic management systems, it is possible to reduce accidents to a large extent. Connecting vehicles with their environment improves the existing features of a car with precise information, and thus helps reduce traffic jams and accidents. In addition, with sick or aged people and reckless youth among today's drivers, the need for safer ways of driving is greater than ever. Self-driving intelligent vehicles reduce the possibility of accidents caused by human error. Thus, there is great interest in academia and among automakers in rolling out self-driving vehicles. Self-driving also frees the human from the redundant role of driver.
We have come across many such self-driving or autonomous vehicles lately, e.g., NAVIA, shown in Fig. 1, arguably the first of its kind in Singapore, which will shuttle between Nanyang Technological University (NTU) and JTC Corporation's (JTC) Clean Tech Park [1].
Figure 1: NAVIA – Autonomous Vehicle [1]
Google, a dominant search engine provider, is working on its own version of autonomous vehicles, which has created a great buzz in the market. BMW, Mercedes and Volvo are also actively participating in the development of autonomous vehicles. A few of these manufacturers have already announced partial self-driving features implemented in commercial vehicles, such as traffic-based acceleration and deceleration, pedestrian protection, etc. According to the list given in [9], such partial self-driving features are available in the Lexus LS, Volvo S60, Mercedes S-Class and Infiniti M.
The goal of a self-driving system is to drive without a driver from one location to another in a safe and efficient manner, by dealing with the external environment and the internal conditions of the car. The internal conditions include whether the driver is sleepy or drunk, and the driving condition of the vehicle. In addition, the external environment also plays a vital role in safe driving.
II. Dedicated Short Range Communications (DSRC) Standards
Communication plays an important role in connecting a vehicle to other vehicles or to the environment. Dedicated short-range communications are duplex or simplex, short- to medium-range wireless communication channels specifically designed for automotive use, together with a corresponding set of protocols and standards. There are two categories of DSRC: Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication.
There are various standards under the DSRC standard program, and the emphasis of these standards is on public safety applications. The major standards and bodies include ISO TC204 (WG15 – OSI Layer 7, WG16 – air interface), CEN (Layer 1, Layer 2, Layer 7), ARIB T55 (Japan), and various standards published in North America by ASTM, IEEE, ISO, SAE, AASHTO and ITS America.
In October 1999, the Federal Communications Commission (FCC) in the USA allocated 75 MHz of spectrum in the 5.9 GHz band for DSRC for Intelligent Transportation Systems. In August 2008, the European Telecommunications Standards Institute (ETSI) allocated 30 MHz of spectrum in the 5.9 GHz band for intelligent transport systems (ITS). In Singapore, Europe and Japan, DSRC technology is used for tolling or road-use measurement.
The European standardization organization CEN has developed EN 12253:2004, EN 12795:2002, EN 12834:2002, EN 13372:2004 and EN ISO 14906:2004. Each of these CEN standards covers a layer of the ISO OSI communication stack.
Let us look at the external environment scenarios of a self-driving or connected vehicle, as explained in [7].
A. Forward Obstacle Detection and Avoidance
In this application, traffic information or accident warnings can warn the driver of possible dangers such as obstacles, road hazards or maintenance in progress, by passing that data from vehicle to vehicle, as shown in Fig. 2.
Figure 2: Forward Obstacle Detection and Avoidance – information about an accident or traffic is sent back to following vehicles using DSRC [7]
B. Approaching Emergency Vehicle Warning
Vehicle-to-vehicle communication provides information about an emergency vehicle approaching through traffic, as shown in Fig. 3. This assists in clearing the street for the emergency vehicle, thereby reducing the risk to other vehicles.
Figure 3: Approaching Emergency Vehicle Warning – information about an approaching emergency vehicle is sent ahead through vehicles using DSRC [7]
C. Cooperative Adaptive Cruise Control
When the car approaches a sharp curve, the communication system, complementing the line-of-sight radar, warns the adaptive cruise control system of any slow-moving vehicles just around the turn, as shown in Fig. 4.
Figure 4: Cooperative Adaptive Cruise Control [7]
Self-driving vehicle systems shall monitor adverse weather and hazardous driving conditions. Such a car requires V2I communication to acquire information from weather stations and traffic agencies. It also requires V2V communication to acquire information about the road as experienced by other vehicles, such as a loss of traction due to water or ice.
The various needs and uses of self-driving vehicles, together with a set of useful features, are depicted in Fig. 5.
Figure 5: Self-Driving Requirements [8]
This requires elaborate facilities within the vehicle (the in-vehicle domain), in neighboring vehicles (the ad-hoc domain), and in the roadside equipment and other infrastructure (the infrastructure domain). An application unit and an on-board unit are fitted within the car and the neighboring cars; these constitute the in-vehicle domain. The in-vehicle domain, with specially fitted sensors, antennae and IP-based communication between vehicles, provides the ad-hoc domain, and this setup facilitates car-to-car communication. Similarly, smart toll stations, intelligent gas stations, roadside antennae, the internet with its corresponding servers, and communication technologies including IP-based communication between roadside units or hotspots constitute the infrastructure domain.
Figure 6: Vehicle-to-Vehicle and Vehicle-to-Infrastructure Communication [4]
Thus, intelligent transport systems are driving tremendous research into the needs of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication.
III. DSRC Communication
Challenges
Self-driving technology will reduce accidents through communication as well as direct vehicle control, and will open up a wide range of infotainment possibilities through connectivity. The most important goals of V2V and V2I communication are the transfer of trustworthy and correct information, extremely robust information flow, and maintaining the privacy of users. Hence, autonomous vehicles share the following set of challenges when considering their communication strategy.
Information such as a forward warning, an obstacle alert or a deceleration notice should reach the following car within a stipulated time, otherwise the result could be catastrophic. Self-driving technology therefore has stringent delay requirements, and the protocol should support them.
Information such as traffic management data, GPS-based traffic movement or weather forecasts should flow to the vehicle for a smooth driving experience. Moreover, the information from various sensors and infrastructure needs to be available for processing, whether for cooperative cruise control, GPS-based traffic movement or infotainment. This requires high-volume data transfer within limited processing and response times, and hence high-data-rate communication.
It is also important to note that the accuracy and correctness of the data is vital for safer driving. Above all, secure and encrypted data communication is needed to avoid any unintended threats from external sources.
The solution to these communication challenges involves various technologies such as the internet, security, encryption, wireless and radio technologies. Thus, Vehicle Ad-hoc Networks (VANETs) have arrived to overcome these communication challenges. The multi-hop ad-hoc communication standards are not specific to any particular application area and are based on currently available wireless LAN radio technology with suitable adaptations.
IV. Vehicle Ad-hoc Networks
The popular architectures for VANETs (Vehicle Ad-hoc Networks) include Wireless Access in Vehicular Environments (WAVE) by IEEE, Continuous Air Interface for Long to Medium Range (CALM) by ISO, and the Car-to-Car Network (C2CNet) by the C2C consortium. In the remainder of this article we shall discuss the salient features of these architectures.
A. WAVE (Wireless Access in Vehicular Environments)
The IEEE named the complete protocol stack of the 1609 protocol family Wireless Access in Vehicular Environments (WAVE), and it supports dedicated short-range communication (DSRC) too. It works in the 5.9 GHz frequency band. WAVE enlists two modes of communication:
• Safety applications (non-IP)
• Non-safety applications based on IPv6
The IEEE 802.11p protocol-based Microcontroller Abstraction Layer (MCAL) uses the WAVE architecture; IEEE 802.11p is the approved amendment of IEEE 802.11 for wireless access in vehicular environments. The IEEE 802.11p task force was formed in November 2004, and the final amendment was available by 2010. The protocol stack consists of internet technologies and IEEE 802 standards (IEEE 802.11p, IEEE 802.11 and IEEE 802.2), which serve as the access point to the external world.
Figure 7: WAVE Architecture [3, 5] – applications (1609.1), security (1609.2), transport and network layer with UDP/TCP/IPv6/WSMP (1609.3), channel coordination (1609.4), layer management (1609.5), facilities (1609.6), logical link sub-layer (802.2), MAC sub-layer (802.11) and physical layer (802.11p)
The WAVE architecture is based on the complete IEEE 1609 standard, which comprises six sub-standards, IEEE 1609.1 to IEEE 1609.6, described as follows [3]:
• IEEE 1609.1 – management activities to achieve proper operation
• IEEE 1609.2 – communication security
• IEEE 1609.3 – transport and network layer handling of traffic-safety-related applications: WSMP (WAVE Short Message Protocol)
• IEEE 1609.4 – coordination between the multiple channels of the spectrum
• IEEE 1609.5 – layer management
• IEEE 1609.6 – the application facility layer situated between the transport and application layers
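Since WAVE separates safety traffic (WSMP, non-IP) from non-safety traffic (IPv6), a receiver must first dispatch on the message family. The sketch below illustrates only that dispatch decision, with an invented message representation rather than the IEEE 1609 wire format.

# Purely illustrative, not the IEEE 1609 wire format: WAVE carries
# safety traffic over WSMP and non-safety traffic over IPv6, so a
# receiver dispatches on the message family first.

def dispatch_wave(message: dict) -> str:
    """Route a received message to the appropriate protocol handler."""
    family = message.get("family")
    if family == "WSMP":          # safety application, non-IP path
        return f"safety handler <- {message['body']}"
    if family == "IPv6":          # non-safety application, TCP/UDP path
        return f"ip stack <- {message['body']}"
    return "dropped: unknown family"

print(dispatch_wave({"family": "WSMP", "body": "hard-braking ahead"}))
print(dispatch_wave({"family": "IPv6", "body": "map tile request"}))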
Since the IEEE WAVE architecture restricts the Microcontroller Abstraction Layer (MCAL) to this single option, IEEE 802.11p, it has constrained research into, and usage of, alternative MCALs within the WAVE architecture.
B. CALM (Continuous Air interface for Long to Medium range)
For VANETs, ISO CALM is designed to use a heterogeneous communication network to provide continuous communication.
The CALM standard also uses the 5.9 GHz frequency band. For short and medium distances CALM uses infrared, while for long distances GSM, UMTS and similar technologies are considered. The CALM management entities provide flexibility and adaptability: the CALM Interface Manager monitors and stores the status of each communication interface, the CALM Network Manager handles handover to alternate media, and the CALM/Application Manager ensures that application transmission requirements are met. Since CALM encompasses several different technologies, its implementation and interface design are challenging.
C. C2CNet (Car-to-Car Consortium)
The Car-to-Car Communication Consortium aims to establish a European standard for car-to-car communication and is backed by the European car industry. This protocol, too, is used for both safety and non-safety applications.
The features of C2C include fast data transmission, transmission of safety and non-safety messages, and support for different short-range wireless LAN technologies, including IEEE 802.11p, traditional wireless technologies (IEEE 802.11x) and radio technologies such as GPRS or UMTS. Unlike WAVE or CALM, C2CNet provides its own network and transport layer for safety applications; for non-safety applications, traditional TCP/IP is used.
Figure 9: C2CNet Architecture [3, 4] (applications and information connectors over either standard TCP/UDP/IPv6 or the Car2Car transport and Car2Car network layers, on top of conventional 802.11a/b/g LLC/MAC/PHY or the European Car2Car LLC/MAC/PHY variants based on 802.11p)
V. Work done at KPIT Technologies
According to market research and a survey by KPMG [2] covering various stakeholders in DSRC, 'any company remaining complacent in the face of such potentially disruptive change may find itself left behind, irrelevant'. KPIT started researching DSRC in its infancy, especially DSRC software stack development, and is the only company conducting R&D in this area without government funding.
KPIT has demonstrated car-to-car communication using the WAVE architecture. As part of this activity, KPIT carried out simulation studies using Network Simulator (NS2) and Simulink, and developed specifications for the gateway between DSRC/wireless networks and the CAN-specific vehicle network.
A. Car-to-Car communication setup
Let us look at the setup used for the demonstration of car-to-car communication with demo cars at KPIT. This prototype was developed in-house by KPIT.
Figure 10: KPIT's Demonstration of Car-to-Car Communication
In this setup, sensors fitted on the first vehicle detect a collision, and this collision data is propagated over the DSRC ad-hoc network to the second car, on which only an antenna was mounted. Both vehicles then activate the brakes and cut off the electric motor based on the gateway information available from each vehicle's CAN bus.
For this setup, the MPC5121E controller and an Atheros chipset were chosen for developing the DSRC application framework and the DSRC platform.
Figure 11: Hardware Interface
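A minimal sketch of the gateway logic described above, assuming a Linux target with SocketCAN; the DSRC-side hook and the CAN identifiers are hypothetical placeholders for illustration, not KPIT's actual interfaces.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

#define CAN_ID_BRAKE_CMD  0x120   /* hypothetical IDs, for illustration */
#define CAN_ID_MOTOR_CUT  0x121

/* Placeholder for the DSRC side: a real implementation would block on
 * the 802.11p stack; here we simply poll and report "no event". */
static int dsrc_wait_for_collision_warning(void)
{
    sleep(1);
    return 0;   /* 1 = warning received, 0 = nothing */
}

static int open_can(const char *ifname)
{
    struct sockaddr_can addr = {0};
    struct ifreq ifr = {0};
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) return -1;
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { close(s); return -1; }
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { close(s); return -1; }
    return s;
}

int main(void)
{
    int s = open_can("can0");
    if (s < 0) { perror("can0"); return 1; }
    for (;;) {
        if (dsrc_wait_for_collision_warning() <= 0)
            continue;                        /* nothing received yet */
        struct can_frame f = {0};
        f.can_id  = CAN_ID_BRAKE_CMD;        /* request braking */
        f.can_dlc = 1;
        f.data[0] = 0xFF;                    /* full brake demand */
        write(s, &f, sizeof(f));
        f.can_id  = CAN_ID_MOTOR_CUT;        /* cut the electric motor */
        write(s, &f, sizeof(f));
    }
}
```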
B. DSRC Software
KPIT's DSRC software consists of platforms developed in-house: the DSRC WAVE device software, the ENOS stack, middleware software, a TCP/IP stack and an HMI application.
Figure 12: KPIT's DSRC stack
KPIT's activities included the integration of the software modules, testing of the stack on demo cars, and gathering of test results both in field testing and on a test bed using DSRC units (an On-Board Unit and a Road-Side Unit). The HMI application was developed using Qt, WinCE and Flash.
The KPIT WAVE device comprises platform software for the network layer (IEEE 1609.3) and the lower layers (IEEE 1609.4 and IEEE 802.11p) of the WAVE architecture; together with the Atheros chipset, this software provides the wireless network communication facility. The middleware includes the RTP protocol and a GPRS interface. KPIT's ENOS stack provides communication with the vehicle CAN network: it consists of CAN and LIN drivers and, with further device drivers for PCI, USB and SPI, offers a complete set of device drivers. The software has been implemented and tested on WinCE, QNX and different Linux kernels (e.g. Fedora).
VI. Conclusion
Even though WAVE, CALM and C2CNet are the dominant architectures, other architectures for intelligent transport systems include MANET, NOW, COMeSafety, CVIS, SAFESPOT, COOPERS, GST, GeoNet, FleetNet, GrooveSim, CARLINK, CarTalk2000, etc. The amalgamation of so many technologies, and the need for costly infrastructure of sensors, actuators and the like, make reliable commercial deployment of intelligent autonomous transport a tough job. In spite of this, according to kpmg.com [2], sufficient built-in and after-market penetration is expected to support self-driving applications by 2025. All in all, we can look forward to driverless taxis picking us up from the airport, with no haggling over fares.
REFERENCES
[1] NAVIA: Singapore's First Electric Autonomous Vehicle, Asian Scientist Magazine, August 19, 2013.
[2] Self-driving cars: The next revolution, KPMG report, 2012.
[3] Sajjad A. Mohammad, Asim Rasheed, Amir Qayyum, "VANET Architectures and Protocol Stacks: A Survey", Proceedings of the Third International Workshop, Nets4Cars/Nets4Trains 2011, Oberpfaffenhofen, Germany, March 23-24, 2011.
[4] Car to Car Communication Consortium Manifesto, version 1.1, August 2007.
[5] Task Group p, IEEE P802.11p: Wireless Access in Vehicular Environments (WAVE), draft standard ed., IEEE Computer Society, 2006.
[6] Timo Kosch, "Technical Concepts and Prerequisites of Car to Car Communication", 5th European Congress and Exhibition on Intelligent Transport Systems and Services, Germany, 2005.
[7] Dedicated Short Range Communications, Clemson University Vehicular Electronics Laboratory (CVEL).
[8] Vehicle-to-Vehicle/Vehicle-to-Infrastructure Control, IoCT-Part4-13VehicleToVehicle-HR.pdf.
[9] Nick Jaynes, "Smarter, safer, self-driving: 4 (almost) autonomous cars you can own today", January 31, 2013. Available: http://www.digitaltrends.com/cars/autonomy-today-fewer-crashes-tomorrow-five-current-cars-with-autonomous-tech/#ixzz2eN0N2ZRq
BOOK REVIEW
Autonomous Intelligent Vehicles
Author: Hong Cheng
Self-driving cars are an emerging field, and companies like Google, Nissan, GM and many others are showing interest in autonomous cars. Dr. Hong Cheng is considered a pioneer in this field. He is currently a professor in the School of Automation Engineering, and a founding director of the Pattern Recognition and Machine Intelligence Lab, at the University of Electronic Science and Technology of China. His areas of interest include multi-signal processing, human-computer interaction, robotics, computer vision and machine learning. In his book "Autonomous Intelligent Vehicles: Theory, Algorithms, and Implementation (Advances in Computer Vision and Pattern Recognition)", Prof. Cheng summarizes his research on intelligent vehicles. The book is an essential reference for researchers in the field of autonomous vehicles. Its broad coverage will also appeal to researchers, professionals and graduate students interested in signal and image processing, pattern recognition, object/obstacle detection and recognition, vehicle motion control, Intelligent Transportation Systems and, more specifically, the state of the art in intelligent vehicles. The field of intelligent vehicles spans a wide range of technologies, from vehicle dynamics to information, computer vision, hardware, ergonomics and human factors.
The author wrote this book with three goals. The first is to create an up-to-date reference on intelligent vehicles and related technologies. The second is to present object/obstacle detection and recognition and to introduce vehicle lateral and longitudinal control algorithms. As a final goal, Prof. Cheng emphasizes high-level concepts while also providing the low-level details of implementation, linking theory (algorithms, models, ideas) with practice (implementations, systems and applied research). The book is divided into four parts, presented below.
The first part presents the framework of autonomous vehicles from A to Z, specifically addressing intelligent vehicles as a set of intelligent agents integrated with multi-sensor fusion across distinctive modules. The author also gives an insight into different state-of-the-art autonomous vehicles that took part in the Grand Challenges or the Urban Challenge sponsored by DARPA in the USA. The autonomous vehicles discussed include those from Carnegie Mellon University (Boss), Stanford University (Junior), Virginia Polytechnic Institute and State University (Odin), Massachusetts Institute of Technology (Talos), Cornell University (Skynet), the University of Pennsylvania and Lehigh University (Little Ben), and Oshkosh Truck Corporation (TerraMax). Among these, TerraMax travels slowly because of its sheer size (27 feet long, 8 feet wide, 8 feet high, weighing around 30,000 pounds), setting it apart from the others.
The second part of the book highlights the importance of environment perception and modelling. The author describes the benefits of computer vision systems for road detection and tracking, including multiple-sensor-based multiple-object tracking. To this end, he analytically describes the lane detection methods proposed by him and his research team, covering the lane model, particle filtering, the dynamic system model and the algorithms. This part ends with an explanation of a vehicle detection approach that operates in two phases: (i) hypothesis generation and (ii) validation. In the first phase, the Region of Interest (ROI) in an image is determined using the vanishing point of the road. By analysing the vertical and horizontal edges in the image, vehicle hypothesis lists for the near, middle and far ROIs are generated; combining these three lists yields a hypothesis list for the whole image. In the validation phase, support vector machines and Gabor features are used. The author also proposes an interactive road situation analysis framework along with its implementation, namely the multiple-sensor multi-object detection and tracking approach.
The third part of the book highlights vehicle localization and navigation. For vehicles with autonomous navigation, determining their local and global positions within their environment (which is unstable, dynamic and extremely unpredictable) is both very important and challenging. In this part, the author proposes a method to enhance situation awareness by dynamically providing drivers with a global view of the surroundings. Rather than using a catadioptric camera, as most existing intelligent vehicles do, an omnidirectional vision system (consisting of multiple cameras) on top of the vehicle captures its surroundings. The author explains that this system helps obtain high-quality images of the surroundings.
Finally, in the fourth part of the book, the author discusses advanced vehicle motion control, introducing vehicle lateral and longitudinal motion control. He also explains his proposed Mixed Lateral Control Strategy. Important issues such as the relationship between motor pulses and the front-wheel lean angle in lateral control, and first-order lag systems in longitudinal control, are covered.
This book serves as a solid handbook for engineers who want to stay informed on cutting-edge technology in the field. It is also an extremely valuable aid to graduate students interested in intelligent vehicles, and a good reference for an experienced researcher who wants an introduction to specific issues in the field of intelligent vehicles.
Naveen Boggarapu
Areas of Interest
Embedded Systems,
Linux,
Device Drivers
Inside
Connected Vehicle
About the Author
Mushabbar Hussain
Areas of interest
Embedded Systems,
Security,
Network Communication
I. Introduction
The connected vehicle is a precursor to the autonomous vehicle, communicating in real time with other vehicles and with infrastructure. Modern automobiles employ sophisticated communication mechanisms connecting multiple embedded computers over wired and wireless networks. Wireless connections can be vehicle-to-vehicle, vehicle-to-infrastructure or infrastructure-to-infrastructure. Modern automotive systems are therefore exposed to a much wider range of potential abuses by cyber criminals and hackers, and security consequently plays an important role in automotive systems. This article focuses on the security threats in automotive systems and the countermeasures to safeguard the vehicle from potential attacks. We start with a general introduction to information security, followed by a detailed discussion of security in automotive systems.
II. What is Information Security?
A good starting point is the notion of a breach. What is an information security breach?
"A data breach is a security incident in which
sensitive, protected or confidential data is
copied, transmitted, viewed, stolen or used by
an unauthorized person." – [6]
Some examples of security breaches include:
• Malicious attackers gaining unauthorized access to financial assets such as credit cards, bank details or personal information
• Anonymous persons gaining physical access to company premises by compromising the company's access-control system
• Attackers redirecting customers to look-alike sites in order to capture their login credentials
• Hackers gaining access to personal computers to install malware and viruses
III. What is the Goal of an Information Security System?
Information security is all about protecting confidential information and its critical elements (such as software, hardware and networks) from unauthorized access, use and disclosure. The three key parts of information security are Confidentiality, Integrity and Availability, and maintaining these three elements is central to an organization's well-being.
Confidentiality - Confidentiality is about protecting sensitive data of a company (such as financial figures, new product information or pricing) or of an individual (such as credit card or bank details) from unauthorized access or disclosure. An information leak can lead to financial losses and other serious consequences.
Integrity - Data integrity refers to preventing erroneous modification, deletion or manipulation of confidential data. It involves security measures employed to ensure the consistency, accuracy and trustworthiness of data over its entire life cycle, including data encryption, data backup, access control and input validation to prevent incorrect data entry.
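As one concrete flavour of such a measure, the sketch below uses OpenSSL's SHA-256 (assuming the library is available on the platform) to detect modification of a stored record by comparing its digest against a reference value recorded at write time. Against a deliberate attacker, the reference digest itself must also be protected, for example with a keyed MAC or a digital signature.

```c
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>   /* link with -lcrypto */

/* Returns 1 if the record still matches the reference digest
 * recorded when the data was written, 0 if it was altered. */
static int record_intact(const unsigned char *record, size_t len,
                         const unsigned char expected[SHA256_DIGEST_LENGTH])
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(record, len, digest);                 /* one-shot hash */
    return memcmp(digest, expected, SHA256_DIGEST_LENGTH) == 0;
}

int main(void)
{
    const unsigned char rec[] = "odometer=104231km";   /* illustrative record */
    unsigned char ref[SHA256_DIGEST_LENGTH];
    SHA256(rec, sizeof(rec) - 1, ref);           /* stored at write time */
    printf("intact: %d\n", record_intact(rec, sizeof(rec) - 1, ref));
    return 0;
}
```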
Availability - Availability is about making information available to authorized users when it is needed. This involves protecting the computing systems that store and process the information from malware and worms, protecting communication channels, preventing service disruptions due to power outages or hardware failures, and protecting systems from denial-of-service attacks.
Figure 1: Hacker Attacking a Remote Computer
The primary goal of an information security system is to guarantee the safety of information, prevent theft and loss of IT assets, ensure business continuity and reduce business damage. A secure information system should have multiple layers of security in place, including:
A. Physical security
Security measures designed to deny unauthorized access to equipment and resources, including locks, access control systems, etc.
B. Logical security
Software measures to safeguard a system's resources, including user IDs, authentication, biometrics and firewalls.
C. Operations security
This covers operational issues such as choosing strong passwords, key management, secure data storage, etc.
D. Communications security (COMSEC)
Communications security involves measures taken to deny unauthorized interceptors access to, and manipulation of, the data that flows over a communication channel. Communication channels can be secured with techniques such as cryptography, digital signatures, firewalls, antivirus software, web security, and email and web content filtering solutions.
E. Network security
Network security involves securing a computer network infrastructure from unauthorized access, misuse or damage. A network administrator implements policies and procedures to prevent and monitor unauthorized access, misuse or modification of the network. Network security measures include the deployment of firewalls, anti-virus software, proxy servers and intrusion-detection systems.
IV. Security Threats
Computer systems are vulnerable to many threats that can inflict significant losses. External threats come from hackers and cyber criminals, while internal threats can come from employees, consultants or partners with inside access to the network. Some threats affect the confidentiality or integrity of data, while others affect the availability of a system.
Figure 2: Depicting Various Security Threats [7]
V. Security Threats in Embedded Systems
With advances in technology, modern embedded devices are becoming more sophisticated, and most of them are connected to wired and wireless networks. Security becomes a concern because of the increasing amount of sensitive data exchanged over these networks. Embedded systems are vulnerable to attacks such as physical tampering, malware and side-channel attacks. For example, in an access control system a hacker can send signals to open the door, gain physical access by bypassing authentication, obtain the secret keys used in communication, or corrupt data by accessing the file systems. In automotive systems, hackers can send signals to unlock the car, tamper with flash data, monitor network traffic and send false messages. These concerns necessitate security protocols and other security measures (securing communication channels with cryptographic mechanisms, digital signatures, lockout mechanisms, tamper protection, etc.) to protect sensitive data from unauthorized spoofing or manipulation. Examples of security threats that a system can face from an external agent include: Denial-of-Service (DoS) attacks; brute-force attacks exploiting the lack of a lockout mechanism; multiple persistent Cross-Site Scripting (XSS) vulnerabilities; impersonation; spoofing, including man-in-the-middle attacks; phishing, also called web page spoofing; e-mail address spoofing; eavesdropping; and session hijacking.
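To illustrate one of the countermeasures implied above, the brute-force weakness is commonly mitigated with a counter-based lockout. A minimal sketch follows; the thresholds and names are illustrative, not taken from any specific product.

```c
#include <stdbool.h>
#include <time.h>

#define MAX_ATTEMPTS   5
#define LOCKOUT_SECS   300   /* illustrative: 5-minute lockout window */

static int    failed_attempts = 0;
static time_t locked_until    = 0;

/* Gate an authentication attempt; returns false while locked out. */
bool auth_allowed(void)
{
    return time(NULL) >= locked_until;
}

/* Report the outcome of each attempt to drive the lockout state. */
void auth_report(bool success)
{
    if (success) {
        failed_attempts = 0;              /* reset on success */
    } else if (++failed_attempts >= MAX_ATTEMPTS) {
        locked_until    = time(NULL) + LOCKOUT_SECS;
        failed_attempts = 0;              /* start a lockout window */
    }
}
```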
VI. Security in Automotive Systems
Modern automotive systems are more sophisticated and more connected than their traditional counterparts. With cars increasingly connected to the Internet, to wireless networks, to each other (car-to-car) and to the infrastructure (car-to-infrastructure), they are more vulnerable than ever to attackers and hackers. With greater exposure through wireless communications, such as an on-board navigation system or a telematics device connected to the Internet via a smartphone, the chances of security threats have increased considerably. The lack of security mechanisms in current automotive networks makes security a top priority: currently, there is little to stop anyone with malicious intent from taking command of a vehicle. A hacker who gains access to the vehicle network or software could control everything, from selecting songs to operating the accelerator and brakes. Current automobiles are not designed to detect and prevent an intruder from gaining access to the whole CAN network, or to reject commands injected by a compromised ECU. Security therefore plays an important role in automotive systems, because threats may not only cause nuisance and disclose sensitive data but also directly endanger the safety of the passengers in the car. Some security threats and vulnerabilities in automotive systems are:
• Injecting forged traffic into the CAN network by direct access to the bus, for example through the On-board Diagnostics (OBD) port
• Injecting forged traffic into a navigation system using wireless protocols such as RDS or TMC
• Breaking anti-theft systems such as central locking, immobilizers or passive keyless entry to gain access to the car
• Eavesdropping on, and sending spoofed messages to, a monitoring ECU (a possible target: the pressure monitoring system)
• Corrupting rewriteable flash memory that holds updateable program code and configuration data
VII. Cyber Security Threats in Autonomous Vehicles
What is cyber security? Cyber security can be defined as the protection of systems, networks and data in cyberspace.
"Cyber threat is one of the most serious economic and national security challenges we face as a nation", and "America's economic prosperity in the 21st century will depend on cyber security." - President Obama
Before discussing the security threats in autonomous vehicles, we will start with a brief description of autonomous cars. An autonomous car is a driverless or self-driving car, capable of sensing its environment and navigating without human input. Autonomous cars are fully connected vehicles that use a combination of wireless technologies (such as GPS and cellular links) and advanced sensors (stereo cameras, lidar, and long- and short-range radar) for their operation. These cars are expected to have a permanent connection to the Internet and to the cloud for fetching various kinds of information, such as the current road situation, weather conditions, or the parking situation at the destination. The benefits of autonomous cars include zero accidents, reduced traffic violations, transportation for the elderly and handicapped, productive commute time, elimination of human error and improved energy efficiency.
To operate in real time, autonomous cars may use wireless technologies to communicate with the grid, the cloud, other vehicles (V2V) and the infrastructure (V2I). An enormous amount of data will thus become available over the air.
This essentially means that someone (a hacker, a terrorist, the automaker or another unauthorized party) could have the means to capture data, alter records, and instigate attacks
on systems and track every movement of the vehicle. Hackers could gain access to the vehicle sensors that control airbags, braking systems and door locks, and virtually control or disable the car. They could feed false information to drivers, use denial-of-service attacks to bring down the in-vehicle network, illicitly reprogram the ECUs with malware, and even download incorrect navigation maps to mislead the driver. System security will therefore become a paramount issue that automakers need to address before putting autonomous cars on the road. The security system inside an autonomous vehicle shall ensure: (I) the technology in a self-driving car works 100% of the time without compromising safety-critical functionality; (II) internal as well as external communication interfaces are properly secured; (III) secure software download; (IV) secure access for diagnostic purposes; (V) an electronic immobilizer; (VI) software and hardware integrity; and (VII) protection from theft and forgery.
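Requirement (III), secure software download, is typically met by verifying a digital signature over the image before flashing it. The sketch below shows that verification step with OpenSSL's EVP API under that assumption; key provisioning and the flashing itself are elided, and all names are illustrative.

```c
#include <stddef.h>
#include <openssl/evp.h>   /* link with -lcrypto */

/* Verify the OEM's signature over a firmware image before it is
 * flashed. Returns 1 on a valid signature, 0 otherwise. */
int fw_signature_ok(EVP_PKEY *oem_pub,
                    const unsigned char *image, size_t image_len,
                    const unsigned char *sig, size_t sig_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = 0;
    if (ctx &&
        EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, oem_pub) == 1 &&
        EVP_DigestVerifyUpdate(ctx, image, image_len) == 1 &&
        EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1)
        ok = 1;                      /* image is authentic and intact */
    EVP_MD_CTX_free(ctx);
    return ok;                       /* flash only when ok == 1 */
}
```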
VIII. Automotive System Security - Challenges
Challenges in producing secure code arise from the nature of the device that runs the software:
• Automotive embedded systems are resource-constrained, with less capacity to compensate for CPU- or memory-related attacks; as a result, they are easily susceptible to denial-of-service attacks.
• They have less processing power, so performance can suffer when running computationally intensive cryptographic algorithms. Hence embedded software often lacks the secure networking protocols of its desktop counterparts.
• The firmware of an embedded device can be changed or replaced with a malicious application.
• Many automotive systems do not run on an operating system platform (such as VxWorks or Linux), which inhibits developers from installing and using readily available, off-the-shelf security software such as OpenSSL, SSH or HTTPS.
IX. Threat Modeling in Automotive Systems
Threat modeling is a structured approach that enables a security expert to identify, quantify and address the security risks associated with a system. Including threat modeling in the early phases of the SDLC helps ensure that applications are developed with security built in from the very beginning. Modern threat modeling looks at a system from a potential attacker's perspective, as opposed to a defender's viewpoint.
Figure 3: Threat Model for a typical Automotive System
Threat modeling helps in depicting:
• The system's attack surface (entry points)
• The potential threats that can attack the system
• The assets (such as software, hardware and databases) that a threat can compromise
The basic principles of threat modeling and countermeasures are: (I) identify assets, identify and rank the threats, and depict them; (II) protect data as it is transported, employing standard encryption mechanisms and secret keys; (III) protect data as it is stored and accessed, encrypting data before storing it; (IV) restrict unauthorized access with authentication and authorization; (V) protect against playback attacks (a playback or replay attack is a form of network attack in which valid data is transmitted repeatedly with malicious intent); (VI) provide the ability to recover from a compromised state; and (VII) ensure software authenticity and the integrity of received data with cryptographic digital signatures. All of the above can be achieved through a combination of hardware and software features, physical controls, encryption mechanisms, operating procedures, and various combinations of these.
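Principle (V), protection against playback attacks, is often implemented by binding each message to a monotonically increasing counter and authenticating counter and payload together with a keyed MAC. Below is a receiver-side sketch under those assumptions, using OpenSSL's HMAC; the frame layout and limits are illustrative only.

```c
#include <stdint.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>   /* link with -lcrypto */

#define MAC_LEN 32          /* HMAC-SHA256 output length */

static uint64_t last_counter = 0;   /* highest counter accepted so far */

/* Accept a frame only if its MAC is valid AND its counter is fresh,
 * so a recorded frame replayed later is rejected. */
int accept_frame(const unsigned char *key, int key_len,
                 uint64_t counter,
                 const unsigned char *payload, size_t len,
                 const unsigned char mac[MAC_LEN])
{
    unsigned char msg[8 + 256];
    unsigned char calc[MAC_LEN];
    unsigned int  calc_len = 0;

    if (len > 256 || counter <= last_counter)
        return 0;                           /* oversized or stale */
    memcpy(msg, &counter, 8);               /* bind counter to the data;
                                               host byte order for brevity */
    memcpy(msg + 8, payload, len);
    HMAC(EVP_sha256(), key, key_len, msg, 8 + len, calc, &calc_len);
    /* a production system would use a constant-time comparison here */
    if (calc_len != MAC_LEN || memcmp(calc, mac, MAC_LEN) != 0)
        return 0;                           /* forged or corrupted */
    last_counter = counter;                 /* advance the freshness window */
    return 1;
}
```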
Table 1: Information and other automotive assets that should be protected

Object to be protected | Description
Operation of "basic control functions" | Coherence and availability of the execution environment of basic control functions (such as chassis, body and engine)
Information unique to the vehicle | Information unique to the car body (vehicle ID, device ID, etc.), authentication codes, and accumulated information such as running and operation history
Vehicle status information | Data representing the vehicle's status, such as location, running speed and destination
User information | Personal information, authentication information, and the usage and operation history of the user (driver/passengers)
Software | Software related to the vehicle's "basic control functions"
Contents | Data for applications: video, music, maps, etc.
Configuration information | Vehicle configuration information needed for the behavior of hardware and software
X. Conclusion
With modern automotive systems connected to networks, security is no longer an optional feature. In the last decade the automobile industry focused mainly on improving the safety aspects of the car; in this decade the focus will be on building vehicles that are both secure and safe. Security threats can even endanger the lives of the passengers in the vehicle, so building secure products is an absolute necessity.
Many organizations are currently conducting research on security in connected cars. Most of that research has focused on identifying security problems rather than on presenting solutions. The greatest challenge for a connected car will be to adopt security solutions under the constraints of very limited hardware, software and power resources, and, most importantly, without compromising the safety requirements.
References
[1] http://securityinspection.com/wp-content/uploads/2011/10/where_are_the_threats1.gif
[2] Digital Signature. Available: http://en.wikipedia.org/wiki/Digital_signature
[3] Threat Modeling. Available: https://www.owasp.org/index.php/Application_Threat_Modeling
[4] Nachiketh Potlapally, "Secure Embedded System Design", January 2008.
[5] Self-driving Cars: The Next Revolution. Available: https://www.kpmg.com/US/en/IssuesAndInsights/ArticlesPublications/Documents/self-driving-cars-next-revolution.pdf
[6] http://en.wikipedia.org/wiki/Data_breach
[7] http://www.thinkinfosecurity.com/uploads/7/4/3/2/7432545/6902470.gif
Gazing Through a
Crystal Ball
About the Authors
Krishnan Kutty
Areas of interest
Computer Vision,
Image Processing,
Pattern Recognition
Charudatta B. Sinnarkar
Areas of interest
Innovation in alternate fuels
for IC engines,
Software development
in Oracle technologies
I. Introduction
In the past century, mainly two types of industries enjoyed glamour. The first was cinema, which has retained its numero uno position even now; the second was automotive. The automotive industry thrives on developments in other disciplines. Today, major developments are taking place in electronics, computers, embedded systems and software applications, and the automotive industry is taking a big leap to embrace technological advancements in web technology, telecommunications, faster processing power, and more stable and reliable image processing applications. It is on the brink of a new technological revolution, put simply: "self-driving vehicles".
It is well over a century now since we invented, pioneered and drove cars at our will. However, dramatic as it may sound, the fact is that for the majority today the car has transformed from a symbol of power into a mere contraption. To top it off, driving today brings out the worst in drivers and is, beyond a doubt, one of the major causes of human fatality. Efforts are under way to develop intelligent traffic controls, prevent accidents, and incorporate sophisticated sensors and sensing mechanisms so the car can 'know' its surroundings, assist the driver and, in some cases, control the car. However, as long as there is a human behind the wheel, assist and control systems cannot completely eliminate accidents caused by driver fatigue, recklessness, drowsiness, etc. The irony lies in the fact that the cars we created can be driven at a stretch far longer than typical biological human endurance allows. Having said this, the future does look promising.
We have now provided the car with intelligent sensing and high processing capability. In addition, we have 'wired' the brain of the car so that it can understand what the sensed data means, how to interpret it, and how to analyze it in order to 'know' its surroundings. With this powerful combination of sensors, processors and algorithms, we have succeeded in evolving the car from a 'contraption' into an 'autonomous' vehicle.
II. Future of Autonomous Vehicles
The concept of autonomous driving has caught the attention of consumers and technologists from different strata alike. The widespread coverage of Google's fleet of autonomously driven Toyota Prius vehicles has convinced people beyond a doubt that this technology is not far from voluminous acceptance. There are still noteworthy limits to the state of the art as far as autonomous (self-driving) vehicles are concerned. Some of the areas that need to be examined at a much deeper level are stated below and elaborated.
A. Safety of Passengers
Self-driving cars must be proven beyond doubt to be highly reliable before their potential to reduce the number of accidents and economic losses can be realized. Technical issues, software glitches and other problems remain possible and need to be closely monitored. In addition, the car needs to follow all traffic rules and obey all critical safety constraints. We remember someone jovially asking: if a traffic cop stops a self-driving car that has just jumped a signal, whom does he penalize: the car, the passengers, the OEM, or the software provider? Jovial as it may sound, the fact remains that new legislation, rules, regulations and standards need to be put in place. It is also debatable whether a strictly 'algorithm-following' self-driving car can actually (and reliably) drive in genuinely chaotic environments alongside today's conventional cars with human drivers. A further aspect of self-driving cars is the absence of human judgment. Though the powerful computers in these cars process data faster than human beings, they still lack the 'human feel'. As an example, what would a self-driving car do if a child suddenly stepped onto the road without warning? Would it decide to move into a different lane and risk the lives of the passengers in the car? And what if the car classified the object wrongly, so that it was in fact an animal, misinterpreted as a human child? This is not just a technological issue; it also becomes a legal and legislative issue when we consider mass usage of self-driving cars on roads.
The technological S-curve depicting mean failure distance for autonomous vehicle technology is shown in Fig. 1. According to David Stavens, who obtained his PhD under the guidance of Prof. Sebastian Thrun, the director of Stanford's AI laboratory, once the mean distance between failures for a self-driving car reaches the order of a million miles or more, the technology becomes commercially viable and poised for mainstream use.
Figure 1: Mean failure distance for
autonomous vehicle technology [6]
Thus, from a passenger-acceptance and overall safety perspective, it can be summarized that, as of today, autonomous cars are viewed with a mixture of fascination and skepticism. However, with the long strides mankind is taking towards making autonomous vehicles a reality, the trust factor is expected to improve rapidly, and self-driving cars should start gaining wider acceptance.
B. Maturity of technologies to enable self-driving cars
The two major technologies that hold promise for self-driving cars are connected-vehicle-based solutions and sensor-based solutions. Connected-vehicle solutions include Dedicated Short Range Communication (DSRC), GPS-based solutions, cellular solutions, etc. These connect the car with other cars on the road and with the infrastructure, for example to transmit different parameters of the car to a remote location for diagnostics. Car-to-car communication using DSRC or other protocols can provide collective intelligence to a fleet of cars in the vicinity, achieved by sharing data and performing fast, intelligent analytics on it. However, these technologies face challenges: connectivity, data loss, credibility of data, data explosion and scalability are some of the factors that still need attention, time and research to ripen.
Sensor-based solutions, such as lidar, radar and cameras, pose a different set of challenges. The sensors in a self-driving car provide valuable data about the car's surroundings at any given moment, and based on this data the onboard computer calculates its next course of action. However, in adverse driving or weather conditions the sensor data may not be reliable, and there is always the chance of sensor failure, saturation, ageing, etc. There is thus a pressing need for research into smarter, faster and more reliable sensors. Moreover, smart algorithms are needed that can inspect the sensor data to detect that a failure has occurred (diagnosis). For the self-driving car of the future, algorithms must also be in place that look at current data and predict when a failure might happen (prognosis), thereby reducing the chance of inadvertent stoppage or slowdown.
Figure 2: Technologies for self-driving cars
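As a flavour of such diagnostic logic, the sketch below applies two simple plausibility checks to a range-sensor stream: an out-of-range test and a stuck-at test. Real automotive diagnosis and prognosis are far more elaborate; the thresholds and names here are purely illustrative.

```c
#include <math.h>   /* compile with -lm */

#define RANGE_MIN_M     0.2f    /* illustrative physical sensor limits */
#define RANGE_MAX_M   200.0f
#define STUCK_EPS       1e-4f   /* "no change" tolerance */
#define STUCK_LIMIT     50      /* consecutive near-identical samples */

typedef enum { SENSOR_OK, SENSOR_OUT_OF_RANGE, SENSOR_STUCK } sensor_status;

/* Feed one sample per control cycle; flags readings outside the
 * physical range and a value frozen over many cycles, which is a
 * common failure mode for ageing or saturated sensors. */
sensor_status check_range_sensor(float sample)
{
    static float prev = NAN;
    static int   same = 0;

    if (sample < RANGE_MIN_M || sample > RANGE_MAX_M)
        return SENSOR_OUT_OF_RANGE;

    if (!isnan(prev) && fabsf(sample - prev) < STUCK_EPS)
        same++;                 /* value has not moved this cycle */
    else
        same = 0;
    prev = sample;

    return (same >= STUCK_LIMIT) ? SENSOR_STUCK : SENSOR_OK;
}
```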
The future self-driving car is bound to possess an optimal and intelligent combination of both solutions, connected-vehicle-based and sensor-based. With strong sensory perception and better communication, self-driving cars will become more reliable.
C. Infrastructure development
It goes without saying that autonomous cars should be able to maneuver on roads by properly planning their trajectories. This planning is done predominantly using the sensors on board the vehicle. However, since these sensors have limited range, long-range data may not be accurate enough for early predictions. The DARPA report on the Urban Challenge identifies the need for autonomous vehicles to have access to each other's information; this was a key lesson learned from the contest [1].
In a recent publication from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Swarun et al. describe the need for a cloud-assisted system for autonomous driving [2]. They propose a system called 'Carcel', in which the cloud obtains information from the vehicles and from the road-side infrastructure, and also records the information and trajectories of all the autonomous vehicles. Intelligent algorithms running on the cloud analyze this information and send each vehicle appropriate information about obstacles, blind spots, etc. This, in turn, helps a vehicle better estimate its trajectory even when some obstacles are out of the line of sight of its on-board sensors.
Figure 3: Cloud for autonomous vehicles as described in 'Carcel' [2]
Some of the other infrastructure systems available and widely used for traffic monitoring, control and management are [3]: (i) video monitoring systems; (ii) video cameras for traffic monitoring; (iii) speed cameras; (iv) traffic detection (e.g. ILD, laser/radar/microwave); (v) national or regional traffic control and information centers; (vi) urban traffic control systems (for controlling the traffic lights); (vii) video cameras for number-plate reading; (viii) infrastructure-to-vehicle (I2V) communication (for toll collection); (ix) inference methods; (x) cooperative information systems; and (xi) vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication.
To facilitate a rugged road transport system based on autonomous vehicles, all these infrastructure systems need to be upgraded, made more sophisticated and enabled to work in real time. This is a challenge in itself.
D. Information Security
Future cars are bound to have far more electronics, functionality and features bundled into them. Communication within the subsystems of a car, and between cars over a multitude of channels, is therefore necessary, and the safety and security of the information exchanged is of utmost importance. Given the increase in cyber-attacks, phishing and the like, information security is gaining importance generally; with the autonomous car, it manifests directly in vehicle and pedestrian safety on the road. Since the computer in an autonomous car takes inputs from multiple on-board sensors and from other data on the internet for path planning, maneuvers, etc., the sanctity of this data is sacrosanct.
The need is two-fold. One is to develop reliable and stricter protocols for data transfer, storage, etc. The other is to ensure there are smart algorithms that can detect any tampering with data or control commands and take timely action to avoid an untoward incident. In addition, the modules in the car should be adaptive enough to update themselves automatically with upgraded software versions when connected to the internet.
E. Legislation and Regulations
As of today, there is no clarity on who is responsible for a failure in the automation process. The driver holds responsibility for any mishap caused by his or her inattentiveness; the OEM or Tier-1 supplier is responsible for any failure of a vehicle sub-system or specific component. However, it is unclear who holds responsibility for the 'automation' itself. To address this problem, a lot of emphasis is already being placed on multiple aspects: failsafe mechanisms (by virtue of redundancy); vehicle health checks (both diagnostics and prognostics); smarter HMI interfaces (to intuitively warn of system failures); etc. To ensure that the system is drivable, four basic aspects, namely reliability, security, accuracy and credibility, should be carefully studied. There are many standards for automotive-grade software development; two important ones worth mentioning are ISO 26262 and MISRA.
The NHTSA (National Highway Traffic Safety Administration) has defined levels of automation for cars ranging from 0 to 4. For instance, the cars that Google is testing are at level 3, since a driver still needs to be present to take control if necessary. "Level 3 is truly in the testing phase and these guidelines are ensuring that the testing is done so it's safe for the driver and safe for everyone else on the road," as quoted by David Friedman, deputy administrator at the NHTSA. Along the same lines, one can argue that level 2 cars are already on the road commercially: a high-end car featuring two or more ADAS systems, such as adaptive cruise control and lane keeping, can by itself be considered level 2.
Specifically in the US, as of the end of 2012, three states, Nevada, Florida and California, had enacted laws pertaining to autonomous vehicles. Nevada was the first jurisdiction in the world to permit autonomous vehicles to operate legally on public roads. In 2013, the government of the UK permitted testing of autonomous cars on public roads. It is only a matter of time before more governments accept the testing, and thereby the plying, of autonomous cars on their public roads.
III. Conclusion
As the population increases, the number of cars on the road will also grow. With limitations on infrastructure and transportation capacity, this poses a serious challenge. The mobility people enjoy today is taken for granted by many, who barely realize that transportation forms the basis of our civilization. There is undoubtedly a dire need for a safer, more efficient and more balanced mode of transport. Autonomous vehicles, and their associated ecosystem in its entirety, provide an unparalleled solution to the transportation problem of the future. In addition to safety, the socio-economic impact of autonomous vehicles (fuel economy, time savings, vehicle maintenance, etc.) is also a strong factor boosting the prospects of their wide acceptance. As more and more vehicles become autonomous, the effect on our day-to-day life will become evident; with the amount of research and development happening in this field, that day is not far off.
It was aptly phrased: "The revolution, when it comes, will be engendered by the advent of autonomous or 'self-driving' vehicles. And the timing may be sooner than you think..."
References
1. M. Campbell, M. Egerstedt, J. P. How, and R. M. Murray, "Autonomous driving in urban environments: approaches, lessons and challenges", Philosophical Transactions of the Royal Society Series A, 368:4649-4672, 2010.
2. Swarun Kumar, Shyamnath Gollakota, and Dina Katabi, "A Cloud-Assisted Design for Autonomous Driving", MCC'12, August 17, 2012.
3. Margriet et al., "Definition of necessary vehicle and infrastructure systems for Automated Driving", Study Report to the European Commission, Brussels, Belgium, July 29, 2011.
4. http://www.techpageone.com/technology/u-s-to-regulate-autonomous-cars/
5. "Self-Driving Cars: The Next Revolution", KPMG report, 2012.
6. Matthew Moore, Beverly Lu, "Autonomous Vehicles for Personal Transport: A Technology Assessment", article for "Management of Technology", Caltech, 2011.
About KPIT Technologies Limited
KPIT partners with global automotive and semiconductor corporations in
bringing products faster to their target markets. We help customers globalize
their process and systems efficiently through a unique blend of domain-intensive
technology and process expertise. As leaders in our space, we are singularly
focused on co-creating technology products and solutions to help our customers
become efficient, integrated, and innovative manufacturing enterprises. We have
filed for 50 patents in the areas of Automotive Technology, Hybrid Vehicles, High
Performance Computing, Driver Safety Systems, Battery Management System,
and Semiconductors.
About CREST
Center for Research in Engineering Sciences and Technology (CREST) is focused
on innovation, technology, research and development in emerging technologies.
Our vision is to build KPIT as the global leader in selected technologies of interest,
to enable free exchange of ideas, and to create an atmosphere of innovation
throughout the company. CREST is an R&D center recognized and approved by the Dept. of Scientific and Industrial Research, India. This journal is an endeavor to bring you the latest in scientific research and technology.
Invitation to Write Articles
Our forthcoming issue to be released in April 2014 will be based on
“Powertrain”. We invite you to share your knowledge by contributing to this
journal.
Format of the Articles
Your original articles should be based on the central theme of “Powertrain”.
The length of the articles should be between 1200 and 1500 words. Appropriate
references should be included at the end of the articles. All the pictures should be
from public domain and of high resolution. Please include a brief write-up and a
photograph of yourself along with the article. The last date for submission of
articles for the next issue is November 30, 2013.
To send in your contributions, please write to [email protected] .
To know more about us, log on to www.kpit.com .
Innovation for customers
TechTalk@KPIT Oct - Dec 2013
Sebastian Thrun
"The potential here is enormous. Autonomous vehicles will be as important as the Internet."
Born: 1967
35 & 36, Rajiv Gandhi Infotech Park,
Phase - 1, MIDC, Hinjawadi, Pune - 411 057, India.
For private circulation only.