
ARTICLE
International Journal of Advanced Robotic Systems
On-Road Vehicle Recognition Using the
Symmetry Property and Snake Models
Regular Paper
Shumin Liu1,2, Yingping Huang1,* and Renjie Zhang1
1 School of Optical-Electrical and Computer Engineering, University of Shanghai Science & Technology, Shanghai, China
2 Jiangxi University of Science and Technology, Nanchang, China
* Corresponding author E-mail: [email protected]
Received: Jul 26, 2013; Accepted: Nov 06, 2013
DOI: 10.5772/57382
© 2013 Yingping et al.; licensee InTech. This is an open access article distributed under the terms of the Creative
Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract Vehicle recognition is a fundamental task for
advanced driver assistance systems and contributes to
the avoidance of collisions with other vehicles. In
recent years, numerous approaches using monocular
image analysis have been reported for vehicle
detection. These approaches are primarily applied in
motorway scenarios and may not be suitable for
complex urban traffic with a diversity of obstacles and
a cluttered background. In this paper, stereovision is
firstly used to segment potential vehicles from the
traffic background. Given that the contour curve is the
most straightforward cue for object recognition, we
present here a novel method for complete contour
curve extraction using symmetry properties and a
snake model. Finally, two shape factors, including the
aspect ratio and the area ratio calculated from the
contour curve, are used to judge whether the object
detected is a vehicle or not. The approach presented
here was tested on a substantial set of urban traffic images,
and the experimental results demonstrate that the
correction rate for vehicle recognition reaches 93%.
Keywords Contour Extraction, Vehicle Recognition,
Symmetry, Snake Model, Stereovision
www.intechopen.com
1. Introduction
Vehicle recognition is a fundamental task for on-board
driver assistance systems, enabling cars to be aware of
their driving environment and warning drivers of
potential hazards. Vehicle recognition has many
applications, including platooning, stop and go,
autonomous navigation and parking optimization, etc. In
the past decade, extensive research has been carried out
in the field of vehicle detection using on-board imaging
systems together with image processing techniques.
Vision-based vehicle recognition methods normally
follow two basic steps [1]: 1) hypothesis generation (HG),
which hypothesizes the regions in an image where
possible vehicles are; and 2) hypothesis verification (HV),
which verifies the correctness of the hypothesis in step 1)
by using certain algorithms. In general, HG is based on
some simple features. There are three kinds of methods
for locating potential vehicles [1, 2]: 1) knowledge-based
methods; 2) motion-based methods; and 3) stereovision-based methods. Knowledge-based methods make use of
prior knowledge to locate candidate vehicles in an image.
Such prior knowledge includes symmetry [3], colour [4],
shadow [5], geometrical features [6] and texture [7], etc.
Int. j. adv. robot. syst., 2013, Vol. 10, 407:2013. Shumin Liu, Yingping Huang and Renjie Zhang: On-Road Vehicle Recognition Using the Symmetry Property and Snake Models
As one of the main signatures of man-made objects,
symmetry has been used for object detection and
recognition in computer vision [8]. In [9, 10], symmetry
properties were used to filter out homogeneous areas and
segment potential vehicles where the images of vehicles
observed from rear or frontal views are in general
symmetrical in the horizontal and vertical directions. In
[3], the detection process uses shadow and symmetry
features to generate vehicle hypotheses. In [11], vehicle
symmetrical contours and license plate positions were
used to detect vehicles. However, these studies were only
suitable for detecting vehicles in the same lane whose rear
view image strictly conforms to the symmetry property.
Moreover, symmetry estimations are sensitive to noise for
homogeneous areas. In general, knowledge-based
methods are effective in relatively simple environments
but cannot work in complex environments because the
prior knowledge is susceptible to illumination, image
quality and the complexity of background. Motion-based
methods commonly use optical flow to estimate the
relative motion of vehicles. The disadvantages of these
methods are that they are time-consuming and cannot detect
static obstacles. Stereovision-based methods have been
employed extensively for HG. Kunsoo Huh [12] proposed
a vehicle detection method using stereovision. This
method can effectively and accurately locate a potential
vehicle, but is unable to determine whether it is a vehicle
or not.
Approaches using HV are classified into two categories: 1)
template-based methods; and 2) appearance-based
methods. Template-based methods use predefined
patterns of the vehicle class and perform correlations
between the image and the template. The templates are
commonly proposed based on the vertical symmetry of
the frontal/rear view of the vehicle. Since a vehicle’s
appearance changes with the viewing angle, it is difficult
to build a vehicle class template. Moreover, the templates
are sensitive to image noise. Appearance-based methods
learn the characteristics of the vehicle class from a set of
training images that should cover any variability in a
vehicle’s appearance. These methods usually contain the
following steps: 1) train a classifier, such as a support
vector machine (SVM) [13], a neural network [14] or the
Adaboost method [15], with substantial data; and 2) extract
features from the object, such as Haar [16] features or
HOG [17] features, and then input these features into the
classifier to identify these objects. In general, appearance-based methods are more accurate than template-based
methods; however, they are more costly due to the need
for classifier training.
In this paper, we propose a novel vehicle recognition
approach that identifies a vehicle based on the object
geometry contour shape. The approach integrates
stereovision with a novel object contour extraction
algorithm. First, we locate potential vehicles by the
stereovision-based method described in our previous
work [18, 19]. Second, an object contour curve is extracted
using symmetry properties and refined by the snake
model technique. Afterwards, two factors including the
object aspect ratio and the area ratio calculated from the
contour curve are used to identify vehicles.
The contributions of this work can be summarized as
follows: 1) using the snake model together with the object
symmetry property for the extraction of a complete
obstacle’s contour curve is a novelty. As a contour
extraction technique, the snake model has been widely
used in many image analysis applications with relatively
simple background information. However, no work has
been reported on using the snake model for complex road
scene image analysis; 2) integrating stereovision with the
scene image analysis; 2� integrating stereovision with the
snake model overcomes a key limitation of the snake
model in that the snake model is sensitive to the initial
position of the contour curve. Stereovision can work
perfectly well in locating the obstacle region; 3) the
approach developed here not only works for direct
rear/front view vehicles but also for vehicles observed
from the side view, which is a significant advance in
comparison with existing studies; 4) a contour curve is
the most straightforward cue for obstacle recognition.
Therefore, a contour extraction technique suitable for
cluttered and dynamic traffic image analysis is crucial for
obstacle detection and classification in advanced driver
assistance systems. The approach proposed here can be
extended for multi-class object recognition tasks; 5) the
approach proposed here divides obstacle recognition into
two steps, namely object detection and classification, which
differs from most existing object recognition applications.
2. Approaches
2.1 Overall Structure
The schematic diagram of the proposed system is
illustrated in Fig. 1. Firstly, a stereovision rig
combined with an edge-indexed stereo matching
algorithm [18] is employed to produce an edge-based
disparity map. The regions of interest (ROIs) in the left
image are created according to disparities between the
objects and they comprise the potential vehicle objects in
the scene. Secondly, symmetry properties are applied to
eliminate the non-boundary points so that an initial object
contour can be extracted. Subject to noise, the initial
contour obtained can be discontinuous and incomplete,
containing some noise points within the vehicle body.
Therefore, the initial contour is refined with the snake
model so that a closed and complete object contour curve
is generated. Thirdly, two parameters including an object
aspect ratio and an area ratio are calculated from the
contour curve and accordingly the objects within the
ROIs are classified into two classes - vehicles or other
objects.
However, the symmetry of an object also depends on the
viewing angle. In this application, only an exact rear
view or front view of a vehicle satisfies the strict
symmetry relation.

Fig. 2(a) shows the contour curve of a vehicle in a direct
rear view, while Fig. 2(b) shows the same vehicle observed
at a viewing angle, where the blue curve is the contour for
the direct rear view and the yellow dashed curve is the
contour at a viewing angle. It can be seen from Fig. 2(a)
that the contour curve is symmetrical about the axis X-X′.
However, the yellow contour curve in Fig. 2(b) is no longer
symmetrical about X-X′ (i.e., points a and b′ no longer meet
the equation xl + x′r = 2xa, where xa is the abscissa of the
symmetry axis X-X′, and xl and x′r are the abscissas of
points a and b′, respectively). Instead, these points meet
the following relation:

xli + xri ± εi = 2xa    (1)

Figure 1. The schematic diagram of the vehicle recognition
system
2.2 Object Segmentation based on Stereovision
The stereovision technique is an imitation of the
physiology of the human eye and has been successfully
used for object segmentation, 3-D
reconstruction and target tracking, etc. For a vehicle
recognition system, one of the most important and
difficult processes is to segment the potential vehicle from
the complex traffic scene. In this paper, we segment the
objects by the following steps: 1) generate a vertical edge
image for the left image using a Sobel operator; 2)
construct the disparity map using the edge-indexed
stereo matching method; 3) construct the depth image
and segment the objects according to their distance; 4)
reconstruct the segmented objects into the left edge image
and remove the noise points based on a disparity
threshold. The details of the approach can be found in [18,
19]. The edge-indexed stereo-matching loses some image
information, but it greatly reduces the computational cost
in comparison with a dense stereo-matching algorithm.
Therefore, it is suitable for our application in locating a
region of a potential vehicle subject to a real-time running
requirement.
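The four steps above can be sketched as follows (a simplified, illustrative numpy sketch: the vertical Sobel kernel and the depth relation Z = f·B/d are standard, while the image, threshold and camera parameters are placeholder values, not those of the actual system):

```python
import numpy as np

SOBEL_VERTICAL = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]])  # responds to vertical edges

def vertical_edges(img, thresh=50):
    """Step 1: vertical edge image via a Sobel operator."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = abs((patch * SOBEL_VERTICAL).sum()) > thresh
    return out

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Step 3: depth Z = f * B / d for matched edge points (d > 0)."""
    z = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    z[valid] = focal_px * baseline_m / disparity[valid]
    return z

# Toy example: a step edge produces vertical-edge responses,
# and a disparity of 32 px with f = 800 px, B = 0.4 m gives Z = 10 m.
img = np.zeros((5, 8)); img[:, 4:] = 255
edges = vertical_edges(img)
z = depth_from_disparity(np.array([[32.0]]), 800.0, 0.4)
print(edges[2, 4], z[0, 0])
```

Objects can then be segmented by grouping edge points whose depths fall in the same distance bin, as in steps 3 and 4.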
2.3 Contour Extraction using Symmetry Properties
2.3.1 Generalized Symmetry Properties
Symmetry is an inherent property of most man-made
objects, such as vehicles, houses, windows, tables, etc.
They are normally symmetrical about the vertical axis.
εi is a value representing the degree of asymmetry
between xli and xri about xa. The subscripts l, r and i
represent left, right and the ith point, respectively. By
defining a threshold εth, the points that meet the
inequality εi ≤ εth can be regarded as a generalized
symmetrical point pair. That is, equation (1) can be
rewritten to represent a generalized symmetrical point
pair as follows:

|xli + xri − 2xa| ≤ εth    (2)
Figure 2. Definition of generalized symmetry
The threshold εth reflects the degree of asymmetry
between the left and right points of a point pair. In Fig.
2(a), the vehicle contour is almost strictly symmetrical
about the axis X-X′; therefore, εth can be set to zero. In
Fig. 2(b), the yellow contour is no longer strictly
symmetrical due to the side-view effect; this is why we
need to extend symmetry to generalized symmetry. In
practice, we want εth to be as small as possible; thus, we
try to move the axis X-X′ towards the axis Y-Y′ to achieve
a relatively strict symmetry.
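As an illustration, the point-pair test of equation (2) can be sketched as follows (a minimal Python sketch; the abscissa values and the threshold are illustrative, not from the actual system):

```python
def symmetric_pairs(xs, x_axis, eps_th):
    """Return index pairs (i, j) of edge-point abscissas xs that form a
    generalized symmetrical pair about the axis x_axis, i.e.,
    |x_i + x_j - 2 * x_axis| <= eps_th."""
    pairs = []
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if abs(xs[i] + xs[j] - 2 * x_axis) <= eps_th:
                pairs.append((i, j))
    return pairs

# Edge abscissas of a slightly skewed contour; axis at x = 50.
xs = [40, 61, 30, 69, 55]
print(symmetric_pairs(xs, 50.0, eps_th=2.0))  # -> [(0, 1), (2, 3)]
```

Edge points that belong to no pair are the ones discarded as noise in the contour extraction below.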
2.3.2 Initial Contour Image Extraction based on Symmetry Properties

The stereovision approach locates a region containing
potential vehicles. The next step is to extract the contour
of a vehicle by using its symmetry properties. When
extracting an initial contour image, we are only
concerned with the vertical edges of the objects
segmented by the stereovision approach. The principle
of extracting the initial contour of an object is based on
the fact that the edge point pairs on the contour satisfy
inequality (2). Consequently, other edge points
inside or outside the vehicle body are regarded as noise
points.

The initial contour image extraction using object
symmetry properties is as follows:

Step 1. Estimating the initial symmetry axis: within the
vertical edge image, the initial symmetry axis Ai is
determined as the vertical line whose abscissa is the mean
value xm of the abscissas of all the edge points.

Step 2. Estimating the real symmetry axis: calculate the
mean square deviations σleft and σright of the abscissas
of the edge points on either side of Ai, which reflect
the dispersion of the edge points. If σleft > σright, the
real symmetry axis should be on the left-hand side of Ai;
otherwise, it is on the right-hand side.

Step 3. Extracting the initial contour using symmetry
properties: move Ai towards the left or the right,
horizontally within a range Δa, to search for the true
symmetry axis. The moving range Δa is determined
according to the difference between σleft and σright. At
each step, move Ai by a small step size Δs (e.g., three
pixels) and search for symmetry point pairs about the
current axis according to equation (2); the number of
search positions is n = Δa/Δs. The true symmetry axis is
the one with the maximum number of symmetry point
pairs. Remove any asymmetrical points from the vertical
edge image to obtain the initial contour of the vehicle.

Step 4. Repairing the initial contour: connect the top and
bottom horizontal symmetry point pairs.

2.4 Contour Refinement using the Snake Model

Snake models (also called ‘active contour models’) were
proposed by Kass et al. [20] as a segmentation scheme
and have been successfully applied to a variety of
problems in computer vision and image analysis, such as
edge and subjective contour detection [21], and motion
tracking and segmentation [22]. They have become a
mainstream method for contour extraction.

A snake curve can be construed as a number of control
points that are linked together to form a contour and
deform under constraining forces. The deformation
is carried out by minimizing an energy function so that
the contour moves from an initial position to the true
contour of the object. A classic snake model is defined as a
curve C(s) = [x(s, t), y(s, t)], s ∈ [0, 1], where [x, y] is a
point in the image that moves through the spatial
domain of the image to minimize the energy function
E(C):

E(C) = ∫₀¹ [ (1/2)α|C′(s, t)|² + (1/2)β|C″(s, t)|² + Eext ] ds    (3)

where α and β are weighting parameters that control
the snake’s tension and rigidity, respectively, and C′(s)
and C″(s) denote the first and second derivatives of
C(s) with respect to s. The external energy Eext is
derived from the image so that it takes its smaller
values at the features of interest, such as boundaries. A
snake that minimizes E(C) must satisfy the dynamic
equation:

Ct(s, t) = αC″(s, t) − βC″″(s, t) − ∇Eext    (4)
The curve keeps moving towards the contour of the
object and forms a closed parametric boundary curve
until the motion stops. However, there are several
shortcomings of this model: 1) it is in general
sensitive to the initial position of the contour curve
(i.e., the initial contour must be close to the true
boundary or else it will likely converge on a wrong
result); 2) its running speed is very slow because
the scope of the external force field Fext is limited; and
3) it does not have the ability to repair the boundaries
with gaps.
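A gradient-descent discretization of the dynamic equation (4) can be sketched as follows (a minimal sketch of a generic snake update, not the authors' implementation; the finite-difference operators, step size tau and the toy zero external force are our own choices):

```python
import numpy as np

def snake_step(pts, ext_force, alpha=0.1, beta=0.01, tau=0.5):
    """One explicit Euler step of C_t = alpha*C'' - beta*C'''' - grad(E_ext).
    pts: (N, 2) closed contour; ext_force: callable (N, 2) -> (N, 2)
    returning the external force, i.e., -grad(E_ext)."""
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # C''
    d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)     # C''''
    return pts + tau * (alpha * d2 - beta * d4 + ext_force(pts))

# Toy run: with zero external force, internal tension smooths and
# shrinks the curve, as expected for the internal energy alone.
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
for _ in range(100):
    pts = snake_step(pts, lambda p: np.zeros_like(p))
print(np.linalg.norm(pts, axis=1).mean() < 1.0)  # radius has shrunk
```

In practice the external force pulls the curve towards image boundaries and balances this shrinking tendency.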
In this work, the first shortcoming is overcome by using
stereovision, since the stereovision approach generates a
ROI, which minimizes the object region. The initial
position of the contour curve is set in the ROI, which is
very close to the actual position of the object under
detection. In order to overcome the other two
shortcomings, we use an improved snake model called
the ‘distance potential model’ proposed by Cohen [23].
The distance potential model defines Eext as a function of
distance. The potential force function can be expressed
as:
Fdistance(x, y) = f(d(x, y))    (5)
where d(x, y) is the Chamfer distance or Euclidean
distance between the image point (x, y) and the nearest
boundary. In our work, f(d(x, y)) is defined as the
negative gradient of d(x, y). This improved snake model
effectively expands the range of external force and
improves the ability to extract a contour for an edge with
gaps.
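For illustration, the distance map and its negative gradient can be computed as follows (a brute-force sketch for clarity on a tiny grid; a real implementation would use a Chamfer or fast Euclidean distance transform):

```python
import numpy as np

def distance_map(edge_mask):
    """Euclidean distance from every pixel to the nearest boundary pixel.
    Brute force for clarity: O(pixels * edge points)."""
    ys, xs = np.nonzero(edge_mask)
    bpts = np.stack([ys, xs], axis=1).astype(float)
    h, w = edge_mask.shape
    d = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            d[y, x] = np.sqrt(((bpts - [y, x]) ** 2).sum(axis=1)).min()
    return d

def distance_potential_force(edge_mask):
    """f(d(x, y)) defined as the negative gradient of the distance map,
    so the force points towards the nearest boundary."""
    d = distance_map(edge_mask)
    gy, gx = np.gradient(d)   # np.gradient returns per-axis derivatives
    return -gx, -gy           # force components along x and y

edges = np.zeros((7, 7), dtype=bool)
edges[3, 3] = True            # a single boundary point
fx, fy = distance_potential_force(edges)
# To the right of the boundary the force points left (negative x component).
print(fx[3, 5] < 0)
```

Because the distance map is defined everywhere, this force acts far from the boundary, which is what expands the capture range of the snake.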
2.5 Vehicle Recognition based on Object Contour

After the object contour is extracted, the aspect ratio Rhw
and the area ratio Ra are calculated from the contour and
used for vehicle recognition.

2.5.1 The Aspect Ratio Rhw of the Contour Curve

A vehicle is a rigid object and has a fixed aspect ratio (i.e.,
height/width ratio), which is significantly different from
that of other road obstacles such as pedestrians. Normally, a
vehicle has a significantly smaller aspect ratio than a
pedestrian. This property can be used for recognizing a
vehicle. However, the aspect ratio of a vehicle may vary
with the angle of view. Fig. 3 shows the relative position
between a vehicle-mounted camera and observed objects
in a traffic scene. For two vehicles of the same size in
Pos1 and Pos2 (at the same distance), the width observed by
the camera is different. The vehicle width W′ in Pos2 is
L·tanβ + W, where W is the width of the vehicle in Pos1.
The aspect ratio Rhw of a vehicle in different positions
therefore satisfies the following inequality:

H / (L·tanβ + W) ≤ Rhw ≤ H / W    (6)

where 2β is the horizontal viewing angle of the camera
and H is the vehicle height. In practice, Rhw also varies
with the vehicle class.

Figure 3. Vehicle-mounted camera observation geometry and the
contour curve of vehicles: (a) the aerial view of the pavement;
(b) the contour curve of the vehicle located in lane two;
(c) the contour curve of the vehicle located in lane three.

2.5.2 The Area Ratio Ra of the Contour Curve

Normally, a vehicle has a significantly smaller aspect
ratio than a pedestrian. However, there are some
exceptional cases. Fig. 4 shows one of these cases, where a
pedestrian is swinging his/her arms. In this case, the
contour curve of the pedestrian has a smaller Rhw and may
be wrongly regarded as a vehicle according to Rhw alone.
It can be seen from Fig. 4 that the actual area of the
pedestrian contour curve is much less than the nominal
area H×W; that is, the ratio of the actual area of the
contour to H×W is significantly smaller for a pedestrian
than for a vehicle. This ratio is called the ‘area ratio’ and
can be expressed as:

Ra = (∫ f(x, y) ds) / (H·W)    (7)

where the integral (i.e., the numerator) is the actual area
of the contour. The significant difference in the area ratio
between a vehicle and a pedestrian is used as a further
clue in recognizing a vehicle.

Figure 4. The ratio of area for the contour

3. Experiments and Results

3.1 Stereovision-based Object Segmentation
Fig. 5(a) is the left image of a stereo image pair for a
typical traffic scene to be analysed in this paper. Fig. 5(b)
is the disparity image generated from the edge-indexed
stereo matching, in which the colours of the points denote
the disparity values. The edge image is generated by a
Sobel operator only for the vertical direction. Fig. 5(c) is
the depth map (a bird’s-eye view) built from the disparity
information in Fig. 5(b), together with the object
segmentation results in the depth map. Fig. 5(d) shows the
corresponding object segmentation results in the disparity map.
3.2 Contour Extraction and Refinement using
the Symmetry Properties and the Snake Model
Fig. 6 shows the process of contour curve extraction for
Obj3 by the symmetry properties and the snake model.
Figure 5. The process of object segmentation: (a) the left image of
the stereo image pair; (b) the disparity image; (c) the depth map
with a lateral range from –8 m to 8 m, a distance range from 4 m
to 40 m and a range resolution of 0.2 m × 0.2 m; points on the
road surface have been discarded; (d) the segmented objects
outlined with red rectangles.

Figure 6. The process of contour curve extraction for Obj3: (a)
the original image of Obj3; (b) the vertical edge image; (c) the
boundary contour image and the symmetry axis; (d) the initial
contour; (e) the iterative trajectory of the snake model; (f) the
refined contour curve of Obj3.
6
Int. j. adv. robot. syst., 2013, Vol. 10, 407:2013
Fig. 6(a) is the original image of Obj3 located in the
adjacent lane. Fig. 6(b) shows the vertical edge image of
Obj3 segmented using the stereovision-based method. The
symmetry axis and boundary contour image extracted by
the general symmetry properties are shown in Fig. 6(c),
where the green dashed line is the initial symmetry axis
and the red line is the true symmetry line. Fig. 6(c) is the
result generated up to Step 3 in Section 2.3.2. Fig. 6(d)
shows the initial contour image of Obj3 after connecting
the top and bottom horizontal symmetry point pairs (step
4). It can be seen that the initial contour is discontinuous
and contains some noise points that do not lie in the object
boundary. Therefore, the snake model is applied to refine
the initial contour so that a closed contour can be generated,
as shown in Fig. 6(f). Fig. 6(e) shows the process of
extracting the contour curve by the snake model, where a
series of red curves are the iterative trajectory for the initial
curve and the curve moves towards the boundary
continuously under the action of external forces.
Fig. 7 shows the process of contour curve extraction for
Obj2. The results for each stage are the same as for Fig. 6.
It can be seen that the initial symmetry axis (green) and
the true symmetry axis (red) are almost coincident for
Obj2 and are far apart from each other for Obj3. This is
because Obj2 is a direct rear image with a strict symmetry
while Obj3 is observed with a side view angle resulting in
a relatively loose symmetry.
Figure 7. The process of contour curve extraction for Obj2: (a) the
original image of Obj2; (b) the vertical edge image; (c) the
boundary contour image and symmetry axis; (d) the initial
contour of Obj2; (e) the iterative trajectory of the snake model;
(f) the refined contour curve of Obj2.
In order to verify the generality of the approach, multiple
classes of vehicles are analysed in this work. The
processes of contour curve extraction for different classes
of vehicles are shown in Fig. 8 and Fig. 9. Fig. 8 shows the
process of contour curve extraction for a small car. Here,
Fig. 8(a) presents a slight side view for a sedan located in
the adjacent lane. Fig. 8(b) ~ Fig. 8(d) are the direct rear
views for moving vehicles located in the same lane. Fig. 8
(a1) ~ Fig. 8 (d1) show the corresponding symmetry axis
and the boundary contours. Fig. 8(a2) ~ Fig. 8(d2) are the
extracted contour curves.
Figure 8. The process of contour curve extraction for small cars:
(a) the original image for a moving car located in an adjacent
lane; (b) ~ (d) the original images for small cars with a direct
rear-view; (a1) ~ (d1) the corresponding boundary contours and
symmetry axes; (a2) ~ (d2) the extracted contour curves.
Fig. 9 shows the process of contour curve extraction for
mid-sized vehicles and larger vehicles. Fig. 9(a) and Fig. 9(b)
present slight side views of buses. Fig. 9(c) and Fig. 9(d)
show the bus and truck located in the same lane as the
observing vehicle.
Figure 9. The process of contour curve extraction for mid-sized
vehicles and larger vehicles: (a), (b) the original images for
moving buses located in an adjacent lane; (c) the original image for
a mid-sized bus with a direct rear-view; (d) the original image
for a truck with a direct rear-view; (a1) ~ (d1) the corresponding
boundary contours and symmetry axis; (a2) ~ (d2) the extracted
contour curves.
From the results shown in Fig. 6(c), Fig. 8(a2), Fig. 9(a2)
and Fig. 9(b2), it can be seen that the initial contour can be
extracted successfully for vehicles located in the adjacent
lane even though their appearance is no longer strictly
symmetrical. For all of them, the initial symmetry axis is
far from the true symmetry axis because of the
asymmetry of the side view shapes of the vehicles.
However, the initial symmetry axis almost overlaps with
the true symmetry axis for all the direct rear view images.
This proves that the method proposed here is effective for
extracting the true symmetry axis and object contour
curves.
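For concreteness, the symmetry-axis search of Section 2.3.2 (Steps 1 to 3) can be sketched as follows (illustrative Python; the search range, step size and threshold values are placeholders, and the function name is our own):

```python
import numpy as np

def find_symmetry_axis(xs, delta_a=12.0, delta_s=3.0, eps_th=2.0):
    """Search for the true symmetry axis of edge abscissas xs.
    Start from the mean abscissa (Step 1) and move the candidate axis
    within +/- delta_a in steps of delta_s (Step 3), keeping the axis
    with the most generalized symmetrical point pairs."""
    a_init = float(np.mean(xs))            # Step 1: initial axis
    best_axis, best_pairs = a_init, -1
    for a in np.arange(a_init - delta_a, a_init + delta_a + 1e-9, delta_s):
        pairs = sum(1
                    for i in range(len(xs))
                    for j in range(i + 1, len(xs))
                    if abs(xs[i] + xs[j] - 2 * a) <= eps_th)
        if pairs > best_pairs:
            best_axis, best_pairs = a, pairs
    return best_axis

# Edge points symmetrical about x = 50, plus an outlier at x = 90
# that biases the initial (mean) axis to the right.
xs = [40, 60, 35, 65, 45, 55, 90]
axis = find_symmetry_axis(xs)
print(abs(axis - 50.0) < 2.0)
```

This mirrors the behaviour seen above: when the contour is skewed or noisy, the initial (mean) axis is displaced and the search recovers the true axis.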
3.3 Vehicle Recognition
Figure 10. Vehicle samples and their contour curve: (a) a small
car; (b) a mid-sized vehicle; (c) a larger vehicle; (d) an oblique
view of a small car located in an adjacent lane; (e) ~ (h) the actual
contour curves of the vehicle samples.
Fig. 10 shows some of the standard vehicle samples and
their contour curves. It can be seen that the Rhw of a sedan
is smaller than that of a mid-sized vehicle and a larger
vehicle. The larger the vehicle is, the larger the Rhw is. For
the same class of vehicle, the one located in the adjacent
lane has a smaller Rhw than that located in the same lane.
Therefore, we can calculate Rhw1 and Rhw2 according to the
size of the vehicle, the viewing angle β and equation (6),
in order to determine a range of the Rhw of the vehicle
class. A comprehensive analysis of different vehicle types
and viewing-angle factors gives the result that Rhw
must be within 0.4 ~ 1.4 for a vehicle-type object.
Furthermore, Ra is set to be greater than 0.7 for a
vehicle-type object. In this work, a detected object is
classified into two types: vehicle or non-vehicle.
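This decision rule, with the thresholds stated above (Rhw within 0.4 ~ 1.4 and Ra greater than 0.7), can be sketched as follows (function and parameter names are our own):

```python
def classify(height, width, contour_area,
             rhw_range=(0.4, 1.4), ra_min=0.7):
    """Classify a detected object as 'vehicle' or 'non-vehicle' from its
    contour bounding box (height, width) and enclosed contour area.
    Rhw = height / width; Ra = contour_area / (height * width)."""
    rhw = height / width
    ra = contour_area / (height * width)
    is_vehicle = rhw_range[0] <= rhw <= rhw_range[1] and ra > ra_min
    return "vehicle" if is_vehicle else "non-vehicle"

# A car-like box: H = 1.5, W = 1.8 -> Rhw ~ 0.83, nearly full area.
print(classify(1.5, 1.8, 2.4))   # -> vehicle
# A pedestrian-like box: H = 1.7, W = 0.6 -> Rhw ~ 2.8.
print(classify(1.7, 0.6, 0.5))   # -> non-vehicle
```

Using the two shape factors together is what improves the correction rate for both positive and negative samples in the tables below.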
In this work, a total of 200 obstacles captured in different
driving scenarios from a direct front/rear view are tested,
and all of their contour curves can be extracted
successfully. The verification results are as follows.
Table 1 shows the verification results only using Rhw as a
criterion.
Objects                Small car   Mid-sized vehicle   Large vehicle   Non-vehicle
Actual number          60          52                  44              44
Correct number         56          47                  40              31
Correction rate (%)    93.3        90.4                88.6            70.0
(Overall correction rate for vehicles: 91.7%)

Table 1. Recognition results using Rhw
Table 2 shows the verification results only using Ra as a
criterion.
Objects                Small car   Mid-sized vehicle   Large vehicle   Non-vehicle
Actual number          60          52                  44              44
Correct number         52          43                  38              39
Correction rate (%)    86.7        82.7                86.3            88.7
(Overall correction rate for vehicles: 85.2%)

Table 2. Recognition results using Ra
It can be inferred from Table 1 and Table 2 that the
correction rate for positive samples (vehicles) using Rhw is
higher than that using Ra, but it is lower for negative
samples (non-vehicles).
Table 3 shows the verification results using both Rhw and Ra.
As shown in Table 3, the correction rate is much
improved for both positive samples and negative samples
using both Ra and Rhw. Table 4 shows the results for
vehicles located in the adjacent lane.
Objects                Small car   Mid-sized vehicle   Large vehicle   Non-vehicle
Actual number          60          52                  44              44
Correct number         57          49                  41              41
Correction rate (%)    95.0        94.2                93.1            93.1
(Overall correction rate for vehicles: 94.2%)

Table 3. Recognition results using Rhw and Ra
Objects                Small car   Mid-sized vehicle   Large vehicle   Non-vehicle
Actual number          30          22                  18              14
Correct number         27          19                  14              13
Correction rate (%)    90.0        86.4                77.8            92.9
(Overall correction rate for vehicles: 85.7%)

Table 4. Recognition results for vehicles in the adjacent lane
using Rhw and Ra
As shown in Table 4, the correction rate for vehicles in an
adjacent lane is slightly smaller than that for vehicles in
the same lane. However, it is still as high as 85.7%. The
correction rate for ‘Non-vehicle’ objects in adjacent lanes
is 92.9% and does not represent a significant difference to
those in the same lane. This is because ‘Non-vehicle’
objects like pedestrians or traffic posts do not have a
significant ‘thickness’; therefore, their contours do not
make a significant difference, whether they are in the
same lane or not.
The results show that Rhw and Ra are useful criteria for
vehicle recognition. Rhw has an advantage in the
identification of positive samples and Ra is more suitable
for identifying non-vehicles. However, when both Rhw
and Ra are integrated, the correction rate is much improved
for both positive samples and negative samples. Moreover, the
correction rate for small cars is higher than that for mid- and large-sized vehicles.
4. Conclusions
This paper presents a vehicle recognition approach for
intelligent vehicles. The approach identifies a vehicle based
on the object geometry contour shape. The approach
integrates stereovision with a novel object contour
extraction algorithm. Stereovision is used to segment
objects from cluttered road scene images and locate an
object region. An object contour is initially extracted by
using a vehicle symmetry property and is further refined
by a snake model algorithm. An improved snake model, the
distance potential model, is adopted, and the potential
force function is defined as the negative gradient of the
Euclidean distance between the image point and the
nearest boundary. This snake model effectively expands
the range of external force and improves the ability to
extract contours for edges with gaps. After the object
contour is obtained, two shape factors including an aspect
ratio and an area ratio are calculated and used for the
classification of the object detected. The approach was
verified on a substantial set of traffic scene images. The experimental
results demonstrate that the system is able to successfully
track and classify most vehicles observed from the
rear/front view and the side view. For a total of 200 direct
rear/front view obstacles captured in different driving
scenarios, 94.2% of the vehicles were correctly identified
and 93.1% of non-vehicles were correctly identified. In
addition, the correction rate for vehicles in adjacent lanes
was 85.7%. The system was implemented on a normal
Pentium dual-core 3.0 GHz PC at a processing rate of 8
frames/sec. The main computation cost comes from the
snake model algorithm. However, the system speed could
be greatly improved by using special image processing
chips, which will constitute our future work.
5. Acknowledgements
This work was sponsored by the National Natural Science
Foundation of China (Project No. 61374197), the Science
and Technology Commission of Shanghai Municipality
(Project No. 13510502600) and supported by the
Programme for Professors of Special Appointment (Eastern
Scholar) at the Shanghai Institutions of Higher Learning.
6. References
[1] Sun Z., Bebis G., Miller R. (2006) On-road Vehicle
Detection: a Review, IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 28, no. 5, pp.
694-711.
[2] Zhou J., Duan J. (2012) Machine-vision Based
Preceding Vehicle Detection Algorithm: a Review.
Proceedings of the 10th world congress on intelligent
control and automation, pp. 4617-4622.
[3] Hoffmann C., Dang T. and Stiller C. (2004) Vehicle
detection fusing 2D visual features. Proceedings of
IEEE Intelligent Vehicles Symp., Parma, Italy, pp.
280-285.
[4] Xiong T., Debrunner C. (2004) Stochastic car tracking
with line- and color-based features. IEEE Trans.
Intell. Transport. Syst., vol. 5, no. 4, pp. 324-328.
[5] Zhang L. (2010) Research on Front Vehicle Detection
and Tracking based on Multiple Features. Jiangsu
University, pp. 26-30 (in Chinese).
[6] Mei X., Zhou S. K. (2007) Integrated detection,
tracking and recognition for IR video-based vehicle
classification. Journal of Computers, vol. 2, no. 6, pp. 1-9.
[7] Wu J., Xia J. (2011) Adaptive detection of moving
vehicle based on on-line clustering. Journal of
Computers, vol. 6, no. 10, pp. 2045-2052.
[8] Marola G. (1989) Using Symmetry for Detecting and
Locating Objects in a Picture. Computer Vision,
Graphics and Image Processing, vol. 46, pp. 179-195.
[9] Kuehnle A. (1991) Symmetry-based Recognition for
Vehicle Rears. Pattern Recognition Letters, vol. 12,
pp. 249-258.
[10] Bertozzi M., Broggi A. and Fascioli A. (2000) Vision-based
Intelligent Vehicles: State of the Art and
Perspectives. Robotics and Autonomous Systems,
vol. 32, pp. 1-16.
[11] Lian J., Zhao C. and Zhang B. (2012) Vehicle Detection
based on Information Fusion of Vehicle Symmetrical
Contour and License Plate Position. Journal of
Southeast University, vol. 28, no. 2, pp. 240-244.
[12] Huh K., Park J. (2008) A Stereo Vision-based Obstacle
Detection System in Vehicles. Optics and Lasers in
Engineering, vol. 2008, pp. 168-178.
[13] Lu Z. (2007) Research on Optical Flow Computation
for Motion Image Analysis. Xi'an: Xidian University
(in Chinese).
[14] Lan J., Zhang M. (2010) A New Vehicle Detection
Algorithm for Real-time Image Processing System.
IEEE International Conference on Computer
Application and System Modeling, pp. 101-104.
[15] Wu J., Xia J. (2011) Moving Object Classification
Method based on SOM and K-means. Journal of
Computers, vol. 6, no. 8, pp. 1654-1661.
[16] Sayanan S., Trivedi M. M. (2009) Active Learning-based
Robust Monocular Vehicle Detection for On-road Safety
Systems. IEEE Intelligent Vehicles Symposium, pp. 399-404.
[17] Mao L., Xie M., Huang Y., et al. (2010) Preceding
Vehicle Detection Using Histograms of Oriented
Gradients. International Conference on Communication,
Circuits and Systems, pp. 354-358.
[18] Huang Y. (2005) Obstacle Detection in Urban Traffic
using Stereo-vision. IEEE International Conference
on Intelligent Transportation Systems, pp. 13-16,
Vienna, Austria.
[19] Huang Y., Fu S. and Thompson C. (2005)
Stereovision-based Object Segmentation for Automotive
Applications. EURASIP Journal on Applied Signal
Processing, vol. 14, pp. 2322-2329.
[20] Kass M., Witkin A. and Terzopoulos D. (1987)
Snakes: Active Contour Models. International
Journal of Computer Vision, vol. 1, no. 4, pp. 321-331.
[21] Chen B., Lai J. (2007) Active Contour Models on
Image Segmentation: a Survey. Journal of Image and
Graphic, vol. 1, no. 12, pp. 11-20.
[22] Michal S., Ohad B. S. (2011) Free Boundary
Conditions Active Contours with Applications for
Vision. International Conference on ISVC, pp. 180-191.
[23] Cohen L. D., Cohen I. (1993) Finite-element Methods
for Active Contour Models and Balloons for 2-D and
3-D images. IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 15, no. 11, pp. 1131-1147.