White line following for ALV navigation

Transactions on Information and Communications Technologies vol 16, © 1996 WIT Press, www.witpress.com, ISSN 1743-3517
W. Lu, X. Q. Ye & W. K. Gu
Institute of Information & Intelligent System, Zhejiang University,
Hangzhou 310027, P. R. China
Email: [email protected]
Abstract
A real time processing system and a white line following algorithm used for high
speed ALV navigation on structural roads are described in this paper. The real
time image processing system consists of a color video camera, several processors
with special functions and a general purpose processing unit based on the
TMS320C30 microprocessor. Each special processor performs a standard image
processing operation such as image thresholding, color look-up-table mapping,
histogram statistics or morphology operations. The general purpose unit performs
higher-level algorithms at high speed. There are two buses in the system: the
processors communicate with each other through an image data bus and with the
host computer through an extended PC bus. The system is convenient to program
and can carry out vision tasks in real time.
The white line following algorithm has two steps. The white line starting points
are searched for by a local threshold and hypothesis-verification method. We
improve the performance of the search in three ways: enlarging the searching
area, using as many features of the white line as possible, and feeding line
following results back to the starting point search to remove illegal starting
points. In the window following scheme, we use multi-directional local projection
to find the two gray level discontinuity edges of the white line in a window.
According to the position, orientation and intensity of the two discontinuity
edges, we can determine whether there is a white line segment in the window. We
then forecast the position of the next window and repeat these steps until the
line following is finished according to some rules. We use continuity between two
frame images and knowledge of the central line of the road to determine the road
region when there is only one white line in an image. The testing results show
that our vision system can successfully detect solid and broken, clean and
blurred white lines on the testing field in any weather conditions and at any
time of day.
1 Introduction
A powerful robot vision system, together with fast, robust image processing
algorithms that enable an autonomous land vehicle (ALV) to act in a real-world
environment, is a research project at our university. The task of an ALV
vision system is to provide a description of the road in front of the vehicle for
navigation. The possible roads for a vehicle may be divided into two broad
categories: structural roads with road markers [1,2] and unstructured roads
without special markers [3-5]. In this paper, we introduce a real time image
processing system and a white line following algorithm used for high speed
ALV navigation on structural road. The testing results of the algorithm are also
presented.
2 System configuration
Figure 1 is an overview of our image processing system. The image sensor is a
color video CCD camera, which provides 256×256 red, green and blue images
with 8 bits of intensity per image. The field of view and focus of the camera are
kept fixed. The system contains several special purpose processors (SP in
Figure 1), each of which performs a standard image processing operation (such
as image thresholding, look-up-table mapping, or histogram statistics) at video
frame rate (1/25 s). The system also contains a general purpose processing unit
(GP in Figure 1) based on the TMS320C30 microprocessor, which can perform
higher-level algorithms at high speed. The processors communicate with each
other through an image data bus whose clock rate is 10 MHz. Each processor can
also communicate with the host computer through a PC bus.
Figure 1 Image processing system configuration
The processors can be programmed by the host computer and organized into a
pipeline or parallel structure suited to various image processing and
understanding tasks. For ALV road following, the system can handle both
unstructured and structural roads. Reference [5] introduces a robust image
processing approach for ALV road following on unstructured roads with this
system. In this paper, we describe the white line detection algorithm for
structural roads.
3 Algorithm description
The task of an ALV vision system is to provide a description of the road
adequate for navigation. The red, green and blue images coming from the color
camera are reduced to a single band image through a color transform look-up
table. The white line following algorithm works on a 256×256 gray scale image.

For white line following there are two key tasks: one is searching for the
starting points of the two white lines (a left one and a right one); without a
starting point there is nothing to follow. The other is following the line from
the starting point just found.
3.1 Starting point searching
Changing lighting conditions, shadows on the ground and interference from
nearby objects often make it difficult to find the starting points correctly.
We enhance the performance of our searching algorithm in three ways:
• Enlarge the searching area.
• Use as many features of the white line as possible.
• Feed line following results back to the starting point search to remove
illegal starting points.
Figure 2 illustrates the white line starting point searching area. The searching
area lies below the horizontal line in the picture. Furthermore, we divide
this region into two halves by the road center line Mid[i] (Mid[i] is calculated
in the previous image; i is the row number of an image line). We search for the
left line starting point in the left part and for the right line starting point
in the right part.

Assume we are searching for the left line starting point. The search begins at
the bottom row of the image and goes upwards row by row. First, we find the
maximum and average gray level of the row to the left of Mid[i]:
$$\max[i] = \max\{\, I[i,j] \mid j = 0, \ldots, Mid[i]-1 \,\}$$

$$ave[i] = \frac{1}{Mid[i]} \sum_{j=0}^{Mid[i]-1} I[i,j]$$
I[i,j] represents the gray level value of a pixel in the image. We change the
gray level image into a binary image (with only '1' or '0' values) according to:
$$B[i,j] = \begin{cases} 1 & \text{if } (I[i,j] > ave[i] + \theta_1) \wedge (I[i,j] > \theta_2) \wedge (I[i,j] > \max[i] - \theta_3) \\ 0 & \text{otherwise} \end{cases}$$
Figure 2 White line starting point searching area
where $\theta_1$ and $\theta_3$ are two thresholds set according to our road
model, and $\theta_2$ is the average gray level of the road surface calculated
from the previous image.
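As an illustration of the row binarization described above, the following Python sketch (not the authors' original implementation; the NumPy dependency, array layout and argument names are our assumptions) computes max[i], ave[i] and B[i,j] for one row to the left of Mid[i]:

    import numpy as np

    def binarize_row(I, i, mid, theta1, theta2, theta3):
        # I: 2-D gray level image; i: row index; mid: Mid[i] from the previous frame.
        # theta1, theta3: road-model thresholds; theta2: previous road-surface average.
        row = I[i, :mid].astype(np.int32)
        row_max = row.max()      # max[i]
        row_ave = row.mean()     # ave[i]
        # A pixel is marked '1' only if it is brighter than the row average,
        # brighter than the road surface, and close to the brightest pixel of the row.
        return ((row > row_ave + theta1) &
                (row > theta2) &
                (row > row_max - theta3)).astype(np.uint8)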
There are several line segments with gray level '1' in each row of the binary
image. We find the line segments that are comparable in width with the white
lines on our test field and mark them with L(i,j), where (i,j) is the coordinate
of the center point of the line segment. After this has been done, we use a
hypothesis-verification method to determine the position of a white line
starting point. Each marked line segment is a candidate starting point. For a
candidate T, we denote the ray starting from T with slope φ as Rayφ. In
practice we take φ = 0°, 11.25°, 22.5°, 33.75°, 45°, 56.25°, 67.5°, 78.75°, 90°.
The value of a pixel in the shaded squares is '1' (see Figure 3).
Figure 3 White line starting point searching
For a ray Rayφ, we count the number of times the ray passes through a pixel on
some marked line segment within the nearest N rows below T, and record this
number as Cφ. We then pick the maximum of the Cφ:
$$C_{max} = \max_{\phi}\{C_\phi\}$$

and compare $C_{max}$ with a threshold $\Theta_N$. If $C_{max} \geq \Theta_N$,
we accept T as a white line starting point; if not, we continue the search. In
our algorithm, N = 20 and $\Theta_N$ = 16.
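A minimal sketch of this hypothesis-verification step is given below; it is an illustration rather than the authors' code, and the exact ray geometry, the ray-length cap and the boolean marked array are our assumptions:

    import math

    def verify_candidate(marked, ti, tj, N=20, theta_N=16,
                         angles_deg=(0, 11.25, 22.5, 33.75, 45, 56.25, 67.5, 78.75, 90)):
        # marked[r, c] is True where pixel (r, c) lies on a marked segment L(i, j).
        # T = (ti, tj) is the candidate starting point; row indices grow downwards.
        rows, cols = marked.shape
        c_max = 0
        for a in angles_deg:
            rad = math.radians(a)
            di, dj = math.sin(rad), math.cos(rad)   # 90 degrees points straight down
            count = 0
            for step in range(1, 4 * N):            # cap the ray length
                r = ti + int(round(step * di))
                c = tj + int(round(step * dj))
                if r - ti > N or not (0 <= r < rows and 0 <= c < cols):
                    break
                if marked[r, c]:                    # ray crosses a marked segment pixel
                    count += 1                      # this is C_phi for the current ray
            c_max = max(c_max, count)               # C_max = max over all C_phi
        return c_max >= theta_N                     # accept T as a starting point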
3.2 White line window following
Window following follows a white line in the image in both the upward and
downward directions from the starting point. The window size is 20×20 pixels.
If there is a white line segment in a window (see Figure 4), there will be a
gray level discontinuity edge from the background to the white line and another
discontinuity edge from the white line back to the background.
Figure 4 Two discontinuity edges of a white line in a window
Figure 5 Sixteen directions of local projection
We use multi-directional local projection to find out the two gray level
discontinuity edges in a window. Figure 5 shows the sixteen directions of the
projection. The result of the projection is:
$$\mathrm{Proj}(k;\phi), \quad k = 0,\ldots,K, \quad \phi = 0,\ldots,15$$

In each direction, there will be a rising intensity and a dropping intensity:

$$Intensity_A(\phi) = \max_{k=1,\ldots,K-1}\{\mathrm{Proj}(k+1;\phi) - \mathrm{Proj}(k-1;\phi)\}$$

$$Intensity_B(\phi) = \max_{k=1,\ldots,K-1}\{\mathrm{Proj}(k-1;\phi) - \mathrm{Proj}(k+1;\phi)\}$$

The maximum of $Intensity_A(\phi)$ corresponds to the discontinuity edge from
background to white line and the maximum of $Intensity_B(\phi)$ corresponds to the
discontinuity edge from white line to background. When the position,
orientation and intensity of the two discontinuity edges have been calculated, we
can determine if there is a white line segment in the window. If there is, then
calculate the position and orientation of the white line and forecast the position
of the next window accordingly. The calculation is repeated until the line
following is finished according to some rules.
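The following Python sketch illustrates one possible realization of the multi-directional local projection and the two intensity measures; the distance-binning scheme, the NumPy usage and the sixteen equally spaced orientations are our assumptions and are not taken from the paper:

    import numpy as np

    def window_projection_edges(win, n_dirs=16):
        # win: 20x20 gray level window (ndarray); returns (IntensityA, IntensityB) per direction.
        h, w = win.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        ys, xs = np.mgrid[0:h, 0:w]
        results = []
        for d in range(n_dirs):
            phi = d * np.pi / n_dirs
            # Signed distance of every pixel from a line through the window centre
            # with orientation phi; binning the distances gives Proj(k; phi).
            dist = (xs - cx) * np.cos(phi) + (ys - cy) * np.sin(phi)
            bins = np.round(dist - dist.min()).astype(int)
            proj = np.bincount(bins.ravel(), weights=win.ravel().astype(float))
            # Central differences Proj(k+1; phi) - Proj(k-1; phi), k = 1 .. K-1
            diff = proj[2:] - proj[:-2]
            intensity_a = diff.max() if diff.size else 0.0     # background -> line edge
            intensity_b = (-diff).max() if diff.size else 0.0  # line -> background edge
            results.append((intensity_a, intensity_b))
        return results

The direction in which both intensities are strongest would then be used, as described above, to estimate the position and orientation of the line segment and to forecast the next window.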
3.3 Road center line marking
For structural roads, the road and non-road regions are separated by a series
of solid or broken white lines. If there is a pair of white lines in an image,
the region between the two lines is the road. But if there is only a single
line, where is the road? This situation actually occurs quite often, especially
when the vehicle is making a turn or avoiding an obstacle.

A single image with one white line is not adequate to determine the road
region. But notice that what we are processing is a series of continuous
images. The correlation among these images can be used as recurrent knowledge
of the road. With this knowledge we can handle the problem easily.
We calculate and remember the central line of the road in the image when
the white lines have been detected, and pass the central line (Mid[i]) to the next
image. In the next image, the central line divides the white line searching
area into a left part and a right part; the left starting point is searched for
in the left part and the right starting point in the right part. So if there is
only one white line in the image, we still know whether it is a left line or a
right line, and which region in the image corresponds to the road in front of
the vehicle. Figure 6 illustrates this situation.
Figure 6 Determining the road region in a series of images
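As a small illustration of how the remembered centre line resolves the left/right ambiguity when only one line is followed, the sketch below classifies the single line using Mid[i] from the previous frame; the dictionary-based data layout is an assumption made purely for illustration:

    def classify_single_line(line_cols, mid_prev, default_mid=128):
        # line_cols: {row i: column of the followed line in the current frame}
        # mid_prev:  {row i: Mid[i] remembered from the previous frame}
        votes_left = sum(1 for i, c in line_cols.items()
                         if c < mid_prev.get(i, default_mid))
        # The road lies to the right of a left line and to the left of a right line.
        return 'left' if votes_left > len(line_cols) / 2 else 'right'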
4 Testing Results
The whole test run on the test field involves the acquisition and processing
of hundreds of images. These images vary from day to day and from time to time
within one day, since weather, sun angle and shadows all alter the appearance
of the road and its surroundings.

The testing results show that our vision system and algorithm can successfully
detect solid and broken, clear and blurred white lines on the soil ground of
our testing field in any weather conditions and at any time of day. Figure 7
shows a typical image and the processing results.
References
[1] Dickmanns, E. D. & Zapp, A., A curvature-based scheme for improving
road vehicle guidance by computer vision, Proceedings SPIE Vol.127,
Mobile Robots, 161-168, 1986.
[2] Graefe, V., Vision for intelligent road vehicles, Intelligent Vehicles, Tokyo,
July 1993.
[3] Turk, M. & Morgenthaler, D., VITS: a vision system for autonomous land
vehicle navigation, IEEE Trans. on PAMI, Vol. 10, 342-361, 1988.
[4] Thorpe, C., Hebert, M. H. & Kanade, T., Vision and navigation for the
Carnegie-Mellon Navlab, IEEE Trans. on PAMI, Vol 10, 362-373, 1988.
[5] Gu, W. K., et al, A robust image processing approach used for ALV road
following, Proceedings 11th Intern. Conf. on Pattern Recog., Netherlands,
Vol. A, 676-679, 1992.
Figure 7 Typical image and the following results