ELIMINATION OF RAINDROPS EFFECTS IN INFRARED SENSITIVE
CAMERA
AHMAD SHARMI BIN ABDULLAH
A project report submitted in partial fulfillment of the
requirements for the award of the degree of
Master of Engineering
(Electrical – Electronics & Telecommunications)
Faculty of Electrical Engineering
Universiti Teknologi Malaysia
APRIL 2007
To my beloved mother and father
ACKNOWLEDGEMENT
In the Name of Allah, Most Gracious, Most Merciful. I am grateful to Allah for His guidance, and only by His strength have I successfully completed my master's project and the write-up of this thesis.
I wish to express my sincere gratitude and appreciation to my supervisor,
Associate Professor Dr. Syed Abdul Rahman Al-Attas for his invaluable guidance,
assistance, advice and constructive comments throughout the accomplishment of this
project.
Recognition and thanks to Mr. Usman Ullah Sheikh and Mr. Amir for the cooperation, encouragement and inspiration they gave all along the way to the completion of this project.
Finally, I would like to thank my parents for their steadfast support, encouragement and understanding. I am indebted to all these important people.
ABSTRACT
Surveillance systems are an important part of security systems nowadays. Traditional surveillance methods involving humans have now been improved to automated systems. The effects of rain bring some drawbacks to automated surveillance systems, especially during rainy nights, degrading the performance of the tracking system. This project proposes a method to eliminate those raindrop effects in order to improve the performance of the tracking system in automated surveillance systems. An algorithm has been developed using the MATLAB® Image Processing Toolbox. Unique visual properties of raindrops were observed and analyzed, and then manipulated into the algorithm as a mechanism for raindrop effects removal. The results, compared with the original input, show significant elimination of the raindrop effects.
ABSTRAK
Surveillance or monitoring is an important matter in security systems. The traditional approach to surveillance, in which security personnel keep watch, patrol and observe, has been replaced by automated surveillance systems that use surveillance cameras and digital processing units. However, rain brings several adverse effects to automated surveillance systems, especially rain at night, which degrades the performance of the tracking system. This project has proposed a method to eliminate those rain effects in order to improve the performance of the tracking system within the automated surveillance system. An algorithm has been developed using the MATLAB® Image Processing Toolbox. The unique visual properties of rain are observed and analyzed, and then manipulated into the algorithm as a mechanism for removing rain effects. As a result, the processed images, compared with the original images, show that the rain effects have been successfully removed.
TABLE OF CONTENTS

CHAPTER    TITLE

           DECLARATION
           DEDICATION
           ACKNOWLEDGEMENT
           ABSTRACT
           ABSTRAK
           TABLE OF CONTENTS
           LIST OF FIGURES
           LIST OF SYMBOLS
           LIST OF ABBREVIATIONS

1          INTRODUCTION
           1.1 Problem Statement
           1.2 Objective of Project
           1.3 Scope of Project

2          LITERATURE REVIEW
           2.1 Real-Time Processing
               2.1.1 Analysis on Visibility of Rain
               2.1.2 Camera Parameters for Rain Removal
               2.1.3 Summary
           2.2 Offline Processing
               2.2.1 Physical Properties of Rain
               2.2.2 Appearance Model of Rain
                     2.2.2.1 Dynamics of Rain
                     2.2.2.2 Photometry of Rain
               2.2.3 Detection of Rain in Video
                     2.2.3.1 Photometric Model Constraints
                     2.2.3.2 Dynamics Model Constraints
               2.2.4 Removal of Rain from Video
               2.2.5 Summary

3          METHODOLOGY
           3.1 Input and Output
           3.2 Algorithm Development Process
               3.2.1 Observation
               3.2.2 Analysis
               3.2.3 Algorithm
               3.2.4 Experiment

4          RESULT AND ANALYSIS
           4.1 Results of Algorithms Processes
               4.1.1 1st Version Algorithm
               4.1.2 2nd Version Algorithm
               4.1.3 3rd Version Algorithm
           4.2 Results of Multiple Raindrops Visual Conditions
               4.2.1 Normal Spread Raindrops
               4.2.2 Overlapping Spread Raindrops
               4.2.3 Extreme Overlapping Raindrops
           4.3 Analysis and Comparison

5          CONCLUSIONS AND FUTURE WORK
           5.1 Conclusion
           5.2 Future Work

           REFERENCES
LIST OF FIGURES

FIGURE NO.    TITLE

2.1    Intensity fluctuations in image
2.2    Pixel looking at raindrops at different distances, z
2.3    Various conditions of rain scenarios
2.4    Drop size distribution and shapes
2.5    Temporal correlations between a pixel and its neighbors
2.6    The field of view of a raindrop
2.7    Average irradiance at a pixel due to a raindrop
2.8    Positive intensity change of unit frame width at a pixel
2.9    The rain detection algorithm applied to a video
3.1    Components of algorithm development
3.2    Input sample of algorithm development process
3.3    Output sample of algorithm development process
3.4    Algorithm development process
3.5    Three consecutive frames of the image sequence
3.6    Flowchart of the algorithm
4.1    Sample input scene frames of 1st version algorithm
4.2    The change of intensity, ΔI
4.3    The artifact of background objects
4.4    The artifact of raindrop
4.5    The output of 1st version algorithm
4.6    Sample input scene frames of 2nd version algorithm
4.7    The change of intensity, ΔI
4.8    The artifact of background objects
4.9    The artifact of raindrop
4.10   The output of 2nd version algorithm
4.11   Sample input scene frames of 3rd version algorithm
4.12   The change of intensity, ΔI
4.13   The artifact of background objects
4.14   The artifact of raindrop
4.15   The output of 3rd version algorithm
4.16   Sample frames of Normal Spread Raindrops condition
4.17   Sample frames of Overlapping Spread Raindrops condition
4.18   Sample frames of Extreme Overlapping Raindrops condition
4.19   Results and Intensity Profiles of Normal Spread Raindrops
4.20   Results and Intensity Profiles of Overlapping Spread Raindrops
4.21   Results and Intensity Profiles of Extreme Overlapping Raindrops
LIST OF SYMBOLS

a   -  Radius
bc  -  Diameter of defocus kernel (blur circle)
c   -  Threshold value
E   -  Irradiance
f   -  Focal length
I   -  Intensity
k   -  Camera gain
L   -  Luminance
N   -  F-number
n   -  Frame number
r   -  Spatial coordinate
R   -  Temporal correlation
T   -  Camera exposure time
t   -  Time
v   -  Velocity
w   -  Width
z   -  Distance
β   -  Slope
Δ   -  Difference
τ   -  Time a drop stays within a pixel's field of view
LIST OF ABBREVIATIONS

AVI       -  Audio Video Interleave
NVD/NVDs  -  Night Vision Device(s)
RGB       -  Red, Green, Blue
CHAPTER 1
INTRODUCTION
An automatic surveillance system is important because it looks after the security and safety of its surroundings automatically. One of the important features of an automatic surveillance system is its ability to automatically track the objects of interest in the scene. Such a system uses a surveillance camera and a digital image processing unit instead of a human to monitor the area of interest, and it has proven to perform better than a human to some extent.
The surveillance camera used is an infrared sensitive camera with night vision built in. Night vision devices (NVDs) rely on a special tube, called an image-intensifier tube, to collect and amplify infrared and visible light. A projection unit, called an IR illuminator, is attached to the NVD. The unit projects a beam of near-infrared light, similar to the beam of a normal flashlight. Invisible to the naked eye, this beam reflects off objects and bounces back to the lens of the NVD, eventually letting the camera "see" at night.
1.1 Problem Statement
The ability of an infrared sensitive camera to "see" at night does bring some problems to the tracking system. One of the major problems encountered in detecting moving objects at night with an infrared sensitive camera is the presence of raindrops. Due to their reflective surfaces, raindrops, especially those near the camera lens, appear as very bright moving objects. As a consequence, these raindrops are detected as valid moving objects, which in turn increases the false detection rate of the tracking system.
1.2 Objective of Project
The objectives of this project are to develop, simulate and analyze an algorithm that removes raindrop effects using the MATLAB® Image Processing Toolbox, and to discriminate the raindrop effects from the scene captured by the infrared sensitive camera, so that effective detection and tracking of moving objects can be undertaken.
1.3 Scope of Project
This project makes use of the MATLAB® Image Processing Toolbox as the algorithm development platform. An image sequence captured by an infrared sensitive camera is used as the input material for the development process. This image sequence is a night scene of moving objects with the interference of a moderate rain condition. The processing is done offline: the input material is first captured and then processed. The processing is done at the frame level, where the sequence is observed and analyzed frame by frame in order to develop an algorithm for raindrop effects elimination.
CHAPTER 2
LITERATURE REVIEW
Outdoor vision systems are used for various purposes such as tracking, recognition and navigation [K. Garg, 2004]. These systems rely on the performance of the processing techniques used, so clear vision is essential if their performance is to be maintained. However, such systems are currently designed without taking the various weather conditions into account. Rain, snow, fog and mist are typical weather conditions that need to be considered when designing outdoor vision systems, because they severely degrade the quality of the images captured of the scene. As a consequence, the vision systems will fail to work properly. In order to develop vision systems that perform under all weather conditions, it is essential to model the visual effects of the various weather conditions and develop algorithms to remove them.
Weather conditions vary widely in their physical properties and in the visual effects they produce in images. Based on these differences, weather conditions can be broadly classified as steady (fog, mist and haze) or dynamic (rain, snow and hail). In the case of steady weather, individual droplets are too small (1-10 μm) to be visible to a camera, and the intensity produced at a pixel is due to the aggregate effect of a large number of droplets within the pixel's solid angle. Hence, volumetric scattering models such as attenuation and airlight [E.J. McCartney, 1975] can be used to adequately describe the effects of steady weather. Algorithms [S.K. Nayar, 2002] have recently been developed to remove the effects of steady weather from images.
On the other hand, the constituent particles of dynamic weather conditions such as rain, snow and hail are larger (0.1-10 mm), and individual particles are visible in the image. In rain, for example, individual raindrops cause long white streaks to appear in the images. Here, the aggregate scattering models previously used for steady conditions are not applicable. The analysis of dynamic weather conditions requires the development of stochastic models that capture the spatial and temporal effects of a large number of particles moving at high speeds (as in rain) and with possibly complex trajectories (as in snow) [K. Garg, 2004].
A number of studies have addressed the problem of dynamic weather conditions in images. Here, the discussion focuses on the problem of images corrupted by rain effects. Based on the readings done, there are two ways of implementing raindrop effects elimination: the real-time processing technique and the offline processing technique. Each technique has its own advantages and disadvantages.
2.1 Real-Time Processing
The real-time processing technique is a process carried out at the time the images are being captured; that is why it is called "real-time" processing, as no further post-processing of the images is required. Based on research done on this image processing technique, the process involves exploiting a few camera parameters depending on the properties of rain and the brightness of the scene. To make sure those parameters are properly exploited, the properties of rain and the brightness of the scene must be well understood.
Rain produces sharp intensity fluctuations in images and videos, which degrade the performance of outdoor vision systems. These intensity fluctuations depend on the camera parameters, the properties of rain and the brightness of the scene. The properties of rain (small drop size, high velocity and low density) make its visibility strongly dependent on camera parameters such as exposure time and depth of field. These parameters can be selected so as to reduce or even remove the effects of rain without altering the appearance of the scene. Conversely, the parameters can also be set to enhance the visibility of rain.
Since this technique requires exploiting the camera parameters, the key to this work is gaining control of those parameters during image acquisition. In many outdoor vision settings, those parameters can easily be controlled and manipulated. Where that is the case, the work proceeds with the following key contributions.
2.1.1 Analysis on Visibility of Rain
Rain consists of a large number of drops falling at high speed. These drops produce high-frequency spatio-temporal intensity fluctuations in videos. The relation of the visibility of rain to the camera parameters, the properties of rain and the scene brightness can be derived as an analytical expression. To do this, the intensities produced by individual drops are modeled first, followed by the effects due to a volume of rain.
Figure 2.1 Intensity fluctuations in image.
Raindrops fall at high velocities relative to the exposure time of the camera, producing severely motion-blurred streaks in images. Also, due to the limited depth of field of a typical camera, the visibility of rain is significantly affected by defocus. To derive the intensities produced by motion blur and defocus, the camera is assumed to have a linear radiometric response. The intensity I at a pixel is related to the radiance L as

    I = k (π/4) (1/N²) T L,    (2.1)

where k is the camera gain, N is the F-number and T is the exposure time. The gain can be adjusted so that image intensities do not depend on the specific N and T settings. This implies that k should change such that k0 is constant, where

    k0 = k (π T) / (4 N²).    (2.2)

Therefore, the image intensity can be written as I = k0 L.
Figure 2.2 Pixel looking at raindrops at different distances, z.
Now, the change in intensity produced by motion blur can be derived based on Figure 2.2, which shows that the change in intensity produced by a falling raindrop is a function of the drop's distance z from the camera. The change in intensity ΔI produced by these drops is given by

    ΔI = Ir − Ib = k0 (τ/T) (Lr − Lb),    (2.3)

where Ir is the motion-blurred intensity at a pixel affected by rain, and Ib = k0 Lb is the background intensity. Lr and Lb are the brightness of the raindrop and the background, respectively, and T is the exposure time of the camera. τ ≅ 2a/v is the time that a drop stays within the field of view of a pixel, and v is the drop's fall velocity. The equation shows that the change in intensity produced by drops in the region z < zm decreases as 1/T with exposure time and does not depend on z.
On the other hand, the change in intensity produced by drops far from the camera, that is z > zm, is given by

    ΔI = k0 (4 f a² / (z v)) (1/T) (Lr − Lb).    (2.4)

This shows that the change in intensity ΔI now depends on the drop's distance from the camera and decreases as 1/z. However, for distances greater than Rzm, ΔI is too small to be detected by the camera. Therefore, the visual effects of rain are only due to raindrops that lie close to the camera (0 < z < Rzm), which is referred to as the rain visible region.
While the motion-blur effects are related to the drop's distance from the camera, the defocus effects are related to the limited depth of field of the camera. Defocus can be approximated as a spreading of the change in intensity produced by a focused streak uniformly over the area of the defocused streak. Hence, the change in intensity ΔId due to a defocused drop is related to the change in intensity ΔI of a focused streak as

    ΔId = (A / Ad) ΔI = [w (vi T) / ((w + bc)(vi T + bc))] ΔI,    (2.5)

where A and Ad are the areas of the focused and the defocused rain streak, respectively, w is the width of the focused drop in pixels, bc is the diameter of the defocus kernel (blur circle), vi is the image velocity of the drop, and T is the exposure time of the camera. Since raindrops fall at high velocity, it can be assumed that vi T >> bc. Hence, the above expression simplifies to

    ΔId = [w / (w + bc)] ΔI.    (2.6)

Therefore, the intensity change produced by a defocused and motion-blurred raindrop can be derived by simply substituting ΔI from equation (2.3) for a drop that lies close to the camera (z < zm),
    ΔId = [w / (w + bc)] k0 (τ/T) (Lr − Lb),    (2.7)
and substituting w = 1 and ΔI from equation (2.4) for a drop that lies in the region z > zm:

    ΔId = [1 / (bc + 1)] k0 (4 f a² / (z v)) (1/T) (Lr − Lb).    (2.8)

2.1.2 Camera Parameters for Rain Removal
There are a few ways of manipulating the camera parameters discussed above in order to remove raindrop effects. However, not all of those parameters need to be manipulated at the same time; the parameters to be set depend on the condition of the scene to be captured. Scene conditions that need to be accounted for include scenes with fast-moving or slow-moving objects, scenes far from or close to the camera, and scenes with heavy rain.
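As a rough numerical illustration of how these parameter choices act on equations (2.3) and (2.6), the following MATLAB sketch evaluates the change in intensity for a single hypothetical drop under a few exposure times. Every value in it (drop radius, fall velocity, radiance gap, blur-circle diameter) is an assumption chosen for illustration, not a measurement from this work.

    % Illustrative evaluation of equations (2.3) and (2.6); all values assumed.
    a   = 1e-3;              % drop radius in meters (assumed)
    v   = 9;                 % fall velocity in m/s (assumed)
    tau = 2 * a / v;         % time the drop stays in a pixel's field of view
    k0  = 1;                 % normalized camera gain (assumed)
    dL  = 100;               % Lr - Lb, drop/background radiance gap (assumed)

    T  = [1/120, 1/60, 1/30];           % candidate exposure times in seconds
    dI = k0 * (tau ./ T) * dL           % equation (2.3): longer T, weaker streak

    w  = 1;                             % focused streak width in pixels
    bc = 5;                             % blur-circle diameter in pixels (assumed)
    dI_defocused = (w / (w + bc)) * dI  % equation (2.6): defocus weakens it more

Under these assumed numbers, doubling the exposure time halves ΔI, and defocusing the drop shrinks it by a further factor of w/(w + bc), which is exactly the removal mechanism exploited here.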
Figure 2.3 shows some common scenarios where rain produces strong effects, and the results of rain removal by manipulating the camera parameters. Note that in all these cases the effects of rain were reduced during image acquisition and no post-processing was needed. Also, the visual effects of rain were reduced without affecting the scene appearance.
Figure 2.3 Various conditions of rain scenarios.
2.1.3 Summary
This work derived analytical expressions that show how the visibility of rain is affected by factors such as the camera parameters, the properties of rain and the brightness of the scene. It showed that the strong dependence of the visibility of rain on camera parameters can be exploited to provide a simple and effective method of reducing the effects of rain during image acquisition. However, this method is less effective in scenes with very heavy rain or with fast-moving objects close to the camera. In such cases, post-processing might be required to remove the rain effects.
2.2 Offline Processing
The offline processing technique is a process carried out after the entire image sequence has been captured. The images are then analyzed to derive a suitable processing algorithm for enhancing them. Based on research done using this technique, the processes involved are the detection and the removal of raindrop effects in an image sequence. Both processes require a comprehensive analysis of the visual effects of rain on imaging systems, through an understanding of the physical properties and characteristics of raindrops, such as their spatial distribution, shapes, sizes and velocities [K. Garg, 2004].
Rain consists of a distribution of a large number of drops of various sizes, falling at high velocities. Each drop behaves like a transparent sphere, refracting and reflecting light from the environment towards the camera. An ensemble of such drops falling at high velocities results in time-varying intensity fluctuations in images and videos. In addition, due to the finite exposure time of the camera, intensities due to rain are motion-blurred and therefore depend on the background. Thus, the visual manifestations of rain are a combined effect of the dynamics of rain and the photometry of the environment.
2.2.1 Physical Properties of Rain
Rain is a collection of randomly distributed water droplets of different shapes
and sizes that move at high velocities. The physical properties of rain have been
extensively studied in atmospheric sciences. The size of a raindrop typically varies
from 0.1 mm to 3.5 mm. The density of drops decreases exponentially with the drop
size. The shape of a drop can be expressed as a function of its size. Smaller raindrops
are generally spherical in shape while larger drops resemble oblate spheroids.
Figure 2.4 Drop size distribution and shapes.
In a typical rainfall, most of the drops are less than 1 mm in size; hence, most raindrops are spherical, and this approximation is used to model the raindrops. As a drop falls through the atmosphere, it reaches a constant terminal velocity. The terminal velocity v of a drop is related to its radius a and is given by

    v = 200 √a,    (2.9)

where a is in meters and v is in meters per second.
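As a quick sanity check of equation (2.9), the following MATLAB lines evaluate the terminal velocity for a few drop radii within the range quoted above; the chosen radii are arbitrary examples.

    % Terminal velocity of equation (2.9) for a few drop radii (a in meters).
    a = [0.1e-3, 0.5e-3, 1e-3, 3e-3];  % radii from 0.1 mm to 3 mm
    v = 200 * sqrt(a)                  % roughly 2.0, 4.5, 6.3 and 11.0 m/s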
The individual raindrops are distributed randomly in 3D space. This
distribution is usually assumed to be uniform. Moreover, it can be assumed that the
statistical properties of the distribution remain constant over time. These assumptions
are applicable in most computer vision scenarios.
2.2.2 Appearance Model of Rain
The complex spatial and temporal intensity fluctuations produced in images by rain depend on several factors: the drop distribution and velocities, the environment illumination and background scene, and the intrinsic parameters of the camera. To model the appearance of rain, a correlation model that captures the dynamics of rain, based on the distribution and velocities of raindrops, is developed first, followed by a physics-based motion-blur model that describes the brightness produced by streaks of rain.
2.2.2.1 Dynamics of Rain
The dynamics of rain are useful for detecting rain and its direction. This is done by computing the temporal correlation between a pixel and its neighbors; the correlation is high in the direction of rain, so the direction of rain can be determined.
To do this, the image projections of the drops are considered, but not their intensities. Thus, the dynamics of rain may be represented by a binary field

    b(r, t) = 1 if a drop projects to location r at time t; 0 otherwise,    (2.10)

where r represents the spatial coordinates in the image and t is time.
Initially, both space and time are considered to be continuous, and the drop distribution in a volume is assumed to be uniform over space and time. Under this condition, the binary field b(r, t) is wide-sense stationary in space and time. This implies that the correlation function depends only on the differences in space (Δr = r1 − r2) and time (Δt = t1 − t2). That is,

    Rb(r1, t1; r2, t2) ≡ (1/L) ∫₀ᴸ b(r1, t1 + t) b(r2, t2 + t) dt = Rb(Δr, Δt),    (2.11)

where the correlation Rb is computed over a large time period [0, L]. Rb(Δr, Δt) can be computed by measuring the temporal correlation with time lag Δt between the values of the binary field at points r and r + Δr. An important constraint arises due to the straight-line motion of the drops. Consider a drop that falls with image velocity vi. After time Δt, the displacement of this drop is vi Δt. Hence, the binary fields at time instants t and t + Δt are related as

    b(r + vi Δt, t + Δt) = b(r, t).    (2.12)

As a result, the correlation Rb(r, t; r + vi Δt, t + Δt) is high. From equation (2.11), this yields

    Rb(r, t; r + vi Δt, t + Δt) = Rb(vi Δt, Δt).
This implies that the values of the binary field b at any two image coordinates separated by vi Δt in space are correlated with time lag Δt. This is illustrated in Figure 2.5.

Figure 2.5 Temporal correlations between a pixel and its neighbors.
2.2.2.2 Photometry of Rain
Raindrops behave like lenses, refracting and reflecting scene radiance towards the camera. They have a large field of view of approximately 165º, and the incident light that is refracted towards the camera is attenuated by only 6%.
Figure 2.6 The field of view of a raindrop.
Based on these optical properties of a drop, the following observations can be made.
• Raindrops refract light from a large solid angle of the environment (including the sky) towards the camera. Specular and internal reflections further add to the brightness of the drop. Thus, a drop tends to be much brighter than its background (the portion of the scene it occludes).
• The solid angle of the background occluded by a drop is far less than the total field of view of the drop itself. Thus, in spite of being transparent, the average brightness within a stationary drop (without motion blur) does not depend strongly on its background.
Falling raindrops produce motion-blurred intensities due to the finite
integration time of a camera. These intensities are seen as streaks of rain. Unlike a
stationary drop, the intensities of a rain streak depend on the brightness of the
(stationary) drop as well as the background scene radiances and integration time of
the camera.
Consider a video camera with a linear radiometric response and exposure (integration) time T, observing a scene with rain. In order to determine the intensity Id produced at a pixel affected by a raindrop, the irradiance of the pixel over the time duration T needs to be examined.
Figure 2.7 Average irradiance at a pixel due to a raindrop.
Figure 2.7 shows a raindrop passing through a pixel within the time interval [tn, tn + T]. The time τ for which a drop projects onto a pixel is far less than T. Thus, the intensity Id is a linear combination of the irradiance Ebg due to the background of the drop and the irradiance Ed due to the drop itself:

    Id(r) = ∫₀^τ Ed dt + ∫_τ^T Ebg dt.    (2.13)
If the motion of the background is slow, Ebg can be assumed to be constant over the exposure time T. Then, the above equation simplifies to

    Id = τ Ēd + (T − τ) Ebg,    Ēd = (1/τ) ∫₀^τ Ed dt,    (2.14)

where Ēd is the time-averaged irradiance due to the drop. The intensity at a pixel that does not observe a drop is denoted Ibg, where Ibg = Ebg T. Thus, the change in intensity ΔI at a pixel due to a drop is

    ΔI = Id − Ibg = τ (Ēd − Ebg).    (2.15)
Raindrops are much brighter than their backgrounds. Thus, Ēd > Ebg and ΔI is positive. By substituting Ibg = Ebg T into equation (2.15), the relation between ΔI and Ibg is

    ΔI = −β Ibg + α,    β = τ/T,    α = τ Ēd.    (2.16)

The time τ for which a drop remains within a pixel is a function of the physical properties of the drop (size and velocity). It is constant, and hence β is also constant, for all pixels within a streak. In addition, since the brightness of the (stationary) drop is weakly affected by the background intensity, the average irradiance Ēd can be assumed to be constant for pixels that lie on the same streak. Thus, the changes in intensity ΔI observed at all pixels along a streak are linearly related to the background intensities Ibg occluded by the streak.
The approximate maximum value of τ is 1.18 ms, which is much less than the typical exposure time T ≈ 30 ms of a video camera. As a result, the slope β is shown to lie within the range 0 < β < 0.039. Based on these bounds, the following observations can be made:
• The time a drop stays at a pixel is less than the integration time of a typical video camera. Thus, a drop produces a positive intensity change (ΔI > 0) of unit frame width at a pixel, as illustrated in Figure 2.8.
• The changes in intensity observed at all pixels along a rain streak are linearly related to the background intensities Ibg occluded by the streak. The slope β of this linear relation depends only on the physical properties of the raindrop. This can be used to detect rain streaks.

Figure 2.8 Positive intensity change of unit frame width at a pixel.
2.2.3 Detection of Rain in Video
Based on the dynamics and photometric models of rain, a robust algorithm to detect (segment) regions of rain in videos is developed. Although those models do not explicitly take scene motions into account, they provide strong constraints that are sufficient to disambiguate rain from other forms of scene motion.
2.2.3.1 Photometric Model Constraints
Consider a video of a scene captured in rain, such as the one shown in Figure 2.9. The candidate pixels affected by rain in each frame of the video are detected using the photometric model constraints derived in section 2.2.2.2. It was shown that a drop produces a positive intensity fluctuation of unit frame duration. Hence, to find candidate rain pixels in the nth frame, only the intensities In−1, In and In+1 at each pixel, corresponding to the three frames n−1, n and n+1 respectively, need to be considered (see Figure 2.8). If the background remains stationary in these three frames, then the intensities In−1 and In+1 must be equal, and the change in intensity ΔI due to the raindrop in the nth frame must satisfy the constraint

    ΔI = In − In−1 = In − In+1 ≥ c,    (2.17)

where c is a threshold that represents the minimum change in intensity due to a drop that is detectable in the presence of noise. The result of applying this constraint with c = 3 gray levels is shown in Figure 2.9(a). The selected pixels (white) include almost all the pixels affected by rain.
In the presence of object motion in the scene, the above constraint also detects several false positives; some can be seen in and around the moving person in Figure 2.9(a). In order to reduce such false positives, the photometric constraint of equation 2.16 is applied as follows:
• For each individual streak in frame n, the intensity change ΔI along the streak is checked for a linear relation to the background intensity In−1 using equation 2.16.
• The slope β of the linear fit is estimated.
• Streaks that do not satisfy the linearity constraint, or whose slopes lie outside the acceptable range β ∈ [0, 0.039], are rejected.
Figure 2.9(b) shows a significant decrease in false positives after applying this constraint. By applying these constraints to all the frames, an estimate of the binary rain field b is obtained (see Figure 2.9(c)).
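A minimal MATLAB sketch of this candidate selection is given below, assuming the grayscale frames have been stacked into a three-dimensional array. The function name, the array layout and the use of the threshold c as a tolerance on the two differences are assumptions of this sketch, not details from the original work.

    % Candidate rain pixels by the photometric constraint of equation (2.17).
    % frames: rows x cols x numFrames grayscale array; n: frame index;
    % c: minimum detectable intensity change (e.g. 3 gray levels).
    function candidates = detectRainCandidates(frames, n, c)
        Iprev = double(frames(:, :, n - 1));
        Icurr = double(frames(:, :, n));
        Inext = double(frames(:, :, n + 1));
        dI1 = Icurr - Iprev;                 % In - In-1
        dI2 = Icurr - Inext;                 % In - In+1
        % Both differences must be positive, exceed the threshold c, and be
        % roughly equal (reusing c as the noise tolerance is an assumption).
        candidates = (dI1 >= c) & (dI2 >= c) & (abs(dI1 - dI2) < c);
        % The streak-wise linearity check of equation 2.16 would follow: for
        % each connected streak S, fit p = polyfit(Iprev(S), dI1(S), 1), take
        % beta = -p(1), and reject streaks with beta outside [0, 0.039].
    end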
2.2.3.2 Dynamics Model Constraints
Although a significant reduction in false positives is achieved using the photometric constraint, some false positives remain. Therefore, the dynamics constraint is applied to reduce the false positives further. In section 2.2.2.1, it was shown that in a binary field produced by rain, strong temporal correlation exists between neighboring pixels in the direction of rain. Using the estimated binary field b, the zeroth-order temporal correlation Rb of a pixel is computed with each of its neighbors in a local (l × l) neighborhood, over a set of frames {n, n−1, ..., n−f}.
Figure 2.9(d) shows the correlation values obtained for all (11 × 11) neighborhoods in frame n, computed using the previous f = 30 frames. Bright regions indicate strong correlation. The direction and strength of correlation are computed for each neighborhood center and depicted in Figure 2.9(e) as a needle map. The direction of a needle indicates the direction of correlation (the direction of the rainfall) and its length denotes the strength of correlation (the strength of the rainfall). The needle map is kept sparse for clarity.
Figure 2.9 The rain detection algorithm applied to a video.
Weak and non-directional correlations occur at pixels with no rain and hence
are rejected. Thus, constraints of the photometric and dynamics models can be used
to effectively segment the scene into regions with and without rain, even in the
presence of complex scene motions.
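The sketch below computes a simplified, global version of this zeroth-order temporal correlation from the estimated binary field; the published method evaluates it per (l × l) neighborhood, so this global variant and every name in it are assumptions made for illustration.

    % Simplified zeroth-order temporal correlation over spatial offsets.
    % b: rows x cols x frames binary rain field; n: current frame index;
    % l: neighborhood size (e.g. 11); f: number of past frames (e.g. 30).
    function R = temporalCorrelation(b, n, l, f)
        half = floor(l / 2);
        bn = double(b(:, :, n));
        R = zeros(l, l);
        for dy = -half:half
            for dx = -half:half
                acc = 0;
                for k = 1:f
                    past = double(b(:, :, n - k));
                    shifted = circshift(past, [dy, dx]);  % spatial offset
                    acc = acc + sum(bn(:) .* shifted(:));
                end
                R(dy + half + 1, dx + half + 1) = acc;
            end
        end
        % A strong, elongated peak in R marks the direction of the rainfall;
        % weak or non-directional responses correspond to non-rain motion.
    end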
2.2.4 Removal of Rain from Video
Once the video is segmented into rain and non-rain regions, the following simple method is applied to remove rain from each frame of the video. For each pixel with rain in the nth frame, the intensity In is replaced with an estimate of the background obtained as (In−1 + In+1)/2 (see Figure 2.8). This step removes most of the rain in the frame. However, since drop velocities are high compared to the exposure time of the camera, the same pixel may see different drops in consecutive frames; such cases are not accounted for by the detection algorithm. Fortunately, the probability of raindrops affecting a pixel in more than three consecutive frames is negligible. In the case of a pixel being affected by raindrops in two or three consecutive frames, rain is removed by assigning the average of the intensities of the two neighboring pixels (on either side) that are not affected by raindrops. This additional step can be very effective for rain removal.
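A minimal sketch of the replacement step is shown below, assuming a logical mask of the pixels detected as rain in frame n. The names are hypothetical, and the sketch covers only the single-frame case; the two-to-three-consecutive-frame case described above would need the extra spatial averaging step.

    % Replace detected rain pixels with the temporal background estimate.
    % frames: rows x cols x numFrames grayscale array; rainMask: logical map
    % of rain pixels in frame n.
    function cleaned = removeRain(frames, rainMask, n)
        Iprev = double(frames(:, :, n - 1));
        Inext = double(frames(:, :, n + 1));
        cleaned = double(frames(:, :, n));
        est = (Iprev + Inext) / 2;          % (In-1 + In+1) / 2, see Figure 2.8
        cleaned(rainMask) = est(rainMask);  % touch only rain-affected pixels
        cleaned = uint8(cleaned);
    end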
2.2.5 Summary
This work developed a comprehensive model for the visual appearance of rain. Based on this model, efficient algorithms for the detection and removal of rain from videos are presented. Note that simple temporal filtering methods are not effective in removing rain, since they are spatially invariant and hence degrade the quality of the image in regions without rain. In contrast, the method in this work explicitly detects the pixels affected by rain and removes the contribution of rain only from those pixels, preserving the temporal frequencies due to object and camera motions.
CHAPTER 3
METHODOLOGY
Processing time-varying image sequences to preserve or enhance the visibility of important parts of the image that are degraded by moderate raindrop effects can be done using an algorithm capable of manipulating the visual properties of raindrops in order to eliminate their effects. Developing an algorithm with this capability involves the components shown in Figure 3.1.
Input Material: scene with raindrop effects
        ↓
Algorithm Development Process: raindrop effects removal
        ↓
Output: scene without raindrop effects

Figure 3.1 Components of algorithm development.
This chapter discusses the approach and methodology used to accomplish the objectives defined for this project. Three components are involved, as illustrated in Figure 3.1: the first is the input material; the second is the process of developing the algorithm; and the third is the output produced by the implementation of the developed algorithm.
These three components are interconnected. The input material needs to be processed to eliminate the raindrop effects. The algorithm development process needs both the input material and the output produced during development. The output, in turn, is compared against the input material to judge how much the process has enhanced the visibility of the object of interest and eliminated the raindrop effects. The next section details these three components.
3.1 Input and Output
The material used as input to the algorithm development process serves as a sample or specimen, confined within the scope of the project: the input material is a scene captured by an infrared sensitive surveillance camera. This scene is an outdoor night scene with a moving object, which is the object of interest for the motion tracking system. The interference in the scene is caused by a moderate rain condition.
The output resulting from the implementation of the developed algorithm is important because it serves as a feedback signal to the development process. The output is compared with its original input to find out how well the developed algorithm has done its job. Any necessary improvement can then be made to the algorithm, based on the observation and analysis of the output in comparison with its original input.
Figure 3.2 Input sample of algorithm development process.

Figure 3.2 shows a few frames used as the input sample to the algorithm development process. The input sample shows a night scene containing the objects of interest, corrupted by a moderate rain condition. Figure 3.3 shows the output sample of the algorithm development process: the same scenes as in Figure 3.2, but cleaned of the effects of raindrops.

Figure 3.3 Output sample of algorithm development process.
3.2 Algorithm Development Process
Developing an algorithm for eliminating the effects of raindrops in an image sequence involves several procedures, as shown in Figure 3.4.

Observation → Analysis → Algorithm → Experiment

Figure 3.4 Algorithm development process.
3.2.1 Observation
Using the sample scene provided (AVI video format), a few thousand frames were extracted from approximately five minutes of video. Based on these frames, a thorough observation was carried out, the target being to find any unique visual properties of the raindrop effects that could be manipulated to reduce or eliminate the effects themselves. From the observation, a few visual properties were found that uniquely describe the raindrop effects in the image sequence.
Those unique visual properties are as follows. First, raindrops appear in the image at a very high intensity; the intensity difference is large compared to the background intensity, which makes them obvious to the observer. Second, raindrops move at a very high velocity; as a consequence, a drop tends to appear as a white streak across the screen, and it appears only once in a sequence of consecutive frames. Third, the raindrops that appear in the image are the drops closer to the camera lens, while the drops farther away seem to be invisible.
3.2.2 Analysis
The three unique properties of the raindrop effects found earlier have the potential to be manipulated to eliminate the effects of raindrops in the image sequence. However, these three properties need to be analyzed for possible image enhancement approaches. It was found that the raindrops also have the following characteristics.
Firstly, it was found that the average intensity of a frame that has raindrop effects differs from the average intensity of either the previous or the next frame. This was measured both with and without the appearance of other moving objects in the image sequence. This exactly resembles equation 2.17:

    ΔI = In − In−1 = In − In+1 ≥ c,    (3.1)

where c is a threshold that represents the minimum change in intensity due to a drop that is detectable in the presence of noise.
Figure 3.5 Three consecutive frames of the image sequence: the (n-1)th, nth and (n+1)th frames.
Secondly, it was found that no identical raindrop appears in consecutive frames: no raindrop with the same shape and the same intensity appears at the same location throughout the entire image sequence. A particular raindrop appears only once in consecutive frames, meaning that a raindrop that is absent in the (n-1)th frame and then appears in the nth frame will have disappeared by the (n+1)th frame.
Finally, it was found that the object of interest appears in consecutive frames, but moves to a slightly different location in every frame. This happens because of the finite exposure time of the camera. The object of interest is slow compared to the camera exposure time, so it appears in every frame, only at a slightly different location. Raindrops are fast compared to the camera exposure time, so they tend to appear only once in consecutive frames.
Once the analysis was done, an inference could be made: the difference in intensity between a frame and its consecutive frames, either the previous or the next, is the result of, firstly, the intensity of the raindrops that appear in that frame and, secondly, the intensity difference caused by the change in location of the object of interest. Therefore, manipulating these factors in an algorithm is the key to eliminating the raindrop effects.
3.2.3 Algorithm
An algorithm that manipulates the results of the analysis of the raindrops' unique properties was developed using the MATLAB® Image Processing Toolbox.
Start
  |
Input: scene with raindrop effects
  |
Frame Extraction
  (1st, 2nd, 3rd, ..., (n-1)th, nth, (n+1)th, ..., kth frame)
  |
Obtain Difference of Intensity
  For the (n-1)th, nth and (n+1)th frames:
    ΔI1 = In − In−1
    ΔI2 = In − In+1
  (ΔI1 and ΔI2 are equal if there is no movement of background
  objects in the image sequence.)
  |
Obtain Artifact of Background Object
  ΔI12 = ΔI1 − ΔI2
  ΔI21 = ΔI2 − ΔI1
  |
Obtain Artifact of Raindrops
  ΔI1(new) = ΔI1 − ΔI12
  ΔI2(new) = ΔI2 − ΔI21
  (Now ΔI1(new) and ΔI2(new) are equal.)
  |
Obtain the Output
  In(new) = In − ΔI1(new)   or   In(new) = In − ΔI2(new)
  |
Output: scene without raindrop effects
  |
n = n + 1 and repeat for the next frame, until the last frame; then End

Figure 3.6 Flowchart of the algorithm.
The algorithm illustrated in Figure 3.6 is an RGB image processing algorithm that processes one frame per execution. The process repeats: at the end of each loop, the nth frame is shifted to the next frame, n = n + 1, and this continues until n + 1 equals the last frame number of the video.
The algorithm begins with the frame extraction process, whose function is to convert the input AVI video into a sequence of frames. This process is done once for the whole processing run.
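A sketch of this step using MATLAB's current VideoReader interface is shown below; the original 2007 work predates VideoReader and may have used an older reader such as aviread, and the file name here is hypothetical.

    % Extract all frames of the input AVI into a cell array, once per run.
    vr = VideoReader('night_rain_scene.avi');  % hypothetical file name
    frames = {};
    k = 0;
    while hasFrame(vr)
        k = k + 1;
        frames{k} = readFrame(vr);  % each cell holds one RGB frame
    end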
The next step is to obtain the difference of intensity between the current frame and its adjacent frames, the previous and the next. The symbols used for the current, previous and next frames are nth, (n-1)th and (n+1)th respectively. If there were no movement of background objects in the image sequence, the results of this process, ΔI1 and ΔI2, would be equal, since the difference would be caused only by the raindrops. However, this possibility is very small, because the surveillance camera is located on a tall building facing down onto a road and a parking space in front of the building. Therefore, the results will contain the differences of intensity caused by both the raindrops and the movement of background objects in the scene.
The third process is to obtain the artifact of the background object. The movement of objects in the scene affects the difference of intensity obtained from the previous process; this process extracts that artifact for use in the next process.
The fourth process is to obtain the artifact of the raindrops. The second process produced a difference of intensity containing the artifact of the raindrops along with the artifact of the background object, and the third process extracted the artifact of the background object from the result of the second process. Therefore, in this process, the artifact of the raindrops can be extracted by subtracting the result of the third process from the result of the second process.
Finally, the output is obtained by subtracting the artifact of the raindrops extracted in the previous process from the currently processed frame. The result is a new nth frame without raindrops in the image.
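The loop body can be written directly from the flowchart. The sketch below is a minimal MATLAB rendering of one iteration, with a frame offset d added so that d = 1, 2 and 3 reproduce the 1st, 2nd and 3rd version algorithms described in section 3.2.4; the function and variable names are assumptions of this sketch.

    % One iteration of the Figure 3.6 flowchart for the nth frame.
    % frames: cell array of RGB frames; d: frame offset (1 for the 1st version).
    function InNew = processFrame(frames, n, d)
        In    = double(frames{n});
        Iprev = double(frames{n - d});
        Inext = double(frames{n + d});

        dI1 = In - Iprev;            % difference of intensity, previous frame
        dI2 = In - Inext;            % difference of intensity, next frame

        dI12 = dI1 - dI2;            % artifact of background object
        dI21 = dI2 - dI1;

        dI1new = dI1 - dI12;         % artifact of raindrops
        dI2new = dI2 - dI21;         % equal to dI1new when the model holds

        InNew = uint8(In - dI1new);  % new nth frame without raindrops
    end

The caller simply advances n after each call, for example: for n = d + 1 : k - d, output{n} = processFrame(frames, n, d); end, where k is the last frame number.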
3.2.4 Experiment
Once the algorithm was developed, experiments were performed to determine whether any improvement to the algorithm was necessary. The results produced by the implementation of the algorithm were observed and analyzed, and a few changes were made to the algorithm on that basis. This section summarizes the observation and analysis done on the results produced by the changes made to the algorithm, and also discusses the changes made to the algorithm itself.
Observation and analysis were done by comparing the results obtained with the original input. The things observed include how cleanly the raindrop effects were eliminated from the scene, how well the object of interest was enhanced in the image, whether any unwanted artifacts emerged within the image, and whether any part of the object of interest was accidentally deleted by the algorithm. All these observations were done thoroughly before analyzing the possibility of improving the performance.
From the observation and analysis, three versions of the algorithm were developed, including the first version developed from the beginning. The other two versions were modified from the first; the modifications involved the process of obtaining the difference of intensity, in which the frames used for each processing step were changed. Instead of using the (n-1)th and (n+1)th frames as the previous and next frames, the second version algorithm uses the (n-2)th and (n+2)th frames, and the third version algorithm uses the (n-3)th and (n+3)th frames. The results and analysis of these three versions of the algorithm are discussed in detail in the next chapter.
CHAPTER 4
RESULT AND ANALYSIS
Three versions of the algorithm have been developed, each with its own advantages and disadvantages. This chapter presents the results of the three versions, starting with the results of the algorithms' processes, followed by the results for multiple raindrop visual conditions versus the multiple algorithm versions, and ending with an analysis of performance and a comparison between the three versions.
4.1 Results of Algorithms Processes
The step-by-step output produced by each process of the algorithm is presented in this section. The processes involved are as follows: first, obtaining the difference of intensity; second, obtaining the artifact of the background object; third, obtaining the artifact of the raindrops; and fourth, obtaining the output. The scene used as input is the same for all the algorithms, and the sample frames extracted from the scene to aid this presentation are also taken at the same point. The features highlighted in the sample frames are the presence of moving objects, including the object of interest, and the interference of raindrops.
4.1.1 1st Version Algorithm

Figure 4.1 shows the sample frames extracted from the scene as input to the 1st version algorithm. The frames used in each processing step as the current frame, previous frame and next frame are the nth, (n-1)th and (n+1)th frames respectively.

Figure 4.1 Sample input scene frames of 1st version algorithm ((n-1)th, nth and (n+1)th panels).
The results produced by each level of processing are shown step by step as follows:
• The change of intensity is obtained.

Figure 4.2 The change of intensity, ΔI (panels: ΔI1, ΔI2).
• The artifact of background objects is obtained.

Figure 4.3 The artifact of background objects (panels: ΔI12, ΔI21).
• The artifact of raindrops is obtained.

Figure 4.4 The artifact of raindrop.
• The output is obtained.

Figure 4.5 The output of 1st version algorithm (original and processed panels).
4.1.2 2nd Version Algorithm

Figure 4.6 shows the sample frames extracted from the scene as input to the 2nd version algorithm. The frames used in each processing step as the current frame, previous frame and next frame are the nth, (n-2)th and (n+2)th frames respectively.

Figure 4.6 Sample input scene frames of 2nd version algorithm ((n-2)th, nth and (n+2)th panels).
The results produced by each level of processing are shown step by step as follows:
• The change of intensity is obtained.

Figure 4.7 The change of intensity, ΔI (panels: ΔI1, ΔI2).
• The artifact of background objects is obtained.

Figure 4.8 The artifact of background objects (panels: ΔI12, ΔI21).
• The artifact of raindrops is obtained.

Figure 4.9 The artifact of raindrop.
• The output is obtained.

Figure 4.10 The output of 2nd version algorithm (original and processed panels).
4.1.3 3rd Version Algorithm

Figure 4.11 shows the sample frames extracted from the scene as input to the 3rd version algorithm. The frames used in each processing step as the current frame, previous frame and next frame are the nth, (n-3)th and (n+3)th frames respectively.

Figure 4.11 Sample input scene frames of 3rd version algorithm ((n-3)th, nth and (n+3)th panels).
The results produced by each level of processing are shown step by step as follows:
• The change of intensity is obtained.

Figure 4.12 The change of intensity, ΔI (panels: ΔI1, ΔI2).
• The artifact of background objects is obtained.

Figure 4.13 The artifact of background objects (panels: ΔI12, ΔI21).
• The artifact of raindrops is obtained.

Figure 4.14 The artifact of raindrop.
• The output is obtained.

Figure 4.15 The output of 3rd version algorithm (original and processed panels).
4.2 Results of Multiple Raindrops Visual Conditions
Three visual conditions of raindrops were tested using the three versions of the algorithm developed. The objectives were to analyze the robustness and reliability of the algorithms, to compare the performance of the algorithms under various raindrop visual conditions, and to identify any improvements necessary to the algorithms as future work. This section shows the results of the experiments as a comparison between the algorithms.
4.2.1 Normal Spread Raindrops
This is a visual condition in which the drops appear evenly distributed in a frame and in a sequence of frames. There is no overlap between the drops in a sequence of frames, especially in consecutive frames. Sample frames of the raindrops are shown in Figure 4.16; the results of processing and their intensity profiles are shown in Figure 4.19.
4.2.2 Overlapping Spread Raindrops
This is a visual condition in which the drops appear evenly distributed in a frame, but a little overlap occurs in a sequence of frames, especially between consecutive frames. Sample frames of the raindrops are shown in Figure 4.17; the results of processing and their intensity profiles are shown in Figure 4.20.
4.2.3 Extreme Overlapping Raindrops
This is a condition in which the drops appear unevenly distributed in a frame and severely overlap in a sequence of frames, especially between consecutive frames. Sample frames of the raindrops are shown in Figure 4.18; the results of processing and their intensity profiles are shown in Figure 4.21.
Figure 4.16 Sample frames of Normal Spread Raindrops condition.
Figure 4.17 Sample frames of Overlapping Spread Raindrops condition.
Figure 4.18 Sample frames of Extreme Overlapping Raindrops condition.
Figure 4.19 Results and Intensity Profiles of Normal Spread Raindrops (panels: the original image and the results of the 1st, 2nd and 3rd version algorithms, each with its intensity profile at row y = 103).
Figure 4.20 Results and Intensity Profiles of Overlapping Spread Raindrops (panels: the original image and the results of the 1st, 2nd and 3rd version algorithms, each with its intensity profile at row y = 32).
Figure 4.21 Results and Intensity Profiles of Extreme Overlapping Raindrops (panels: the original image and the results of the 1st, 2nd and 3rd version algorithms, each with its intensity profile at row y = 108).
4.3 Analysis and Comparison
The analysis and comparison discussed in this section cover the results of the algorithms' processes, and the results for multiple raindrop visual conditions versus the multiple algorithm versions.
The results of the algorithms' processes are presented at the beginning of the chapter deliberately, to show the different approach used in each version of the algorithm and the output resulting from each approach. The flow of the three versions is the same, since they all originate from the same algorithm; the only difference is in the way the three input frames are extracted from the scene. The 1st version algorithm extracts the current frame along with its consecutive previous and next frames. The 2nd version algorithm extracts the current frame, but the previous and next frames are not consecutive to it: they are one frame further away on each side. The same applies to the 3rd version algorithm, where the previous and next frames are two frames further away on each side.
The results produced at each level of processing show the differences they are expected to show. The difference of intensity obtained by the 1st version algorithm has the least artifact of background objects, the 2nd version has more, and the 3rd version has the most. As expected, the further the previous and next frames are from the current frame, the more artifact of background objects there will be. This is confirmed by the comparison between the results of the three versions. The result of the 3rd version algorithm shows the greatest loss of background objects along with the elimination of the raindrop effects; unwanted artifacts also emerge around the moving background objects in the form of ghost layers. The 3rd version algorithm gives a scene free of raindrop effects, but with some drawbacks to the object of interest in return.
Multiple raindrop visual conditions were tested on the multiple algorithm versions to evaluate them over various conditions. The visual conditions of the raindrops, the normal spread raindrops, the overlapping spread raindrops and the extreme overlapping raindrops, are ranked as easy, moderate and difficult respectively. All three algorithms went through several runs with all the raindrop visual conditions as inputs. The results show some interesting performance characteristics of the three versions; each has its own advantages and disadvantages.
First is the easy raindrop visual condition, the normal spread raindrops, as input to the algorithms. All three versions successfully eliminated the raindrops from the scene (refer to Figure 4.19). However, as discussed before, some losses occurred within the results, and they involve the object of interest. The losses are observed to be greatest with the 3rd version algorithm and smaller with the 2nd version, while the 1st version seems to have no losses at all.
Second is the moderate raindrop visual condition, the overlapping spread raindrops, as input to the algorithms. The results produced by the three versions show some interesting performance (refer to Figure 4.20). It is obvious that the 1st version algorithm has the poorest performance of the three: the raindrop effects that visually overlap in frames consecutive to the currently processed frame limit the 1st version's ability to remove them. However, the 2nd and 3rd version algorithms are able to get around this overlapping problem and eliminate the raindrop effects from the scene, but at a cost: they still bring ghost layers with them into the results. If these drawbacks are tolerable, then the 2nd and 3rd versions obviously do a good job of removing raindrop effects.
Finally is the difficult raindrop visual condition, the extreme overlapping raindrops, as input to the algorithms. The results produced by the three versions show another surprising performance (refer to Figure 4.21). The results show that the 1st and 2nd versions of the algorithm failed to clean the scene of the effects of raindrops; their ability to remove the raindrop effects is limited by the severe overlapping of raindrop effects across several consecutive frames. The 3rd version of the algorithm, however, successfully eliminated the raindrop effects, although the ghost layers are still present.
CHAPTER 5
CONCLUSIONS AND FUTURE WORK
5.1 Conclusion
This project set out to develop an algorithm based on a sample input: a night scene with moving objects, interfered with by a moderate rain condition. The scene was observed and analyzed thoroughly at the frame level in search of unique properties of raindrops, which were then manipulated into an algorithm as a mechanism for raindrop effects removal. Experiments were done on the developed algorithm in an effort to discover any possibility of improvement. Finally, three versions of the algorithm were successfully developed. Each version has its own advantages and disadvantages; they differ only in the way the input frames are extracted. This difference allows them to perform under various visual conditions of raindrops, which makes them robust and reliable to a certain extent. The results, compared with their original inputs, show significant elimination of the raindrop effects.
5.2 Future Work

Raindrop effects are significantly removed from the scene, but at the cost of unwanted artifacts emerging in the results in the form of ghost layers. Future work should attempt to overcome this problem as well as possible. Developing an algorithm based on a single sample scene narrows the possibility of finding more interesting properties of raindrops that could be used against them, so it is proposed to develop an algorithm based on more sample scenes with various rain conditions, to make the algorithm more robust and reliable. Finally, it would be a worthwhile endeavor to integrate the algorithm into a real-time processing device so that it could be commercialized.
REFERENCES
D.S. Kalivas and A.A. Sawchuk (1990). Motion Compensated Enhancement of Noisy Image Sequences. IEEE Transactions.

Gert Schoonenberg (2005). Adaptive Spatial-temporal Filtering Applied to X-ray Fluoroscopy Angiography. Society of Photo-Optical Instrumentation Engineers.

K. Garg and S.K. Nayar (2004). Detection and Removal of Rain from Videos. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR04).

K. Garg and S.K. Nayar (2005). When Does a Camera See Rain? IEEE International Conference on Computer Vision (ICCV05).

L. Joyeux, O. Buisson, B. Besserer and S. Boukir (1999). Detection and Removal of Line Scratches in Motion Picture Films. IEEE Transactions.

R.C. Gonzalez and R.E. Woods (2002). Digital Image Processing, Second Edition. Prentice Hall.

R.C. Gonzalez, R.E. Woods and S.L. Eddins (2002). Digital Image Processing Using MATLAB. Prentice Hall.

R.D. Morris and W.J. Fitzgerald (1994). Replacement Noise in Image Sequences: Detection and Interpolation by Motion Field Segmentation. IEEE Transactions.

S. Starik and M. Werman (2002). Simulation of Rain in Videos. Texture Workshop, ICCV02.