Removing Shadows From Images
G. D. Finlayson¹, S. D. Hordley¹ & M. S. Drew²
¹ School of Information Systems, University of East Anglia, UK
² School of Computer Science, Simon Fraser University, Canada
ECCV 2002
Overview
Introduction
Shadow Free Grey-scale images
- Illuminant Invariance at a pixel
Shadow Free Colour Images
- Removing shadow edges using illumination invariance
- Re-integrating edge maps
Results and Future Work
The Aim: Shadow Removal
We would like to go from a colour image with shadows, to
the same colour image, but without the shadows.
Why Shadow Removal?
For Computer Vision
- improved object tracking, segmentation etc.
For Image Enhancement
- creating a more pleasing image
For Scene Re-lighting
- to change, for example, the lighting direction
What is a shadow?
[Figure: a scene with a region lit by sunlight and sky-light, and a shadow region lit by sky-light only.]

A shadow is a local change in illumination intensity and (often) illumination colour.
Removing Shadows
So, if we can factor out the illumination locally (at a pixel), it should follow that we can remove the shadows.
So, can we factor out illumination locally? That is, can we derive
an illumination-invariant colour representation at a single image
pixel?
Yes, provided that our camera and illumination satisfy certain restrictions ...
Conditions for Illumination Invariance
(1) If sensors can be represented as delta functions
(they respond only at a single wavelength)
(2) and illumination is restricted to the Planckian locus
(3) then we can find a 1-D co-ordinate, a function of
image chromaticities, which is invariant to illuminant
colour and intensity
(4) this gives us a grey-scale representation of our
original image, but without the shadows
(it takes us a third of the way to the goal of this talk!)
Image Formation
E ( )
E ( ) S ( )
S ( )
r   R (  ) E (  ) S (  ) d
Camera responses
depend on 3 factors:
light (E), surface (S), and
sensor (R, G, B)
g   G (  ) E (  ) S (  ) d
b   B (  ) E (  ) S (  ) d
ECCV 2002
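A minimal numerical sketch of this image-formation equation, assuming made-up sensor, illuminant and reflectance spectra (placeholders, not the data behind these slides):

```python
import numpy as np

# Wavelength grid in nm; all spectra below are illustrative assumptions.
lam = np.arange(400.0, 701.0, 10.0)

def gaussian(centre, width):
    return np.exp(-0.5 * ((lam - centre) / width) ** 2)

R_sens, G_sens, B_sens = gaussian(610, 30), gaussian(540, 30), gaussian(450, 30)
E = 1.0 + 0.002 * (lam - 550.0)       # made-up, slowly varying illuminant spectrum
S = 0.2 + 0.6 * gaussian(580, 60)     # made-up surface reflectance

# r = integral of R(lam) E(lam) S(lam) d(lam), and similarly for g and b
r = np.trapz(R_sens * E * S, lam)
g = np.trapz(G_sens * E * S, lam)
b = np.trapz(B_sens * E * S, lam)
print(r, g, b)
```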
Using Delta Function Sensitivities
B()
G()
R()
=
R      R 
G      G 
B      B 

     S  E d  E S  
R
Delta functions
“select” single
wavelengths:
R
R
r  ER S R 
g  E G S G 
b  E B S B 
ECCV 2002
Characterising Typical Illuminants
Most typical illuminants lie on, or close to, the Planckian locus (the red line in the figure).

[Figure: illuminant chromaticities plotted in (r/(r+g+b), g/(r+g+b)) space, with the Planckian locus shown.]

So, let's represent illuminants by their equivalent Planckian black-body illuminants ...
Planckian Black-body Radiators

$$E(\lambda) = I\,c_1\,\lambda^{-5}\left(e^{\frac{c_2}{\lambda T}} - 1\right)^{-1}$$

Here I controls the overall intensity of the light, T is the temperature, and c1, c2 are constants.

But for typical illuminants $c_2 \gg \lambda T$, so Planck's equation is approximated as:

$$E(\lambda) \approx I\,c_1\,\lambda^{-5}\,e^{-\frac{c_2}{\lambda T}}$$
How good is this approximation?
[Figure: the approximation compared with the exact Planckian spectrum at 2500 Kelvin, 5500 Kelvin, and 10000 Kelvin.]
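A quick numerical check of the approximation, assuming the standard values of the radiation constants c1 and c2:

```python
import numpy as np

c1, c2 = 3.74183e-16, 1.4388e-2        # standard radiation constants (SI units)
lam = np.linspace(400e-9, 700e-9, 31)  # visible wavelengths in metres

def planck(lam, T, I=1.0):
    return I * c1 * lam**-5 / (np.exp(c2 / (lam * T)) - 1.0)

def approx(lam, T, I=1.0):
    # valid when c2 >> lam * T, which holds for typical scene illuminants
    return I * c1 * lam**-5 * np.exp(-c2 / (lam * T))

for T in (2500.0, 5500.0, 10000.0):
    rel_err = np.max(np.abs(approx(lam, T) - planck(lam, T)) / planck(lam, T))
    print(T, rel_err)   # the approximation degrades as the temperature rises
```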
Back to the image formation equation
For delta function sensors and Planckian illumination we have (for the red channel, say):

$$r = S(\lambda_R)\,I\,c_1\,\lambda_R^{-5}\,e^{-\frac{c_2}{\lambda_R T}}$$

Here $S(\lambda_R)$ is the surface term and the remaining factors come from the light. Or, taking the log of both sides:

$$\ln r = \ln I + \ln\!\left(S(\lambda_R)\,c_1\,\lambda_R^{-5}\right) - \frac{c_2}{\lambda_R T}$$
Summarising for the three sensors

$$\ln r = k + \alpha_s + \frac{a}{T}, \qquad
  \ln g = k + \beta_s + \frac{b}{T}, \qquad
  \ln b = k + \gamma_s + \frac{c}{T}$$

Here k is a constant independent of the sensor, the subscript s denotes terms that depend only on the surface reflectance, a, b and c are constants, and the 1/T factor depends only on the illuminant (T is its temperature).
Factoring out the illumination
First, let’s calculate log-opponent chromaticities:
$$r' = \ln r - \ln g = \alpha_s - \beta_s + \frac{1}{T}(a - b), \qquad
  b' = \ln b - \ln g = \gamma_s - \beta_s + \frac{1}{T}(c - b)$$

Then, with some algebra, we have:

$$r' - \frac{(a - b)}{(c - b)}\,b' = f(\alpha_s, \beta_s, \gamma_s)$$

That is: there exists a weighted difference of log-opponent chromaticities that depends only on surface reflectance.
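A minimal per-pixel sketch of this projection, assuming the camera-dependent weight w = (a − b)/(c − b) is already known from calibration (the value used below is a placeholder):

```python
import numpy as np

def invariant_image(rgb, w):
    """Project log-opponent chromaticities onto the illuminant-invariant axis.

    rgb : float array of shape (H, W, 3), linear camera responses (> 0)
    w   : assumed camera-dependent weight, (a - b) / (c - b) in the slides
    """
    eps = 1e-6                                # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    r_p = np.log(r + eps) - np.log(g + eps)   # r' = ln(r/g)
    b_p = np.log(b + eps) - np.log(g + eps)   # b' = ln(b/g)
    return r_p - w * b_p                      # ideally depends only on reflectance

# Usage: inv = invariant_image(linear_rgb, w=0.7)   # w = 0.7 is a placeholder value
```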
An example - delta function sensitivities
[Figure: left, narrow-band (delta-function) relative sensitivities plotted against wavelength (400-700 nm); right, log-opponent chromaticities, log(r/g) versus log(b/g), for 6 surfaces (labelled B, G, Y, R, W, P) under 9 lights.]
Deriving the Illuminant Invariant
[Figure: log-opponent chromaticities for 6 surfaces under 9 lights, before and after rotating the chromaticities; after the rotation, one axis is invariant to illuminant colour.]
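A sketch of one way such an invariant direction could be estimated from calibration chromaticities, assuming each surface shifts along a common lighting direction as the illuminant changes; this is an illustration, not necessarily the calibration procedure used here:

```python
import numpy as np

def invariant_direction(chroma):
    """Estimate the illuminant-invariant projection direction.

    chroma : array of shape (n_surfaces, n_lights, 2) holding (r', b')
             log-opponent chromaticities for each surface under each light.
    Returns a unit 2-vector; projecting (r', b') onto it is (approximately)
    independent of the illuminant.
    """
    # Remove each surface's mean so only the lighting-induced variation remains.
    centred = chroma - chroma.mean(axis=1, keepdims=True)
    pts = centred.reshape(-1, 2)
    # Principal direction of the pooled variation = lighting direction.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    lighting_dir = vt[0]
    # The invariant axis is orthogonal to the lighting direction.
    return np.array([-lighting_dir[1], lighting_dir[0]])
```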
A real example with real camera data
[Figure: left, normalized sensitivities of a SONY DXC-930 video camera plotted against wavelength (400-700 nm); right, log-opponent chromaticities for 6 surfaces under 9 different lights.]
Deriving the invariant
[Figure: log-opponent chromaticities for 6 surfaces under 9 different lights, before and after rotating the chromaticities.]

The invariant axis is now only approximately illuminant invariant (but hopefully good enough).
Some Examples
A Summary So Far
With certain restrictions, from a 3-band colour
image we can derive a 1-d grey-scale image
which is:
- illuminant invariant
- and so, shadow free
What’s left to do?
To complete our goal we would like to go back to a 3-band colour image, without shadows.
We will look next at how the invariant representation can help us to do this ...
Looking at edge information
Consider an edge
map of the colour
image ...
And an edge map of
the 1-d invariant
image ...
These are approximately the same, except
that the invariant edge map has no shadow
edges
Removing Shadow Edges
From these two edge maps we can remove
shadow edges thus:
Edges = Iorig & Iinv
(Valid edges are in the original image, and in the
invariant image)
Using Shadow Edges
So, now we have the edge map
of the image we would like to
obtain
(edge map of the original image
with shadows edges set to zero)
So, can we go from this edge
information back to the image
we want?
(can we re-integrate the edge
information?).
Re-integrating Edge Information
Of course, re-integrating a single edge map will give us only a grey-scale image. So, we must apply the procedure to each band of the colour image separately:

[Figure: the Red, Green and Blue channels of the original image, their edge maps, the edge maps with shadow edges removed, and the re-integrated channels.]
Re-Integrating Edge Information
The re-integration problem has been studied by a
number of researchers:
- Horn
- Blake et al.
- Weiss ICCV '01 (Least-Squares)
- ...
- Land et al. (Retinex)
The aim is typically to derive a reflectance image from an
image in which illumination and reflectance are confounded.
Weiss’ Method
Weiss used a sequence of time varying images of a fixed
scene to determine the reflectance edges of the scene
His method works by determining, from the image sequence,
edges which correspond to a change in reflectance
(Weiss’ definition of a reflectance edge is an edge which persists
throughout the sequence)
Given reflectance edges, Weiss re-integrates the
information to derive a reflectance image
In our case, we can borrow Weiss’ re-integration procedure
to recover our shadow-free image.
Re-integrating Edge Information
Let $I_j(x, y)$ represent the log of a single band of a colour image.

We first calculate:

$$\nabla I_j = \left(T\,\nabla_x,\; T\,\nabla_y\right) I_j$$

where $\nabla_x$ and $\nabla_y$ are the derivative operators in the x and y directions, and T is the operator that sets shadow edges to zero.

This summarises the process of detecting and removing shadow edges.
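A small sketch of this thresholded-gradient step, assuming a boolean shadow_mask marking detected shadow-edge pixels is already available:

```python
import numpy as np

def thresholded_gradients(log_band, shadow_mask):
    """Forward-difference gradients of a log image band, with shadow edges zeroed.

    log_band    : (H, W) float array, log of one colour channel
    shadow_mask : (H, W) bool array, True where a shadow edge was detected
    """
    gx = np.zeros_like(log_band)
    gy = np.zeros_like(log_band)
    gx[:, :-1] = log_band[:, 1:] - log_band[:, :-1]   # x-derivative
    gy[:-1, :] = log_band[1:, :] - log_band[:-1, :]   # y-derivative
    gx[shadow_mask] = 0.0                             # T: zero the shadow edges
    gy[shadow_mask] = 0.0
    return gx, gy
```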
Re-integrating Edge Information
To recover the shadow-free image we want to invert this equation:

$$\nabla I_j = \left(T\,\nabla_x,\; T\,\nabla_y\right) I_j$$

To do this, we first form the Poisson equation:

$$\nabla^2 I_j = \left(\nabla_x T\,\nabla_x + \nabla_y T\,\nabla_y\right) I_j$$

We solve this (subject to Neumann boundary conditions) as follows:
Re-integrating Edge Information

$$\nabla^2 I_j = \left(\nabla_x T\,\nabla_x + \nabla_y T\,\nabla_y\right) I_j$$

We solve by applying the inverse Laplacian (note: the inverse operator contains no threshold):

$$I_j^{\,rec} = \left(\nabla_x \nabla_x + \nabla_y \nabla_y\right)^{-1}\left(\nabla_x T\,\nabla_x + \nabla_y T\,\nabla_y\right) I_j$$

Applying this process to each of the three channels recovers a log image without shadows.
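A minimal iterative sketch of the re-integration, using Jacobi relaxation with edge padding as a stand-in for the Neumann boundary conditions (an illustration rather than the exact solver used here):

```python
import numpy as np

def reintegrate(gx, gy, n_iter=5000):
    """Recover a log image band from thresholded gradients by solving
    the Poisson equation  lap(I) = d(gx)/dx + d(gy)/dy  iteratively."""
    h, w = gx.shape
    # Divergence of the (thresholded) gradient field via backward differences.
    div = np.zeros((h, w))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[:, 0] += gx[:, 0]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    div[0, :] += gy[0, :]
    I = np.zeros((h, w))
    for _ in range(n_iter):
        # Edge padding approximates Neumann (zero-derivative) boundaries.
        p = np.pad(I, 1, mode="edge")
        neighbours = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        I = (neighbours - div) / 4.0          # Jacobi update for lap(I) = div
    return I                                  # defined up to an additive constant
```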
A Summary of Re-integration
1. Iorig = original colour image, Iinv = invariant image
2. For j = 1, 2, 3: Ijorig = jth band of Iorig
3. Remove shadow edges: Edges = Ijorig & Iinv
4. Differentiate the thresholded edge map
5. Re-integrate the image
6. Go to 2 (repeat for the next band)
Some Remarks
The re-integration step is unique only up to an additive constant (a multiplicative constant in linear image space).

Fixing this constant amounts to applying a correction for illumination colour to the image. Thus we choose suitable constants to correct for the prevailing scene illuminant.

In practice, the method relies upon having an effective thresholding step T, that is, on effectively locating the shadow edges.

As we will see, our shadow edge detection is not yet perfect.
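One plausible way of fixing the per-channel constants, assuming that mapping the brightest image content to white is an acceptable correction (an illustration, not necessarily the choice made here):

```python
import numpy as np

def fix_constants(log_bands, percentile=99):
    """Choose additive constants so that the brightest image content maps to
    white, acting as a crude correction for the prevailing scene illuminant.

    log_bands : list of three (H, W) re-integrated log channels
    """
    out = []
    for band in log_bands:
        shift = -np.percentile(band, percentile)   # near-maximum value -> 0 in log
        out.append(np.exp(band + shift))           # back to linear, scaled near [0, 1]
    return np.clip(np.stack(out, axis=-1), 0.0, 1.0)
```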
Shadow Edge Detection
The Shadow Edge Detection consists of the
following steps:
1. Edge detect a smoothed version of the original (by channel) and the invariant images (using Canny or SUSAN)
2. Threshold to keep strong edges in both images
3. Shadow edge = edge in original & NOT in invariant
4. Apply a suitable morphological filter to thicken the edges resulting from step 3

This typically identifies the shadow edges plus some false edges (a sketch of these steps follows below).
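A hedged sketch of these steps, with Sobel gradient magnitudes standing in for Canny/SUSAN and placeholder threshold values:

```python
import numpy as np
from scipy import ndimage

def shadow_edges(original_rgb, invariant, t_orig=0.1, t_inv=0.05, sigma=2.0):
    """Shadow edge = strong edge in the original image but NOT in the invariant."""
    def edge_strength(img):
        smoothed = ndimage.gaussian_filter(img, sigma)
        gx = ndimage.sobel(smoothed, axis=1)
        gy = ndimage.sobel(smoothed, axis=0)
        return np.hypot(gx, gy)

    # Strongest edge response over the three colour channels.
    orig_edges = np.max(
        [edge_strength(original_rgb[..., c]) for c in range(3)], axis=0)
    inv_edges = edge_strength(invariant)

    mask = (orig_edges > t_orig) & ~(inv_edges > t_inv)
    # Morphological dilation thickens the detected shadow edges.
    return ndimage.binary_dilation(mask, iterations=2)
```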
An Example
[Figure: original image, invariant image, detected shadow edges, and the image with the shadow removed.]
A Second Example
[Figure: original image, invariant image, detected shadow edges, and the image with the shadow removed.]
More Examples
[Figure: original image, invariant image, detected shadow edges, and the image with the shadow removed.]
More Examples
[Figure: original image, invariant image, detected shadow edges, and the image with the shadow removed.]
A Summary
We have presented a method for removing shadows from images
The method uses an illuminant invariant 1-d image representation
to identify shadow edges
From the shadow free edge map we re-integrate to recover a
shadow free colour image
Initial results are encouraging: we are able to remove shadows,
even when shadow edge definition is not perfect
Future Work
We are currently investigating ways to more reliably identify
shadow edges ...
… or to derive a re-integration which is more robust to errors
(Retinex?)
Currently deriving the illuminant invariant image requires some
knowledge of the capture device’s characteristics
- We show in the paper how to determine these characteristics
empirically and we are working on making this process more robust
Acknowledgements
The authors would like to thank Hewlett-Packard
Incorporated for their support of this work.