Image Enhancement and Filtering

EE4H, M.Sc 0407191
Computer Vision
Dr. Mike Spann
[email protected]
http://www.eee.bham.ac.uk/spannm
Introduction
 Images may suffer from the following degradations:
 Poor contrast due to poor illumination or finite sensitivity of the imaging device
 Electronic sensor noise or atmospheric disturbances leading to broad band noise
 Aliasing effects due to inadequate sampling
 Finite aperture effects or motion leading to spatial blurring
Introduction
 We will consider simple algorithms for image
enhancement based on lookup tables
 Contrast enhancement
 We will also consider simple linear filtering
algorithms
 Noise removal
Histogram equalisation
 In an image of low contrast, the image has grey
levels concentrated in a narrow band
 Define the grey level histogram of an image h(i)
where :
 h(i)=number of pixels with grey level = i
 For a low contrast image, the histogram will be
concentrated in a narrow band
 The full grey level dynamic range is not used
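Computing h(i) is a direct counting operation; a minimal NumPy sketch using a synthetic low-contrast image (the band 100–139 is an illustrative choice):

```python
import numpy as np

# Synthetic low-contrast 8-bit image: grey levels drawn from a narrow band
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)

# h(i) = number of pixels with grey level i, for i = 0..255
h = np.bincount(img.ravel(), minlength=256)

# Only a narrow band of levels is populated; the full dynamic range is unused
used = np.nonzero(h)[0]
```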
Histogram equalisation
[Figure: grey level histogram h(i) plotted against i, concentrated in a narrow band]
Histogram equalisation
 Can use a sigmoid lookup to map input to output grey
levels
 A sigmoid function g(i) controls the mapping from
input to output pixel
 Can easily be implemented in hardware for maximum
efficiency
Histogram equalisation
[Figure: input histogram h(i) mapped through the sigmoid g(i) to give the output histogram h'(i)]

h'(i) = h(g⁻¹(i))

g(i) = 1 / (1 + exp(−λ(i − θ)))
Histogram equalisation
 θ controls the position of maximum slope
 λ controls the slope
 Problem - we need to determine the optimum sigmoid parameters θ and λ for each image
 A better method would be to determine the best
mapping function from the image data
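The sigmoid mapping can be implemented as a lookup table; a minimal sketch with hand-picked (hypothetical) θ and λ, and the output rescaled to the 8-bit range — exactly the per-image tuning problem noted above:

```python
import numpy as np

theta, lam = 128.0, 0.05   # position of maximum slope, slope (chosen by hand)

i = np.arange(256)
# g(i) = 1 / (1 + exp(-lam*(i - theta))), scaled here to 0..255 for display
lut = (255.0 / (1.0 + np.exp(-lam * (i - theta)))).astype(np.uint8)

img = np.full((4, 4), 150, dtype=np.uint8)
out = lut[img]             # pointwise mapping via table lookup
```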
Histogram equalisation
 A general histogram stretching algorithm is defined in terms of a transformation g(i)
 We require a transformation g(i) such that, for any histogram h(i):

h'(i) = Σ_{j : i = g(j)} h(j) = constant
Histogram equalisation
 Constraints (N × N × 8 bit image)
 No ‘crossover’ in grey levels after transformation

Σᵢ h'(i) = N²

i₁ < i₂ ⇒ g(i₁) ≤ g(i₂)
Histogram equalisation
 An adaptive histogram equalisation algorithm can be
defined in terms of the ‘cumulative histogram’ H(i) :
H(i) = number of pixels with grey levels ≤ i

H(i) = Σ_{j=0..i} h(j)
Histogram equalisation
 Since the required h(i) is flat, the required H(i) is a
ramp:
[Figure: the desired histogram h(i) is flat and the desired cumulative histogram H(i) is a ramp]
Histogram equalisation
 Let the actual histogram and cumulative histogram be h(i) and H(i)
 Let the desired histogram and desired cumulative histogram be h'(i) and H'(i)
 Let the transformation be g(i)

H'(g(i)) = N² g(i) / 255    (H'(255) = N², H'(0) = 0)
Histogram equalisation
 Since g(i) is an ‘ordered’ transformation

i₁ < i₂ ⇒ g(i₁) ≤ g(i₂)

H'(g(i)) = H(i) = N² g(i) / 255

g(i) = 255 H(i) / N²
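The mapping g(i) = 255 H(i)/N² follows directly from the cumulative histogram; a minimal sketch (the input band 80–159 is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(80, 160, size=(32, 32), dtype=np.uint8)  # low-contrast input
n2 = img.size                                               # N^2 pixels

h = np.bincount(img.ravel(), minlength=256)
H = np.cumsum(h)                                 # cumulative histogram, H(255) = N^2
g = np.round(255.0 * H / n2).astype(np.uint8)    # g(i) = 255 H(i) / N^2

eq = g[img]                                      # equalised image via table lookup
```

Because H reaches N² at the largest grey level present, that level is always mapped to 255, so the output uses the full dynamic range.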
Histogram equalisation
 Worked example: 32 × 32 pixel image with grey levels quantised to 3 bits

g(i) = 7 H(i) / 1024

h'(i) = Σ_{j : i = g(j)} h(j)
Histogram equalisation
i | h(i) | H(i) | g(i)  | h'(i)
--+------+------+-------+------
0 |  197 |  197 | 1.351 |   -
1 |  256 |  453 | 3.103 |  197
2 |  212 |  665 | 4.555 |   -
3 |  164 |  829 | 5.676 |  256
4 |   82 |  911 | 6.236 |   -
5 |   62 |  973 | 6.657 |  212
6 |   31 | 1004 | 6.867 |  246
7 |   20 | 1024 | 7.07  |  113

(g(i) is shown before rounding; h'(i) collects h(j) over all j whose rounded g(j) equals i)
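The worked example can be reproduced directly from the definitions; a minimal NumPy sketch, rounding g(i) to the nearest output level before accumulating h'(i):

```python
import numpy as np

h = np.array([197, 256, 212, 164, 82, 62, 31, 20])   # h(i) from the table
H = np.cumsum(h)                                     # cumulative histogram
g = np.round(7.0 * H / 1024).astype(int)             # 3-bit mapping, rounded

# h'(i) = sum of h(j) over all j with g(j) = i
h_out = np.array([h[g == i].sum() for i in range(8)])
```

The rounded mapping comes out as g = (1, 3, 5, 6, 6, 7, 7, 7), and h_out matches the h'(i) column (with '-' read as 0).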
Histogram equalisation
[Figure: bar chart of the original histogram h(i) and the stretched histogram h'(i) over grey levels 0–7, counts 0–300]
Histogram equalisation
[Figure: image histogram h(i) before and after equalisation; grey levels 0–250, counts up to 2000]
Histogram equalisation
[Figure: a second before/after equalisation example; grey levels 0–250, counts up to 3000]
Histogram equalisation
 ImageJ demonstration
 http://rsb.info.nih.gov/ij/signed-applet
Image Filtering
 Simple image operators can be classified as 'pointwise'
or 'neighbourhood' (filtering) operators
 Histogram equalisation is a pointwise operation
 More general filtering operations use neighbourhoods
of pixels
Image Filtering
[Figure: a pointwise transformation maps input pixel (x,y) directly to output pixel (x,y); a neighbourhood transformation maps a window of input pixels around (x,y) to output pixel (x,y)]
Image Filtering
 The output g(x,y) can be a linear or non-linear function of the set of input pixel grey levels {f(x−M,y−M) … f(x+M,y+M)}

[Figure: a 3 × 3 input neighbourhood of f(x,y), from (x−1,y−1) to (x+1,y+1), maps to the output pixel g(x,y)]
Image Filtering
 Examples of filters:
g(x,y) = h₁ f(x−1,y−1) + h₂ f(x,y−1) + ..... + h₉ f(x+1,y+1)

g(x,y) = median{ f(x−1,y−1), f(x,y−1), ..... , f(x+1,y+1) }
Linear filtering and convolution
 Example
 3x3 arithmetic mean of an input image (ignoring
floating point byte rounding)
[Figure: each output pixel g(x,y) is the mean of the 3 × 3 input neighbourhood centred on (x,y)]
Linear filtering and convolution
 Convolution involves ‘overlap – multiply – add’ with
‘convolution mask’
1

 91
H
9
1

9
1
9
1
9
1
9
1

9
1
9
1

9
Linear filtering and convolution
[Figure: the mask is overlaid on the input image f(x,y) at (x,y); corresponding image points and filter mask points are multiplied and summed to give the output g(x,y)]
Linear filtering and convolution
 We can define the convolution operator
mathematically
 Defines a 2D convolution of an image f(x,y) with a filter
h(x,y)
g(x,y) = Σ_{x'=−1..1} Σ_{y'=−1..1} h(x',y') f(x−x', y−y')

       = (1/9) Σ_{x'=−1..1} Σ_{y'=−1..1} f(x−x', y−y')
Linear filtering and convolution
 Example – convolution with a Gaussian filter kernel
 σ determines the width of the filter and hence the
amount of smoothing
( x2  y2 )
g( x , y )  exp( 
)
2
2
 g( x ) g( y )
2
x
g( x )  exp(  2 )
2
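The separability g(x,y) = g(x) g(y) can be checked numerically — the 2D kernel is the outer product of two 1D kernels; a sketch with an illustrative σ = 1.5:

```python
import numpy as np

sigma = 1.5
x = np.arange(-4, 5, dtype=float)   # sample points -4..4

g1 = np.exp(-x**2 / (2.0 * sigma**2))                              # 1D g(x)
g2 = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2.0 * sigma**2))   # 2D g(x,y)

# Separability: the 2D kernel equals the outer product of the 1D kernels
sep = np.outer(g1, g1)
```

In practice this is why Gaussian smoothing is run as two 1D passes rather than one 2D convolution.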
Linear filtering and convolution
[Figure: 1D Gaussian g(x), peak 1.0 at x = 0; σ sets the width of the filter]
Linear filtering and convolution
[Figure: original and noisy images with Gaussian-filtered results for σ = 1.5 and σ = 3.0]
Linear filtering and convolution
 ImageJ demonstration
 http://rsb.info.nih.gov/ij/signed-applet
Linear filtering and convolution
 We can also define convolution as a frequency domain operation
 Based on the discrete Fourier transform F(u,v) of the image f(x,y)

F(u,v) = Σ_{x=0..N−1} Σ_{y=0..N−1} f(x,y) exp(−(2πj/N)(ux + vy)),    u,v = 0 .. N−1
Linear filtering and convolution
 The inverse DFT is defined by
1 N 1 N 1
2j
f ( x , y )  2   F ( u,v )exp(
( ux  vy ))
N x0 y0
N
x , y  0.. N  1
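The forward/inverse pair can be checked against a library FFT; a minimal sketch (NumPy's fft2 uses the same sign convention, with the 1/N² factor on the inverse):

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)
f = rng.standard_normal((N, N))

# Direct evaluation of F(u,v) = sum_x sum_y f(x,y) exp(-(2*pi*j/N)(ux + vy))
x = np.arange(N)
F = np.zeros((N, N), dtype=complex)
for u in range(N):
    for v in range(N):
        phase = np.exp(-2j * np.pi * (u * x[:, None] + v * x[None, :]) / N)
        F[u, v] = np.sum(f * phase)

F_fft = np.fft.fft2(f)              # same transform via the FFT
f_back = np.fft.ifft2(F_fft).real   # inverse recovers f(x,y)
```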
Linear filtering and convolution
[Figure: the image f(x,y) over (0,0)–(N−1,N−1) is related to F(u,v) over (0,0)–(N−1,N−1) by the DFT and IDFT]
Linear filtering and convolution
log( 1 F( u, v ) )
Linear filtering and convolution
 F(u,v) is the frequency content of the image at spatial
frequency position (u,v)
 Smooth regions of the image contribute low frequency
components to F(u,v)
 Abrupt transitions in grey level (lines and edges)
contribute high frequency components to F(u,v)
Linear filtering and convolution
 We can compute the DFT directly using the formula
 An N × N point DFT would require N² floating point multiplications per output point
 Since there are N² output points, the computational complexity of the DFT is N⁴
 N⁴ ≈ 4 × 10⁹ for N = 256
 Bad news! Many hours on a workstation
Linear filtering and convolution
 The FFT algorithm was developed in the 1960s for seismic exploration
 Reduced the DFT complexity to 2N² log₂ N
 2N² log₂ N ≈ 10⁶ for N = 256
 A few seconds on a workstation
Linear filtering and convolution
 The ‘filtering’ interpretation of convolution can be
understood in terms of the convolution theorem
 The convolution of an image f(x,y) with a filter h(x,y) is
defined as:
M 1 M 1
g( x , y )    h( x' , y' ) f ( x  x' , y  y' )
x'  0 y'  0
 f ( x , y )* h( x , y )
Linear filtering and convolution
[Figure: the filter mask h(x,y) slides over the input image f(x,y); at each position (x,y), overlap multiply add gives the output g(x,y)]
Linear filtering and convolution
 Note that the filter mask is shifted and inverted prior
to the ‘overlap multiply and add’ stage of the
convolution
 Define the DFTs of f(x,y), h(x,y) and g(x,y) as F(u,v), H(u,v) and G(u,v)
 The convolution theorem states simply that :
G( u,v )  H( u,v )F ( u,v )
Linear filtering and convolution
 As an example, suppose h(x,y) corresponds to a linear
filter with frequency response defined as follows:
H( u , v )  0 for u  v  R
2
2
 1 otherwise
 Removes low frequency components of the image
Linear filtering and convolution
[Figure: high-pass filtering example; the image is transformed by the DFT, low frequencies are removed, and the result is reconstructed by the IDFT]
Linear filtering and convolution
 Frequency domain implementation of convolution
 Image f(x,y): N × N pixels
 Filter h(x,y): M × M filter mask points
 Usually M << N
 In this case the filter mask is 'zero-padded' out to N × N
 The output image g(x,y) is of size (N+M−1) × (N+M−1) pixels. The filter mask ‘wraps around’, truncating g(x,y) to an N × N image
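The steps above can be sketched with NumPy's FFT (a hypothetical 16 × 16 image and 3 × 3 mean mask; the ‘wrap around’ shows up as circular indexing at the borders):

```python
import numpy as np

N, M = 16, 3
rng = np.random.default_rng(3)
f = rng.standard_normal((N, N))       # N x N image
h = np.full((M, M), 1.0 / (M * M))    # M x M mean mask

# Zero-pad the mask out to N x N
h_pad = np.zeros((N, N))
h_pad[:M, :M] = h

# Convolution theorem: G(u,v) = H(u,v) F(u,v)
G = np.fft.fft2(h_pad) * np.fft.fft2(f)
g = np.fft.ifft2(G).real              # circular convolution result

# Cross-check one point against direct 'overlap multiply add'
# with wrap-around indexing (x' = x modulo N, y' = y modulo N)
x, y = 8, 8
direct = sum(h[xp, yp] * f[(x - xp) % N, (y - yp) % N]
             for xp in range(M) for yp in range(M))
```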
Linear filtering and convolution
[Figure: the M × M filter mask h(x,y) is zero-padded to N × N; DFTs give H(u,v) and F(u,v); their product H(u,v)F(u,v) is inverse transformed to give f(x,y) * h(x,y)]
Linear filtering and convolution
[Figure: wrap-around at the image borders; the filter mask h(x,y) indexes the input image with x' = x modulo N, y' = y modulo N]
Linear filtering and convolution
 We can evaluate the computational complexity of implementing convolution in the spatial and spatial frequency domains
 An N × N image is to be convolved with an M × M filter
 Spatial domain convolution requires M² floating point multiplications per output point, or N²M² in total
 Frequency domain implementation requires 3 × (2N² log₂ N) + N² floating point multiplications (2 DFTs + 1 IDFT + N² multiplications of the DFTs)
Linear filtering and convolution
 Example 1: N = 512, M = 7
 Spatial domain implementation requires 1.3 × 10⁷ floating point multiplications
 Frequency domain implementation requires 1.4 × 10⁷ floating point multiplications
 Example 2: N = 512, M = 32
 Spatial domain implementation requires 2.7 × 10⁸ floating point multiplications
 Frequency domain implementation requires 1.4 × 10⁷ floating point multiplications
Linear filtering and convolution
 For smaller mask sizes, spatial and frequency domain implementations have about the same computational complexity
 However, we can speed up frequency domain implementations by tessellating the image into sub-blocks and filtering these independently
 Not quite that simple – we need to overlap the filtered sub-blocks to remove blocking artefacts
 Overlap and add algorithm
Linear filtering and convolution
 We can look at some examples of linear filters
commonly used in image processing and their
frequency responses
 In particular we will look at a smoothing filter and a
filter to perform edge detection
Linear filtering and convolution
 Smoothing (low pass) filter
 Simple arithmetic averaging
 Useful for smoothing images corrupted by additive
broad band noise
 1 1 1

1
H 3   1 1 1
9

 1 1 1
1

1
1 
1
H5 
25 
1

1
1 1 1 1

1 1 1 1
1 1 1 1

1 1 1 1

1 1 1 1
etc
Linear filtering and convolution
[Figure: mean filter h(x) in the spatial domain and its low-pass frequency response H(u) in the spatial frequency domain]
Linear filtering and convolution
 Edge detection filter
 Simple differencing filter used for enhancing edges
 Has a bandpass frequency response

H = ⎡ 1 0 −1 ⎤
    ⎢ 1 0 −1 ⎥
    ⎣ 1 0 −1 ⎦
Linear filtering and convolution
 ImageJ demonstration
 http://rsb.info.nih.gov/ij/signed-applet
Linear filtering and convolution
[Figure: a 1D step edge of height p in f(x); convolving with (1 0 −1) produces a pulse at the edge position]
Linear filtering and convolution
 We can evaluate the (1D) frequency response of
the filter h(x)={1,0,-1 } from the DFT definition
H(u) = Σ_{x=0..N−1} h(x) exp(−2πjux/N)

     = 1 − exp(−4πju/N)

     = exp(−2πju/N) [ exp(2πju/N) − exp(−2πju/N) ]

     = 2j exp(−2πju/N) sin(2πu/N)
Linear filtering and convolution
 The magnitude of the response is therefore:

|H(u)| = 2 |sin(2πu/N)|

 This has a bandpass characteristic
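The derived magnitude can be checked against a direct DFT of h(x) = {1, 0, −1}; a minimal sketch (the kernel is zero-padded to an illustrative N = 64):

```python
import numpy as np

N = 64
h = np.zeros(N)
h[0], h[2] = 1.0, -1.0       # h(x) = {1, 0, -1}, zero-padded to length N

H = np.fft.fft(h)            # H(u) = 1 - exp(-4*pi*j*u/N)
u = np.arange(N)
mag = 2.0 * np.abs(np.sin(2.0 * np.pi * u / N))   # derived |H(u)|
```

Note H(0) = 0: the filter blocks the DC (constant) component, consistent with a bandpass characteristic.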
Linear filtering and convolution
[Figure: bandpass magnitude response |H(u)| plotted against u]
Conclusion
 We have looked at basic (low level) image processing
operations
 Enhancement
 Filtering
 These are usually important pre-processing steps
carried out in computer vision systems (often in
hardware)