Deblurring Saturated Night Image With Function-Form Kernel
Haifeng Liu, Xiaoyan Sun, Senior Member, IEEE, Lu Fang, and Feng Wu, Fellow, IEEE
Abstract—Deblurring saturated night images is a challenging problem because such images have low contrast combined with
heavy noise and saturated regions. Unlike the deblurring schemes
that discard saturated regions when estimating blur kernels, this
paper proposes a novel scheme to deduce blur kernels from
saturated regions via a novel kernel representation and advanced
algorithms. Our key technical contribution is the proposed
function-form representation of blur kernels, which regularizes
existing matrix-form kernels using three functional components:
1) trajectory; 2) intensity; and 3) expansion. From automatically
detected saturated regions, their skeleton, brightness, and width
are fitted into the corresponding three functional components
of blur kernels. Such regularization significantly improves the
quality of kernels deduced from saturated regions. Second, we propose an energy-minimizing algorithm to select and assign the deduced function-form kernels to partitioned image regions as
the initialization for non-uniform deblurring. Finally, we convert
the assigned function-form kernels into matrix form for more
detailed estimation in a multi-scale deconvolution. Experimental
results show that our scheme outperforms existing schemes on
challenging real examples.
Index Terms—Image deblurring, function-form representation, night images, saturation regions.
I. INTRODUCTION

DESPITE the continuing evolution of advanced sensors,
auto-focus and anti-shake technologies, photos taken at
night are often blurry. A slight camera shake can induce
annoying blur effects since low-speed shutters and long
exposures are required under dim lighting conditions. As a
result, night image deblurring is in significant demand and an
important asset for photography.
Image deblurring has been extensively studied in the
past decades and has achieved satisfactory results when
dealing with blurry images with salient structures [1]–[4].
Fig. 1. Blurry image and the deblurred results. (a) Blurry image, (b) blurry region, (c) result of Fergus et al. [1], (d) result of Whyte et al. [9], (e) result of Hu et al. [13], and (f) our proposed result.
However, these schemes often fail to deblur night images, as shown in Fig. 1(c). This is because saturated pixels in night images break the linear convolution formulation that most schemes
assume. In addition, night images usually exhibit low contrast,
which hinders kernel estimation as it highly relies on salient
image structures [5], [6], and the low signal-to-noise ratio
in night images further increases the difficulty of kernel
estimation [7].
Recently, saturated pixels in blurry night images have
attracted attention for different reasons in deblurring. Some schemes treat saturated pixels as outliers and exclude them in non-blind deconvolution [8], [9] or multi-frame blind deconvolution [10]. Others propose making use of the light streaks
or specular highlights to deduce kernels in blind deconvolution
interactively [11] or automatically [12], [13]. However, none
of the previous schemes explores saturated pixels in the
context of a physically-motivated kernel representation and
non-uniform deblurring.
As shown in Fig. 1(b), we observe that saturated pixels,
rather than being a problem, actually provide clear information
on camera shake, including the camera shake trajectory,
relative exposure along the trajectory, and scene depth. They
also inherently reveal the spatially variant property of blurring.
All these factors have an impact on blur kernels. If the
information can be fully exploited, the difficult problem
of deblurring night images will become significantly more
tractable. Therefore, we propose a novel scheme for non-uniform deblurring of night images, which fully exploits the information implied in saturated regions for kernel initialization and estimation.
The key technical contributions in this paper include a
new representation of blur kernels, advanced algorithms for
deducing blur kernels from saturated regions, and fine-scale
estimation of blur kernels in a multi-scale deconvolution.
Specifically, the details of these contributions are listed as
follows.
• We propose a function-form representation of blur
kernels. It consists of three components – trajectory of
camera shake, intensity of exposure, and expansion of
ideal point lighting – to explicitly correspond to three
physical aspects of image capture. This representation
inherently imposes constraints on valid blur kernels.
• We propose an automatic algorithm to deduce reliable
non-uniform kernels from saturated regions, including
approximation of detected saturated regions as function-form kernels and the assignment of the deduced kernels to
uniformly partitioned image regions based on an energy
optimization.
• We convert the function-form kernels into matrix form
and further estimate them by using non-saturated regions
in multi-scale deconvolution, which enhances kernel
continuity and achieves a good trade-off between
function-form kernels deduced from saturated regions and
matrix-form kernels deduced from non-saturated regions.
Based on these three contributions, our proposed deblurring
scheme is able to achieve high-quality deblurring results of
night images by fully exploiting the information implied
in saturated regions. As shown in Fig. 1(d), (e), and (f),
our scheme successfully recovers sharp edges, preserves fine
details, and meanwhile prevents ringing artifacts compared
with the other two methods which also consider pixel
saturation in deblurring.
The rest of this paper is organized as follows. Section II
gives a brief overview of related work. Our function-form
kernel representation is proposed in Section III. Section IV
describes the algorithm initializing non-uniform kernels from
saturated regions. Section V proposes a multi-scale kernel
estimation algorithm using initial blur kernels as priors,
followed by non-blind deconvolution. Section VI presents our
experimental results and comparisons. Finally, Section VII
concludes the paper.
II. RELATED WORK
Deblurring is an under-determined problem because the
unknown variables (latent sharp images and blur kernels)
outnumber the known measurements (observed blurry image).
Almost all papers on deblurring study how to introduce
various priors to make the under-constrained problem solvable.
The priors can be categorized into image priors and
kernel priors. Many schemes use both priors but with different
focuses.
A. Image Priors
Fergus et al. introduced the heavy-tailed gradient distribution of natural images to solve for the blur kernel [1].
Yuan et al. used a noisy but sharp image as a prior [2].
Shan et al. introduced the spatially random distribution of
image noise and a new smoothness constraint in low-contrast
regions [3]. Joshi et al. introduced a two-color model, where
a pixel color is a linear combination of the two most
prevalent colors within a neighborhood of the pixel [14].
Krishnan et al. proposed the ratio of the l1 norm to the l2 norm of the high frequencies of an image as a prior [15].
Levin et al. introduced an efficient marginal likelihood optimization for blind deconvolution [16]. Ji and Wang introduced a wavelet tight frame system to better represent natural images in deblurring and proposed a regularization model for ringing removal [17].
Using the statistical properties of images (e.g., distribution,
correlation, and norm) as priors only provides common but
coarse information concerning latent sharp images, so the help they provide for deblurring is limited. Although Yuan's scheme
provides a good image prior closely related to the sharp
image [2], it requires taking two photos. Compared with image
priors, kernel priors characterize a physical model of camera
shake and thus are more accurate and effective. Our work in
this paper can be classified as a method based on kernel priors.
B. Kernel Priors
To solve the blind/semi-blind deconvolution problem, early
research poses prior parametric knowledge on kernels such
that a blur kernel can be obtained by only estimating a
small number of parameters [18]. Tekalp et al. assumed that the camera moves with a uniform velocity and modeled the kernel as a constant line segment [19], [20]. The uniform-velocity assumption was later relaxed by introducing an accelerated-velocity parameter into kernel modeling [18], [21]. A focus parameter is presented in [22] to model the out-of-focus blur kernel as a circle with uniform intensity.
Though easy to solve, these parametric models only represent
very limited simple camera motions such as line motion [23].
In contrast, our function-form kernel models blur kernels
with three components (trajectory, expansion, and intensity)
following the inherent physical meaning of camera motion and
is capable of representing complex camera motions (e.g. the one
shown in Fig. 2(a)).
Regularization-based kernel priors have also been investigated
for image deblurring. Cai et al. introduced sparse constraints
on the curvelet coefficients of blur kernels [24]. Joshi et al.
estimated spatially variant blur kernels via blurry edges
and their sharpened versions [25]. Xu and Jia proposed a
two-phase algorithm to estimate blur kernels from selected
edges [6]. Hirsch et al. introduced a framework of efficient
filter flow for fast deconvolution with spatially variant
kernels [26]. Harmeling et al. introduced a taxonomy of
camera shakes to study spatially variant blur kernels [27].
Fig. 2. Matrix-form and function-form kernel representations. (a) Matrix-form kernel. (b) Function-form kernel. (Better viewed in the electronic version).
Xu and Jia estimated non-uniform blur kernels using depth
information [28].
There are some kernel priors deduced from the 6D homography model. Gupta et al. modelled camera shake with a
motion density function (MDF) and derived kernel priors
from the MDF [29]. Tai et al. modelled camera motion
by the proposed projective motion path [30]. Whyte et al.
proposed a parametrized model of camera rotation as opposed
to translation [31]. Xu et al. simplified the homography model to translation and in-plane rotation [32]. However, a challenging
problem with these priors is that it is still hard to recover
the real camera shake. Joshi et al. proposed exploiting inertial
measurement sensor data to recover the real trajectory of the
camera shake during exposure [4].
In contrast to previous work on kernel priors, our blur
kernels are automatically deduced from saturated regions in
night images, where edge-based schemes are generally ineffective due to low contrast. Our scheme does not need to
introduce a stereo camera [28] or additional sensors [4], and
can be effectively applied to images taken at night.
C. Saturated Regions
There exist two contrasting approaches for handling
saturated regions in image deblurring. Several papers suggest
removing them in deconvolution. Cho et al. proposed
excluding saturated regions in image deblurring but
completing them by inpainting from neighboring pixels
once an image is deblurred [8]. Whyte et al. proposed separating saturated pixels in deblurring and including the saturation process in deconvolution by modelling the saturated sensor response with a smooth function [9].
When multiple frames are available, Harmeling et al.
proposed weighting out saturated pixels from the deblurring
process but filling them by exploring frame correlation in
blind deconvolution [10].
Other papers suggest mining the information implied
in saturated regions. Hua and Low pointed out that light
streaks in blurred images approximate motion paths of camera
shake [11]. They thus manually select patches containing a
noticeable light streak as a kernel prior for motion deblurring.
Queiroz et al. presented an automatic scheme to detect a map
of high-intensity streaks and use one of them as a prior for
restoration [12].
Light streaks are also detected in [13] for image deblurring. Hu et al. [13] propose a non-linear blur model via a clipping function to depict light streaks and then pose this model as a constraint for estimating the blur kernel in an optimization framework, which jointly considers light streaks and non-saturated information for kernel estimation. Although dealing with a similar problem, our method is quite different from the one presented in [13]. First, in terms of kernel representation, we propose a new function-form kernel representation which significantly helps to preserve the continuity of initial kernels, a property ignored by the traditional matrix-form kernel representation in [13]. Second, the method in [13] deals with spatially invariant kernels only, whereas our scheme supports spatially variant kernels through the proposed energy-based non-uniform kernel initialization. Third, owing to the function-form kernel representation, our scheme is able to enhance the accuracy of kernel estimation by introducing the function-form regularization, whereas [13] iteratively adopts the expectation-maximization (EM) method and the Richardson-Lucy (RL) method to suppress ringing artifacts.
In contrast to previous schemes, our scheme not only detects multiple saturated regions automatically, but more importantly, we propose the function-form kernel representation to depict the inherent physical meaning of blur kernels and to deduce kernels from saturated regions. Our function-form kernel is more restrictive than matrix-form kernels. However, this restriction fits the physical meaning of blur perfectly, as camera motion is always continuous. We also introduce a novel energy-based function to generate spatially variant kernels based on the deduced kernels. Given a set of function-form kernels, our non-uniform kernel estimation is able to consider both the accuracy and the spatial continuity of kernels in an optimal way. Finally, we propose using the deduced non-uniform kernels as constraints for the estimation of the final blur kernels in a multi-scale deconvolution, and we further emphasize the continuity of kernels via the function-form kernel regularization so as to enhance the accuracy of kernel estimation.
III. FUNCTION-FORM KERNEL REPRESENTATION
We present our function-form kernel representation in this
section. As shown in Fig. 2(a), a blur kernel K is represented
as a 2D matrix in current deblurring schemes. The blurring is
thus formulated as
$$B = I \otimes K + N. \quad (1)$$
I is the latent sharp image and B is the observed blurry
image. N is additive noise and ⊗ is the convolution operator.
Such a representation of the blur kernel is simple and easy
for deconvolution. Unfortunately it does not reflect the mechanisms of the camera system, so important physical meanings
of the blur kernel may be overlooked.
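To make the formulation concrete, Eq. (1) can be sketched in a few lines of Python; the helper name, its defaults, and the use of scipy are illustrative assumptions rather than part of the original scheme. The final clipping step models the sensor saturation that breaks the linearity of Eq. (1) in night images.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_blur(I, K, noise_sigma=0.01, clip=True):
    """Blur a sharp image I (float array in [0, 1]) with kernel K per Eq. (1)."""
    B = fftconvolve(I, K, mode="same") + noise_sigma * np.random.randn(*I.shape)
    # Clipping models sensor saturation; clipped pixels violate the linear
    # model B = I (*) K + N, which is why generic schemes fail on night images.
    return np.clip(B, 0.0, 1.0) if clip else B
```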
In this paper we propose an alternative blur kernel representation in function form with three components,

$$K_{u,v}(x, y) = \{c(u, v),\ w(u, v),\ G_\sigma(x - u, y - v)\}, \quad (2)$$

where (u, v) is a point in the kernel plane as shown in Fig. 2(b).
c(u, v), as illustrated by the red curve in Fig. 2(b),
represents one point in the 2D trajectory of camera shake
projected to the imaging plane. c(u, v) = 1 if the camera
shake passes location (u, v). The trajectory is denoted by
C = {c(u, v)|c(u, v) = 1}. The advantage of introducing
the trajectory is clear. It must be a narrow and continuous
curve from any non-zero point to another non-zero point in
the plane. When non-zero points of an estimated blur kernel
are discontinuous in the plane, we immediately know that the
kernel estimation must be wrong. This important property can
serve as a kernel prior in deconvolution.
w(u, v), as illustrated by the central brightness of circles
in Fig. 2(b), is the intensity of exposure at (u, v). We assume
that the brightness of lighting sources is constant during
exposure. With this assumption, the intensity w(u, v) is
proportional to the time spent at (u, v). We can observe that
inflection points of the trajectory in Fig. 2(b) have a larger
intensity w(u, v) because it takes more time there to change
camera motion. In general, the estimation at inflection points
of the trajectory is more reliable than at other points. It can
also serve as a kind of kernel prior.
G σ (x − u, y − v), as illustrated by the area of circles
in Fig. 2(b), is a 2D zero-mean Gaussian function, which
characterizes the expansion of an ideal point lighting source in
the imaging plane. σ is the standard deviation. It is determined
by the camera focus, scene depth, and camera motion in
the perpendicular direction to the image plane. The spatially
non-uniform deblurring processes different regions of blurry
images using different blur kernels. In the function-form
representation, it can be described more precisely as different
blur kernels have different σ but share a similar trajectory C.
This can also serve as a kernel prior in non-uniform deblurring.
Given a blur kernel K, its matrix representation can be written as

$$K = \sum_{(u,v)} K_{u,v}(x, y) = \sum_{C} w(u, v)\, G_\sigma(x - u, y - v). \quad (3)$$
In other words, the matrix-form blur kernel can be represented by a Gaussian mixture. The corresponding formulation of the blurring in Eq. (1) can then be rewritten as

$$B = I \otimes \sum_{C} w(u, v)\, G_\sigma(x - u, y - v) + N. \quad (4)$$
Therefore, the function-form representation and matrix-form
representation can be interchanged for different purposes.
In the following deblurring process, we use these two kinds of representations alternately. We adopt the function-form kernel representation for initialization as well as regularization, since it preserves the physical meaning of blur caused by camera motion. Camera motions, which cause blur in dim light, are always continuous: a camera cannot be in different places or move in two directions at the same time. Such features are inherently supported by the function-form representation, which is relatively restrictive but quite reasonable. We then utilize the matrix-form representation for deconvolution to facilitate calculation and refinement.
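As a concrete illustration of the conversion in Eq. (3), the sketch below renders a function-form kernel as a matrix-form kernel, i.e., as a mixture of Gaussians along the trajectory. The data structures (a list of trajectory points with per-point weights) are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def gaussian2d(size, cu, cv, sigma):
    """2D Gaussian G_sigma centered at (cu, cv) on a size x size grid."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cu) ** 2 + (y - cv) ** 2) / (2.0 * sigma ** 2))

def function_to_matrix_kernel(trajectory, weights, sigma, size):
    """trajectory: (u, v) points with c(u, v) = 1; weights: w(u, v) values."""
    K = np.zeros((size, size))
    for (u, v), w in zip(trajectory, weights):
        K += w * gaussian2d(size, u, v, sigma)  # Eq. (3): Gaussian mixture
    return K / K.sum()                          # normalize to unit mass
```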
IV. NON-UNIFORM KERNEL SELECTION

In this section, we present how to deduce non-uniform kernels from saturated regions. The proposed processing is shown in Fig. 3. We first detect saturated regions from a blurry image. Then the detected saturated regions are used to initialize the function-form kernels. Finally, we generate spatially variant blur kernels by minimizing a proposed energy function.
A. Saturated Region Detection
It is common in blurry night images to observe saturated regions of similar shape, which are caused by lighting sources (e.g., lamps) captured with a slow shutter speed. Although the shapes differ across images, saturated regions share a unique property: the pixels of lighting sources have relatively higher intensities than other pixels. Therefore,
we first perform Laplacian of Gaussian (LoG) filtering on a
blurry image B(x, y) to extract edges of saturations,
$$B_e(x, y) = \begin{cases} 1, & G_{\sigma_0} \otimes B(x, y) > T \\ 0, & \text{otherwise}, \end{cases} \quad (5)$$
where G σ0 denotes the LoG filter and σ0 is the standard
deviation. T is a threshold to retain strong edges. We further
filter out isolated points and short edges, which are unlikely
to reflect blur kernels.
One can easily observe that there are still some falsely detected edges. These false cases, mainly arising from imperfect point light sources as well as the detection method itself, should be removed so as to enhance the reliability of kernel initialization. We propose using the function-form representation to facilitate noise removal as well as kernel initialization.
B. Function-Form Kernel Initialization
For the edge image Be (x, y) in Fig. 3(b), after removing
isolated pixels and short edges, we deduce function-form
kernels from the detected saturated regions.
For each blur kernel, the trajectory of camera shake C
is initialized by the central skeleton of the corresponding
saturated region based on discrete local symmetry [33]. Taking
the saturated region shown in Fig. 4(a) as an example, we
first put vertices along the boundary of the saturated region
and generate a triangular mesh (denoted by blue lines) using
the vertices via a Delaunay triangulation. Then we select the
midpoints of two internal edges for a triangle with two neighboring vertices, or the centroid point for a triangle without
neighboring vertices. Collecting all selected points (denoted by
green points) generates the final skeleton. More details about
the skeleton extraction are described in [33].
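The triangulation step can be sketched with scipy; the rule below (midpoints of internal, i.e. shared, edges, plus centroids of junction triangles) is a simplification of the discrete-local-symmetry method of [33], and an unconstrained Delaunay triangulation covers the convex hull rather than the region itself, so this is only a rough approximation.

```python
import numpy as np
from scipy.spatial import Delaunay

def skeleton_points(boundary_pts):
    """boundary_pts: (n, 2) array of vertices along the region boundary."""
    tri = Delaunay(boundary_pts)
    points = []
    for simplex, nbrs in zip(tri.simplices, tri.neighbors):
        internal = [i for i, n in enumerate(nbrs) if n != -1]
        if len(internal) == 3:                # junction triangle: centroid
            points.append(boundary_pts[simplex].mean(axis=0))
        for i in internal:
            # The edge opposite vertex i is shared with neighbor simplex i.
            edge = np.delete(simplex, i)
            points.append(boundary_pts[edge].mean(axis=0))
    if not points:
        return np.empty((0, 2))
    # Shared edges are visited twice; deduplicate the midpoints.
    return np.unique(np.round(points, 6), axis=0)
```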
Assuming that the saturated region is caused by a point
light source, the parameter σ in the function-form blur kernel
should be the same at different (u, v). Thus w(u, v) and
σ can be solved by

$$\arg\min_{w(u,v),\,\sigma}\ \Big\| B_e^K(x, y) - \sum_{C} w(u, v)\, G_\sigma(x - u, y - v) \Big\|_2^2, \quad (6)$$
Fig. 3. Non-uniform kernel initialization. (a) Blurry image. (b) Strong-edge map extracted from the blurry image using the LoG filtering of Eq. (5); after filtering out isolated points and short edges, the function-form kernel representation is fitted to each detected saturated region to obtain (c) the deduced function-form kernels. (d) Selected non-uniform blur kernels after non-uniform kernel selection using the energy function of Eq. (8).
Fig. 4. Individual function-form kernel initialization. (a) Saturated region,
(b) saturated region with triangle mesh in blue and selected points in green,
(c) deduced curve of {w(u, v) × c(u, v)} in the function-form kernel, and
(d) deduced blur kernel. (Better viewed in the electronic version).
where C is known. B_e^K(x, y) denotes a patch covering the saturated region as shown in Fig. 4(a), which is normalized such that |B_e^K| = 1.
Estimating w(u, v) in Eq. (6) directly is difficult, as there are two unknown variables without regularization and w(u, v) has non-zero values only at the trajectory points. We thus give an approximate solution as in Eq. (7), which manages to approach the solution of Eq. (6) by updating σ and w(u, v) iteratively. For a given σ, we calculate w(u, v) by

$$w(u, v) = \sum_{(x,y)} B_e^K(x, y) \odot G_\sigma(x - u, y - v), \quad (7)$$

where ⊙ denotes the dot product of two matrices. With the calculated w(u, v), we can update σ by searching for the minimizer of Eq. (6).
The final w(u, v) and σ are updated iteratively until convergence. For the saturated region shown in Fig. 4(a), Fig. 4(c)
presents the derived curve of {w(u, v) × c(u, v)}. We can
observe that the curve has similar brightness variation to that
of the saturated region. Fig. 4(d) shows the deduced blur kernel
from the saturated region.
Fig. 3(c) shows all deduced function-form kernels
from Fig. 3(b). It is achieved by first eliminating isolated
points and short edges with length shorter than 4 pixels and
then performing the function-form kernel initialization for
each remaining edge as illustrated in Fig. 5. Some detected saturated regions and the correspondingly deduced function-form kernels are compared in Fig. 5. We can observe that
the detected saturated regions tend to have certain noise
and distortion due to the complicated background and light
sources. The deduced kernels not only well preserve the
physical meaning of blurring but also help to suppress the
detection noise.
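The alternating estimation of w(u, v) and σ in Eqs. (6) and (7) can be sketched as below, reusing the gaussian2d and function_to_matrix_kernel helpers from Section III; the square-patch assumption, the σ search grid, and the iteration count are our own choices.

```python
import numpy as np

def fit_function_kernel(patch, trajectory,
                        sigmas=np.linspace(0.5, 5.0, 10), n_iters=5):
    """patch: square region B_e^K; trajectory: (u, v) points with c(u, v) = 1."""
    patch = patch / patch.sum()               # normalization |B_e^K| = 1
    size, sigma = patch.shape[0], sigmas[0]
    for _ in range(n_iters):
        # Eq. (7): dot product of the patch with a Gaussian at each point.
        w = np.array([(patch * gaussian2d(size, u, v, sigma)).sum()
                      for (u, v) in trajectory])
        # Eq. (6): grid-search the sigma minimizing the residual.
        def residual(s):
            K = function_to_matrix_kernel(trajectory, w, s, size)
            return np.sum((patch - K) ** 2)
        sigma = min(sigmas, key=residual)
    return w, sigma
```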
C. Non-Uniform Kernel Initialization
For the deduced function-form kernels, we can convert them to a set of candidate matrix-form kernels {K_i} according
to Eq. (3). One way to use the kernels is through non-uniform
kernel selection according to their spatial locations. However,
we notice that there are some falsely detected kernels that
should not be used for deblurring and the spatial distribution
of kernels is often far from uniform. Thus we propose an
Fig. 5. Saturated regions and the corresponding function-form kernels. The top row shows detected saturated regions and the bottom row shows the correspondingly deduced blur kernels.
algorithm to select non-uniform kernels from {K_i} by minimizing
an energy function.
In our scheme, we partition a blurry image into M1 × M2
regions with overlapping margins, where both M1 and M2 are
integers. Similar to [26], [27], and [34], we assume that the
kernel in each region is uniform. Then we select kernels for
all the blurry regions {B_r} by optimizing the energy function

$$E(F) = \sum_{r} D(f_r) + \lambda \sum_{(r,n) \in N} S(f_r, f_n), \quad (8)$$

where F is the index set of the finally selected non-uniform kernels, f_r and f_n are the indices of the kernels used in regions B_r and B_n, and N denotes the set of all pairs of neighboring regions.
D(f_r) in Eq. (8) is a data function measuring the accuracy of kernel K_{f_r} at region B_r. It is usually evaluated on the deconvolved region, whose quality can be assessed via the gradient distribution (a heavy-tailed distribution) [1].
In blurred image restoration, the l1/l2 norm [15] has been widely adopted as a prior to approximate the gradient distributions of images. However, when dealing with night images, it is not discriminative enough to distinguish the gradient distributions produced by different kernels, as demonstrated in Fig. 6. Fig. 6(a) shows the responses of the l1/l2 norm on gradients using two million sharp and blurred patch pairs randomly sampled from sharp and blurred image pairs. Fig. 6(b) presents the corresponding responses of kurtosis [35]. We observe that kurtosis can better distinguish the gradient distributions among kernels than l1/l2. Thus we propose using kurtosis to estimate the accuracy of kernels as

$$D(f_r) = \mathrm{Kurt}(I_r^{f_r}), \quad (9)$$
where I_r^{f_r} denotes the deconvolved region of B_r using the kernel indexed by f_r, and
$$\mathrm{Kurt}(X) = \frac{E[(X - \mu)^4]}{(E[(X - \mu)^2])^2} = \frac{\mu_4}{\sigma^4}, \quad (10)$$

where X is a vector, μ is the mean value of X, μ4 is the fourth moment about the mean, and σ is the standard deviation.
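For reference, Eq. (10) in code is a one-line sample statistic over the gradient values of a deconvolved region:

```python
import numpy as np

def kurt(X):
    """Kurt(X) = E[(X - mu)^4] / (E[(X - mu)^2])^2 = mu_4 / sigma^4, Eq. (10)."""
    X = np.asarray(X, dtype=np.float64).ravel()
    mu = X.mean()
    return np.mean((X - mu) ** 4) / np.mean((X - mu) ** 2) ** 2
```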
S(f_r, f_n) in Eq. (8) is a smoothness term that evaluates the similarity between neighboring kernels,

$$S(f_r, f_n) = \min_{\theta \in [-\theta_0, \theta_0],\ \eta \in [-\eta_0, \eta_0]} \left\| R_{\theta,\eta}(K_{f_r}) - K_{f_n} \right\|_1, \quad (11)$$

where K_{f_r} and K_{f_n} denote the kernels indexed with f_r and f_n in the regions B_r and B_n, and R_{θ,η}(·) is a rotation and scale operator, with θ ranging from −θ0 to θ0 and η from −η0 to η0.

Fig. 6. Distributions at the gradient level (using two million sharp and blurred patch pairs randomly sampled from natural sharp night images and the corresponding blurry images synthesized via blur kernels; the patch size is 32 × 32). (a) Response distribution on patch gradients using the l1/l2 norm. (b) Response distribution on patch gradients using the kurtosis measurement. (Better viewed in the electronic version).
By minimizing the energy function (8), we can determine the initial kernel for each region with regard to both sharpness and continuity through the image. The optimization problem can be solved iteratively by graph cuts [36], resulting in the deduced non-uniform blur kernels shown in Fig. 3(d). The kernels in Fig. 3(d) are spatially varying, and the trajectory of each kernel is quite similar to the light streaks appearing in Fig. 3(a), which demonstrates the effectiveness of the non-uniform kernel selection.
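For illustration, a much simplified sketch of minimizing Eq. (8) is given below. The paper uses graph cuts [36], [41]; here plain iterated-conditional-modes sweeps stand in for that solver, and the rotation/scale search of Eq. (11) is replaced by a raw L1 kernel distance. D holds per-region, per-kernel data costs derived from Eq. (9) (e.g. negative kurtosis if sharper deconvolutions should score lower cost); all names are illustrative.

```python
import numpy as np

def select_kernels(D, neighbors, kernels, lam=10.0, n_sweeps=10):
    """D: (n_regions, n_kernels) data costs; neighbors: (r, n) index pairs."""
    n_regions, n_kernels = D.shape
    S = np.array([[np.abs(ka - kb).sum() for kb in kernels]
                  for ka in kernels])         # pairwise L1 kernel distances
    adj = [[] for _ in range(n_regions)]
    for r, n in neighbors:
        adj[r].append(n)
        adj[n].append(r)
    f = D.argmin(axis=1)                      # greedy initialization
    for _ in range(n_sweeps):                 # coordinate-descent sweeps
        for r in range(n_regions):
            costs = D[r] + lam * sum(S[:, f[n]] for n in adj[r])
            f[r] = costs.argmin()             # best label given the neighbors
    return f
```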
Fig. 7. Coarse-to-fine kernel estimation with regularization (first two scales). The kernel estimation is first performed using the down-sampled blurry image
as well as down-sampled initial kernels (at scale M). After regularization, the estimated blur kernels are up-sampled to a higher resolution (at scale M − 1),
which will be used as initial kernels for the kernel estimation at the higher resolution. This process is repeated M times to finally obtain estimated kernels at
the original resolution.
The proposed non-uniform energy minimization can gracefully select kernels satisfying not only the local property but also the global non-uniform property. In most cases a local patch may not contain any kernel candidate, since saturated lights are randomly distributed; via the non-uniform kernel selection, however, each local region can select the most suitable kernel from all the kernel candidates. The local smoothness term between neighboring kernels also makes the method more robust, since some regions, such as sky and smooth regions, may lack the ability to distinguish between kernels. The non-uniform algorithm does not rely on plenty of light sources; if the input image contains only one, it degrades to uniform kernel selection, and more discussion can be found in the experimental section.
V. DEBLURRING WITH DEDUCED KERNELS
We can directly apply the initial kernels, as exemplified
in Fig. 3(d), for non-blind deconvolution. Such a simple
method can get very good results if the kernels are well
deduced. However, the initial kernels, though enhanced by the
function-form approximation and non-uniform optimization,
are often not accurate enough since saturated regions may not
be generated by ideal point light sources. More importantly,
the kernels are deduced from only saturated regions
while ignoring non-saturated ones which are often more
informative.
A. Non-Uniform Kernel Estimation
Therefore, we further refine the deduced kernels by
non-uniform kernel estimation and generate the final result
by deconvolution with the estimated kernels. As illustrated
in Fig. 7, we employ the coarse-to-fine strategy for kernel estimation over M scales, which provides flexibility in adjusting kernel sizes. Our kernel estimation
is first performed on the downsampled blurry image as
well as downsampled initial kernels. After regularization, the
estimated blur kernels are upsampled to a higher resolution
(at scale M − 1), which will be used as initial kernels for
kernel estimation at the higher resolution. This process is
repeated M times to finally obtain estimated kernels at the
original resolution.
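Structurally, the coarse-to-fine loop can be sketched as follows; estimate_kernel and regularize are callables standing for the per-scale solver of Eq. (12) and the function-form regularization of Section IV-B, and the plain zoom resampling is an assumption on our part.

```python
from scipy.ndimage import zoom

def coarse_to_fine(B, init_kernels, estimate_kernel, regularize, M=3):
    """B: blurry image; init_kernels: per-region initial matrix-form kernels."""
    kernels = [zoom(k, 0.5 ** (M - 1)) for k in init_kernels]
    for s in range(M - 1, -1, -1):            # coarsest scale first
        B_s = zoom(B, 0.5 ** s) if s > 0 else B
        kernels = [estimate_kernel(B_s, k) for k in kernels]   # Eq. (12)
        if s > 0:                             # regularize (Section IV-B) and
            kernels = [zoom(regularize(k), 2.0) for k in kernels]  # upsample
    return kernels
```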
We propose using the initial kernels as priors and
estimating blur kernels by utilizing non-saturated regions.
Moreover, since we have already imposed the smoothness
between adjacent regions in kernel initialization, we simplify
the kernel estimation to each region Br with the corresponding
kernel prior as

$$F(K_r) = \arg\min_{\nabla I_r,\, K}\ \big\| W_r^1 \odot (\nabla B_r - \nabla I_r \otimes K) \big\|_2^2 + \lambda_1 \frac{\|\nabla I_r\|_1}{\|\nabla I_r\|_2} + \lambda_2 \big\| W_r^2 \odot K \big\|_1. \quad (12)$$

Here ⊙ is the element-wise operator, and λ1 and λ2 are weighting scalars. I_r denotes a latent version of B_r. W_r^1 and W_r^2 are weight matrices determined by the latent region I_r and the initial kernel of B_r, respectively. K and K_r are matrix-form kernels to facilitate deconvolution. The term ‖∇I_r‖1/‖∇I_r‖2 is the l1/l2 norm proposed in [15] to avoid a delta kernel estimate.
The weight matrix W_r^1 is used to reduce the effect of outliers from the linear blur formation assumption. Similar to [8], we generate W_r^1 adaptively to penalize pixels that are outliers (e.g. saturated and noisy pixels) by assigning smaller values to them, as follows:

$$W_r^1(x, y) = \exp\Big(-\frac{\|\nabla B_r(x, y) - (\nabla I_r \otimes K)(x, y)\|^2}{2\pi\sigma^2}\Big). \quad (13)$$
The weight matrix W_r^2 introduces a kernel constraint into the estimation. It is determined by the initial kernel K_r^o of region B_r as

$$W_r^2 = 1 - T(K_r^o) \otimes G, \quad (14)$$
where T(K) extracts the trajectory and intensity of the kernel K and G denotes a Gaussian filter. We exclude the expansion of the initial kernel here to reduce the side effect of the light-source shape. Fig. 8 shows an example matrix W_r^2 given an initial kernel K_r^o. We can observe that in W_r^2, the nearer an element is to the trajectory, the smaller its weight value. This helps us achieve a good balance between fully utilizing the initial blur kernels and obtaining more accurate kernel information from non-saturated regions. Through the proposed kernel prior, denoted by the third term in Eq. (12), our kernel estimation not only makes use of the kernel prior deduced from saturated regions but also helps to preserve spatial smoothness between regions.
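The two weight matrices can be sketched as below; in weight_w2, the operator T(·), which keeps only trajectory and intensity, is approximated by the initial kernel's support scaled by its normalized intensities, which is our reading of the text rather than the authors' exact construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def weight_w1(grad_B, grad_I, K, sigma=1.0):
    """Eq. (13): down-weight pixels that violate the linear blur model."""
    resid = grad_B - fftconvolve(grad_I, K, mode="same")
    return np.exp(-resid ** 2 / (2.0 * np.pi * sigma ** 2))

def weight_w2(K_init, support_eps=1e-4, blur_sigma=1.0):
    """Eq. (14): W_r^2 = 1 - T(K_r^o) (*) G; small weights near the trajectory."""
    T = np.where(K_init > support_eps, K_init / K_init.max(), 0.0)
    return 1.0 - gaussian_filter(T, blur_sigma)
```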
Fig. 8. Matrix W_r^2. (a) Initial kernel K_r^o and (b) kernel constraint W_r^2. Given the initial kernel K_r^o, the kernel weight constraint is obtained via Eq. (14). We can observe that in W_r^2, the nearer an element is to the trajectory, the smaller its weight value.

Since Eq. (12) is a non-convex objective function with three unknown variables, we solve it by updating the three variables iteratively. We first update W_r^1(x, y) according to Eq. (13) given the initial blur kernels. Then ∇I_r is updated via the l1/l2 regularization optimization [15], and K is updated via iteratively reweighted least squares [37].

Distinct from other coarse-to-fine methods, we would like to point out that we propose using the function-form representation to enhance the continuity of the kernel prior in the regularization. Specifically, we approximate the estimated matrix-form kernels at each scale (except the original resolution) by function-form kernels and then use the regularized kernels for the processing at the following scale. For an estimated kernel K_r^M at scale M, we treat it as a patch and use the method presented in Section IV-B for function-form kernel generation. For example, Fig. 9(a) shows one estimated kernel without regularization. We can observe that it has several breaking points and isolated short curves. After regularization, a clean and continuous version is obtained, as shown in Fig. 9(b). Fig. 9(c) and Fig. 9(d) present two deblurred results generated using the kernels in Fig. 9(a) and Fig. 9(b), respectively. One can easily notice that the ringing artifacts in Fig. 9(c) have been greatly suppressed in Fig. 9(d) due to the regularization.

Fig. 9. Kernel regularization. (a) The intermediate estimated kernel before regularization; (b) the deduced kernel after regularization using the function-form kernel representation; (c) and (d) two deblurred results generated using the kernels from (a) and (b), respectively.

Notice that our function-form kernel regularization is introduced to find optimal kernels in a kernel estimation framework by enhancing the continuity of kernels and reducing isolated points. It is not proposed to deal with the imperfection of kernel estimation, the problem addressed in [17] and [38], because our scheme has much more reliable initial kernels via our function-form kernel initialization based on saturated regions. Methods presented to handle inaccurate blur kernels, such as the wavelet tight frame approach [17], [38], may be adopted to further enhance the performance of our scheme in case the initial kernels cannot be well estimated.
B. Non-Blind Deconvolution

After kernel estimation, we assign each region B_r a blur kernel K_r and produce a latent version I_r by solving

$$\arg\min_{I_r}\ \big\| W_r^1 \odot (B_r - I_r \otimes K_r) \big\|_2^2 + \lambda_3 \|\nabla I_r\|_1, \quad (15)$$

where W_r^1 is defined in Eq. (13), which deals with saturated regions, and λ3 is a weight factor. When all the regions are processed, we stitch them together and produce the final latent image. We would like to point out that we do not adopt any complicated method for merging regions but simply average the overlapped pixels. We maintain smoothness between adjacent regions by imposing the smoothness constraint on blur kernel initialization, and the initial kernels are also used to guide the generation of the estimated kernels.
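The paper does not detail the solver for Eq. (15); the sketch below minimizes it by plain gradient descent, with the |∇I|_1 prior smoothed by a small ε so that simple gradient steps apply. Step size and iteration count are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def deconvolve(B, K, W1, lam3=0.003, step=0.5, n_iters=200, eps=1e-3):
    """Minimize ||W1 * (I (*) K - B)||^2 + lam3 |grad I|_1 over I, per Eq. (15)."""
    I = B.copy()
    K_adj = K[::-1, ::-1]                     # adjoint (flipped) kernel
    for _ in range(n_iters):
        resid = W1 * (fftconvolve(I, K, mode="same") - B)
        grad_data = fftconvolve(W1 * resid, K_adj, mode="same")
        gy, gx = np.gradient(I)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)   # smoothed |grad I|
        div = np.gradient(gy / mag, axis=0) + np.gradient(gx / mag, axis=1)
        I -= step * (grad_data - lam3 * div)  # TV gradient = -div(grad I/|grad I|)
    return np.clip(I, 0.0, 1.0)
```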
VI. EXPERIMENTAL RESULTS
We evaluate the performance of our proposed deblurring scheme in comparison with state-of-the-art methods, including generic ones ([1], [15], [34], [39], and [40]) and deblurring schemes based on light streaks [13] (just released) and on handling saturation [9]. In the following, we first introduce the details of our implementation and the parameters used for generating all the test results. Then we present comparative results on both synthetic and real examples.
A. Parameters and Implementation
In our tests, all the parameters are either fixed or determined
automatically.
• In the detection of saturated regions, the threshold T
in Eq. 5 is automatically determined so that 2% of the
strong edges are preserved.
• In non-uniform kernel selection, the factor λ weighting the neighbor smoothness in Eq. (8) is set to 10.0. The rotation in Eq. (11) runs from −15 to 15 in steps of 5, i.e., θ0 = 15; the scale is kept constant (η = 1) for less computation. The size of the regions, M1 × M2, is set adaptively according to the image resolution: M1 = ⌈maxsize/300⌉ and M2 = M1, where ⌈·⌉ denotes the ceiling operator and maxsize is the maximum dimension of the image.
• In non-uniform kernel estimation, λ1 = 0.01 and λ2 depends on the detected kernel size h, with λ2 = 0.02 × h in Eq. (12). The scale number is fixed to M = 3 in the coarse-to-fine estimation.
• In deconvolution, the weight factor λ3 in Eq. (15) is 0.003.

Fig. 10. Comparison with five state-of-the-art generic deblurring methods on two real examples, "Yacht" and "Christmas Socks". For each example, we show the original image first, followed by an enlarged region denoted by the red box. Then six results generated by Fergus et al. 2006, Krishnan et al. 2011, Goldstein and Fattal 2012, Zhong et al. 2013, Xu et al. 2013, and our method are presented, respectively. (a) Yacht. (b) Blurry region. (c) Fergus et al. 2006. (d) Krishnan et al. 2011. (e) Goldstein and Fattal 2012. (f) Zhong et al. 2013. (g) Xu et al. 2013. (h) Our method. (i) Christmas Socks. (j) Blurry region. (k) Fergus et al. 2006. (l) Krishnan et al. 2011. (m) Goldstein and Fattal 2012. (n) Zhong et al. 2013. (o) Xu et al. 2013. (p) Our method.
We tested on 34 night images (4 synthetic ones used in [13] and 30 real examples). We implemented our method in MATLAB and conducted experiments on a PC with a dual 3.74 GHz Intel Core i7 CPU and 16 GB RAM. For an image of size 1500 × 1000, the saturated region detection takes less than 1 second. The function-form kernel initialization for one saturated region needs less than 0.1 second.
The computational cost of calculating the non-uniform kernels in our scheme depends on the complexity of the graph-cut minimization. We adopt the minimization algorithm in [41]. Assuming there are M regions and N selected kernels, the trivial upper bound of the complexity of the graph-cut minimization is O(MN²|c|), where |c| is the cost of the minimum cut; for typical vision problems it gives near-linear performance [41]. It takes nearly 30 minutes to finish the non-uniform kernel selection when the kernel number is 80.
In the following, we first evaluate the effectiveness of the function-form kernel representation and then present visual examples for comparison with other methods. Due to limited space, we present only the details of the deblurring results in local regions. A complete description and more results, including deblurred images and deduced blur kernels, can be found in the supplementary material.
B. Effectiveness of Function-Form Kernel Representation
We evaluate the effectiveness of our function-form kernel representation by comparing estimated kernels as well as deblurred images generated with and without the function-form kernel representation. Results are generated using our implemented scheme, as shown in Fig. 12. We can easily observe that our function-form kernel representation benefits both kernel initialization and estimation, and thus achieves much better results than the variant without it.

Fig. 12. Deblurring with and without our function-form representation. From top to bottom: deblurred image regions, initial kernels, and estimated kernels. (a) Without the function-form representation. (b) With the function-form representation.

Fig. 11. Comparison with saturation-based methods on synthetic examples. From top to bottom: blurry image, cropped region with ground-truth kernel, results generated by Whyte et al. 2011, Hu et al. 2014, and our method, respectively. (a) Car. (b) Building. (c) Garden. (d) Parking.
C. Non-Uniform Kernel Selection
We would like to point out that our energy-based non-uniform kernel initialization enables our scheme to be fully independent of any assumption on the number of light sources or the spatial distribution of kernels. Our non-uniform kernel initialization starts from a candidate kernel set {K_i} (i ≥ 1), which does not contain any location information of kernels. For each region, we propose assigning one initial kernel selected from {K_i} by minimizing the energy function Eq. (8) with regard to both the accuracy of kernels and the spatial smoothness among adjacent kernels. In particular, when i = 1, our scheme assigns each region the kernel K_1 via Eq. (8). In the following step, non-uniform kernel estimation, our scheme further refines the kernel of each region by a coarse-to-fine strategy using the initial kernel as a prior via Eq. (12). Our experimental results also demonstrate the effectiveness of our scheme. For example, only several light sources are concentrated at the bottom center of Fig. 14(c), whereas our scheme successfully deduces non-uniform kernels for all the image regions.

Fig. 13. Cumulative error-ratio histogram on the synthetic dataset.

TABLE I. Quantitative evaluation using the no-reference metric proposed in [43]. The higher the value, the better the quality. Our scheme achieves the best results.
D. Comparison With Generic Methods
In this subsection, we compare our scheme with five
state-of-the-art generic deblurring methods. Among them,
four schemes, [1], [15], [39], and [40], estimate uniform kernels, whereas [34] deduces non-uniform kernels for deblurring. The deblurred images are presented in Fig. 10. We can easily observe that these generic methods produce poor results, because night images with low contrast and saturated regions reduce the effectiveness of their kernel estimation. In contrast, our scheme outperforms all the other methods and generates much cleaner and more vivid latent images.
E. Comparison With Saturation-Based Methods
In this subsection, we evaluate our scheme in comparison with the methods presented in [9] and [13]. The former proposes modelling saturation for non-blind deconvolution, and the latter estimates a uniform kernel from light streaks in low-light images. Fig. 11 shows comparison results on synthetic examples. Our scheme achieves the cleanest and sharpest results, as demonstrated by the four cropped regions. Similar higher-quality outputs are also observable in Fig. 14, which presents comparison results on real examples. One can easily notice that our scheme provides vivid latent images, whereas Whyte et al.'s method fails to estimate the blur kernels and still leaves light streaks caused by blur (e.g. as shown in (b) and (d)), and Hu et al.'s scheme produces visible ringing artifacts.
F. Comparison on Synthetic Images
We further evaluate the performance of our scheme in terms of objective quality. We adopt the quantitative evaluation, the cumulative error-ratio histogram, originally presented in [42]. In this test, we use the synthetic dataset provided in [13], which contains 154 blurry images produced from 11 low-light images using 14 blur kernels. As shown in Fig. 13, our scheme achieves the best performance among the six compared methods.
We also evaluate the perceptual quality of the deblurred results on the synthetic images using the no-reference metric proposed in [43]. The metric incorporates features that capture common deblurring artifacts and is learned from a user-labeled dataset. Table I shows the average score of each method. Our method achieves the highest visual score, which further validates its effectiveness.
We further evaluate our scheme with two kernels with complex trajectories from [2]. As shown in Fig. 11, our scheme is capable of deducing these complex kernels successfully and achieves the best results compared with the other two related algorithms.
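For completeness, the error-ratio measure of [42] behind Fig. 13 can be sketched as follows, assuming that deblurred results obtained with the estimated and the ground-truth kernels are both available:

```python
import numpy as np

def error_ratio(I_est, I_gt_kernel, I_sharp):
    """SSD with the estimated kernel over SSD with the ground-truth kernel [42]."""
    ssd = lambda a, b: np.sum((a - b) ** 2)
    return ssd(I_est, I_sharp) / ssd(I_gt_kernel, I_sharp)

def cumulative_histogram(ratios, thresholds=np.arange(1.0, 5.5, 0.5)):
    """Fraction of test images whose error ratio falls below each threshold."""
    ratios = np.asarray(ratios)
    return np.array([(ratios <= t).mean() for t in thresholds])
```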
G. Limitations and Discussions
This work is based on the assumption that night images contain saturated regions produced by light sources that illuminate the scene. However, in cases where this assumption does not hold, the proposed scheme may fail. Another concern is dealing with very large saturated regions, as it is difficult to accurately restore pixel values in a large saturated region using current deconvolution schemes, even when a good blur kernel is estimated.

Fig. 14. Comparison with saturation-based methods on real examples. From top to bottom: blurry image, cropped region, results generated by Whyte et al. 2011, Hu et al. 2014, our method, and the estimated kernels, respectively. (a) Market. (b) Castle. (c) Shop. (d) Magic pagoda.
On the other hand, our proposed function-form kernel representation can be beneficial to generic deblurring. Even for images without saturated regions, we can utilize the function-form representation for common kernel estimation, as described and demonstrated for kernel regularization in Sections V-A and VI-B. Our energy-based non-uniform kernel initialization may also be extended to support generic deblurring. We would like to investigate these directions in future work.
The promising results presented in this paper have motivated us to consider the deblurring problem from a new
viewpoint, namely, deducing blur kernels from blurry images
ahead of deblurring. In our experience, even without saturated
regions, people can easily guess the orientation, size, and even
the exact shape of camera shake trajectories from the content
of blurry images. With the recent development in big data
and deep learning, it may be possible to develop intelligent
algorithms that could deduce blur kernels from a wide range
of cues beyond saturated regions. This will also be one of our
future research directions.
VII. CONCLUSION

In this paper, we proposed a novel method to make use of saturated regions in night images for image deblurring. We propose an alternative kernel representation, the function-form kernel representation, to explicitly correspond to the physical meaning of image blur, and we produce function-form kernels with regard to saturated regions in blurry images. Given the function-form kernel set, we then propose the first energy-based non-uniform kernel initialization to deduce spatially variant kernels by considering both the accuracy of kernels and the spatial smoothness among adjacent kernels, which is independent of any assumption on the number of saturated regions or the spatial distribution of kernels. Finally, we estimate spatially variant kernels by introducing the function-form regularization to enhance the accuracy of the estimated kernels. Experiments on various challenging night images show that our proposed scheme consistently achieves superior performance.
Although our scheme is based on the existence of light sources, our algorithm, especially the function-form representation, can also benefit generic deblurring. The non-uniform kernel selection can also be extended to non-uniform schemes with limited initial kernels given. We would like to extend our function-form kernel representation to more generic cases in our future work.
REFERENCES
[1] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman,
“Removing camera shake from a single photograph,” in Proc. ACM
SIGGRAPH, 2006, pp. 787–794.
[2] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum, “Image deblurring with
blurred/noisy image pairs,” ACM Trans. Graph., vol. 26, no. 3, pp. 1–9,
2007.
[3] Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from
a single image,” ACM Trans. Graph., vol. 27, no. 3, pp. 73:1–73:10,
Aug. 2008.
[4] N. Joshi, S. B. Kang, C. L. Zitnick, and R. Szeliski, “Image deblurring
using inertial measurement sensors,” ACM Trans. Graph., vol. 29, no. 4,
pp. 30:1–30:9, Jul. 2010.
[5] S. Cho and S. Lee, “Fast motion deblurring,” ACM Trans. Graph.,
vol. 28, no. 5, p. 145, Dec. 2009.
[6] L. Xu and J. Jia, “Two-phase kernel estimation for robust motion
deblurring,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2010,
pp. 157–170.
[7] Y.-W. Tai and S. Lin, “Motion-aware noise filtering for deblurring of
noisy and blurry images,” in Proc. CVPR, Jun. 2012, pp. 17–24.
[8] S. Cho, J. Wang, and S. Lee, “Handling outliers in non-blind
image deconvolution,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV),
Nov. 2011, pp. 495–502.
[9] O. Whyte, J. Sivic, and A. Zisserman, “Deblurring shaken and
partially saturated images,” in Proc. IEEE Int. Conf. Comput. Vis.
Workshops (ICCV Workshops), Nov. 2011, pp. 745–752.
[10] S. Harmeling, S. Sra, M. Hirsch, and B. Schölkopf, “Multiframe blind
deconvolution, super-resolution, and saturation correction via incremental EM,” in Proc. 17th IEEE Int. Conf. Image Process. (ICIP), Sep. 2010,
pp. 3313–3316.
[11] B.-S. Hua and K.-L. Low, “Interactive motion deblurring using light
streaks,” in Proc. ICIP, 2011, pp. 1553–1556.
[12] F. Queiroz, T. I. Ren, L. Shapira, and R. Banner, “Image deblurring
using maps of highlights,” in Proc. IEEE Int. Conf. Acoust., Speech
Signal Process. (ICASSP), May 2013, pp. 1608–1611.
[13] Z. Hu, S. Cho, J. Wang, and M.-H. Yang, “Deblurring low-light images
with light streaks,” in Proc. CVPR, Jun. 2014, pp. 3382–3389.
[14] N. Joshi, C. L. Zitnick, R. Szeliski, and D. J. Kriegman, “Image
deblurring and denoising using color priors,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recognit. (CVPR), Jun. 2009, pp. 1550–1557.
[15] D. Krishnan, T. Tay, and R. Fergus, “Blind deconvolution using a
normalized sparsity measure,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recognit. (CVPR), Jun. 2011, pp. 233–240.
[16] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Efficient marginal
likelihood optimization in blind deconvolution,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recognit. (CVPR), Jun. 2011, pp. 2657–2664.
[17] H. Ji and K. Wang, “Robust image deblurring with an inaccurate blur
kernel,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1624–1634,
Apr. 2012.
[18] H. Ji and C. Liu, “Motion blur identification from image gradients,” in
Proc. CVPR, Jun. 2008, pp. 1–8.
[19] A. Tekalp, H. Kaufman, and J. W. Woods, “Identification of image and
blur parameters for the restoration of noncausal blurs,” IEEE Trans.
Acoust., Speech Signal Process., vol. 34, no. 4, pp. 963–972, Aug. 1986.
[20] J. P. Oliveira, M. A. T. Figueiredo, and J. M. Bioucas-Dias,
“Blind estimation of motion blur parameters for image deconvolution,” in Pattern Recognition and Image Analysis. Berlin, Germany:
Springer-Verlag, 2007, pp. 604–611.
[21] K.-C. Tan, H. Lim, and B. T. G. Tan, “Restoration of real-world motion-blurred images,” CVGIP, Graph. Models Image Process., vol. 53, no. 3,
pp. 291–299, May 1991.
[22] H. Yin and I. Hussain, “Blind source separation and genetic algorithm for image restoration,” in Proc. Int. Conf. Adv. Space Technol.,
Sep. 2006, pp. 167–172.
[23] M. S. C. Almeida and L. B. Almeida, “Blind and semi-blind deblurring of natural images,” IEEE Trans. Image Process., vol. 19, no. 1,
pp. 36–52, Jan. 2010.
[24] J.-F. Cai, H. Ji, C. Liu, and Z. Shen, “High-quality curvelet-based motion
deblurring from an image pair,” in Proc. IEEE Conf. Comput. Vis.
Pattern Recognit. (CVPR), Jun. 2009, pp. 1566–1573.
[25] N. Joshi, R. Szeliski, and D. Kriegman, “PSF estimation using
sharp edge prediction,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recognit. (CVPR), Jun. 2008, pp. 1–8.
[26] M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling, “Efficient filter flow
for space-variant multiframe blind deconvolution,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recognit. (CVPR), Jun. 2010, pp. 607–614.
[27] S. Harmeling, M. Hirsch, and B. Schölkopf, “Space-variant single-image blind deconvolution for removing camera shake,” in Proc. Adv.
Neural Inf. Process. Syst., 2010, pp. 829–837.
[28] L. Xu and J. Jia, “Depth-aware motion deblurring,” in Proc. IEEE Int.
Conf. Comput. Photography (ICCP), Apr. 2012, pp. 1–8.
[29] A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless, “Single
image deblurring using motion density functions,” in Proc. Eur. Conf.
Comput. Vis. (ECCV), 2010, pp. 171–184.
[30] Y.-W. Tai, P. Tan, and M. S. Brown, “Richardson-Lucy deblurring for
scenes under a projective motion path,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 33, no. 8, pp. 1603–1618, Aug. 2011.
[31] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring
for shaken images,” Int. J. Comput. Vis., vol. 98, no. 2, pp. 168–186,
Jun. 2012.
[32] Y. Xu, L. Wang, X. Hu, and S. Peng, “Single-image blind deblurring
for non-uniform camera-shake blur,” in Proc. Asian Conf. Comput. Vis.
Comput. Vis. (ACCV), 2012, pp. 336–348.
[33] J. J. Zou, H.-H. Chang, and H. Yan, “Shape skeletonization by identifying discrete local symmetries,” Pattern Recognit., vol. 34, no. 10,
pp. 1895–1905, Oct. 2001.
[34] L. Xu, S. Zheng, and J. Jia, “Unnatural L0 sparse representation for
natural image deblurring,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recognit. (CVPR), Jun. 2013, pp. 1107–1114.
[35] L. T. DeCarlo, “On the meaning and use of kurtosis,” Psychol. Methods,
vol. 2, no. 3, pp. 292–307, 1997.
[36] V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut
textures: Image and video synthesis using graph cuts,” ACM Trans.
Graph., vol. 22, no. 3, pp. 277–286, Jul. 2003.
[37] I. Daubechies, R. DeVore, M. Fornasier, and C. S. Güntürk, “Iteratively
reweighted least squares minimization for sparse recovery,” Commun.
Pure Appl. Math., vol. 63, no. 1, pp. 1–38, Jan. 2010.
[38] H. Ji and K. Wang, “A two-stage approach to blind spatially-varying
motion deblurring,” in Proc. CVPR, Jun. 2012, pp. 73–80.
[39] A. Goldstein and R. Fattal, “Blur-kernel estimation from spectral irregularities,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2012, pp. 622–635.
[40] L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang, “Handling noise
in single image deblurring using directional filters,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recognit. (CVPR), Jun. 2013, pp. 612–619.
[41] Y. Boykov and V. Kolmogorov, “An experimental comparison of
min-cut/max-flow algorithms for energy minimization in vision,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 26, no. 9, pp. 1124–1137,
Sep. 2004.
[42] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and
evaluating blind deconvolution algorithms,” in Proc. CVPR, Jun. 2009,
pp. 1964–1971.
[43] Y. Liu, J. Wang, S. Cho, A. Finkelstein, and S. Rusinkiewicz,
“A no-reference metric for evaluating the quality of motion deblurring,”
ACM Trans. Graph., vol. 32, no. 6, p. 175, 2013.
Lu Fang received the B.E. degree from the
University of Science and Technology of China,
in 2007, and the Ph.D. degree from The Hong Kong
University of Science and Technology, in 2011.
She is currently an Associate Professor with the
Department of Electronic Engineering and Information Science, University of Science and Technology
of China. Her research interests include subpixel
rendering, computational photography, and computer
vision.
Haifeng Liu received the B.S. degree from the
Department of Electrical Engineering, University of
Science and Technology of China, in 2012, where he
is currently pursuing the Ph.D. degree. His research
interests include image deblurring and video
recognition.
Feng Wu (M’99–SM’06–F’13) received the
B.S. degree in electrical engineering from Xidian
University, in 1992, and the M.S. and Ph.D. degrees
in computer science from the Harbin Institute
of Technology, in 1996 and 1999, respectively.
He was a Principal Researcher and a Research
Manager with Microsoft Research Asia. He is
currently a Professor with the School of Information
Science and Technology, University of Science
and Technology of China. He has authored
or co-authored over 200 high-quality papers (including 50+ IEEE TRANSACTIONS papers) and top conference papers
on MOBICOM, SIGIR, CVPR, and ACM MM. He has 77 granted
U.S. patents. His 15 techniques have been adopted into international
video coding standards. His research interests include image and video
compression, media communication, and media analysis and synthesis.
As a co-author, he received the best paper award of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY in 2009, PCM 2008, and SPIE VCIP 2007. He serves as an Associate Editor of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, the IEEE TRANSACTIONS ON MULTIMEDIA, and several other international journals. He received the IEEE Circuits and Systems Society 2012 Best Associate Editor Award. He also served as the TPC Chair of MMSP 2011, VCIP 2010, and PCM 2009, a TPC Area Chair of ICIP 2013 and ICIP 2012, a TPC Track Chair of ICME 2013, ICME 2012, ICME 2011, and ICME 2009, and the Special Sessions Chair of ICME 2010 and ISCAS 2013.
Xiaoyan Sun (M’04–SM’10) received the
B.S., M.S., and Ph.D. degrees in computer science
from the Harbin Institute of Technology, Harbin,
China, in 1997, 1999, and 2003, respectively.
Since 2004, she has been with Microsoft Research
Asia, Beijing, China, where she is currently a lead
Researcher with the Internet Media Group. She
has authored or co-authored over 60 journal and
conference papers and ten proposals to standards.
Her current research interests include image and
video compression, image processing, computer
vision, and cloud computing. She was a recipient of the best paper award
of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY in 2009.