Texture

Chen Yu
Indiana University
Adapted from Efros
What is texture?
• Something that repeats with variation.
• Must separate what repeats from what varies.
• Model texture as repeated trials of a random process.
– The probability distribution stays the same.
– But each trial is different.

Texture Discrimination
Julesz
• Two texture images will be perceived by human observers to be the same if some appropriate statistics of these images match.
• Two key tasks in texture computation:
– Picking the right set of statistics to match.
– Finding an algorithm that matches them.
Texture Computation
• Texture analysis.
• Texture segmentation: segmenting an image into regions of constant texture.
• Texture synthesis: constructing large regions of texture from small example images.
• Key issue: representing texture.
Texture Segmentation

Texture Synthesis
• Let's define texture as visual patterns on a 2-D plane which at some scale have a stationary distribution.
• Given a finite sample from some texture (an image), the goal is to synthesize other samples from the same texture.
Texture Synthesis (Efros & Leung, '99)

The Challenge
• Texture analysis: how to capture the essence of texture?
• Need to model the whole spectrum: from repeated to stochastic texture (without explicit patterns).
• Almost all real-world textures lie somewhere in between.
• This problem is at the intersection of vision, graphics, statistics, and image compression.
[Figure: example textures ranging from repeated to stochastic, and some that are both]
Motivation from Language
• [Shannon, '48] proposed a way to generate English-looking text using N-grams:
– Assume a generalized Markov model.
– Use a large text to compute probability distributions of each letter given the N-1 previous letters.
• Precompute, or sample randomly.
– Starting from a seed, repeatedly sample this Markov chain to generate new letters.
– One can use whole words instead of letters too (a minimal word-level sketch follows below).
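To make the procedure concrete, here is a minimal word-level bigram sampler in Python. The toy corpus and all names are illustrative assumptions, not from the slides:

```python
import random
from collections import defaultdict

def train_bigrams(words):
    """Count successors: table[w][v] = number of times v follows w."""
    table = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, seed, length=10):
    """Starting from a seed, repeatedly sample the Markov chain."""
    out = [seed]
    for _ in range(length - 1):
        succ = table[out[-1]]
        if not succ:                  # dead end: no observed successor
            break
        words = list(succ)
        out.append(random.choices(words, weights=[succ[w] for w in words])[0])
    return " ".join(out)

# Toy corpus (hypothetical); a large text would be used in practice.
corpus = "i want to eat british food i want to spend i would like thai food".split()
print(generate(train_bigrams(corpus), seed="i"))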
What do we learn about the language?
The Berkeley Restaurant Project (BeRP) bigram probabilities:

<start> I     .25     I want     .32     Want to      .65
<start> I'd   .06     I would    .29     Want a       .05
<start> Tell  .04     I don't    .08     Want some    .04
<start> I'm   .02     I have     .04     Want Thai    .01

To eat    .26     British food        .60
To have   .14     British restaurant  .15
To spend  .09     British cuisine     .01
To be     .02     British lunch       .01
• What's being captured with ...
– P(want | I) = .32
– P(to | want) = .65
– P(eat | to) = .26
– P(food | Chinese) = .56
– P(lunch | eat) = .055
• What about ...
– P(I | I) = .0023
– P(I | want) = .0025
– P(I | food) = .013
Mark V. Shaney (Bell Labs, featured in Scientific American)
– P(I | I) = .0023: "I I I I want"
– P(I | want) = .0025: "I want I want"
– P(I | food) = .013: "the kind of food I want is ..."
• Results:
– "As I've commented before, really relating to someone involves standing next to impossible."
– "One morning I shot an elephant in my arms and kissed him."
– "I spent an interesting evening recently with a grain of salt."
• Notice how well local structure is preserved!
– Now let's try this in 2D ...
Main issues
• How to define a unit of synthesis and its context (n-gram) for texture.
• How to construct a probability distribution.
• How to linearize the synthesis process in 2D.
General Idea
• The algorithm "grows" texture, pixel by pixel, outwards from an initial seed.
• All previously synthesized pixels in a square window around the current point p are used as the context.
• Probability tables for the distribution of p given all possible contexts would need to be built: P(p | N(p))?
Synthesizing One Pixel
[Figure: SAMPLE from an infinite sample image to the generated image, with pixel p and its neighbourhood window]
– Assuming the Markov property, what is the conditional probability distribution of p, given the neighbourhood window?
– Instead of constructing a model, let's directly search the input image for all such neighbourhoods to produce a histogram for p.
– To synthesize p, just pick one match at random.

Really Synthesizing One Pixel
[Figure: SAMPLE from a finite sample image to the generated image]
– However, since our sample image is finite, an exact neighbourhood match might not be present.
– So we find the best match using SSD error (sum of squared differences, weighted by a Gaussian to emphasize local structure), and take all samples within some distance from that match.

Randomness Parameter
[Figure: non-parametric sampling of pixel p from the input image, for several settings of the randomness parameter]
Synthesizing a pixel
• Growing is in "onion skin" order.
– Within each "layer", pixels with the most neighbors are synthesized first.
– If no close match can be found, the pixel is not synthesized until the end.
• Using Gaussian-weighted SSD is very important.
– To make sure the new pixel agrees with its closest neighbors (see the sketch below).
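A minimal sketch of the per-pixel step, assuming a grayscale image. The function name, the error band eps = 0.1, and the exact Gaussian width are illustrative choices (the paper ties the Gaussian width to the window size):

```python
import numpy as np

def synthesize_pixel(sample, window, mask, eps=0.1):
    """Sample a value for the center pixel of `window`, Efros-Leung style.

    sample : 2-D array, the input texture.
    window : (w, w) neighbourhood around the pixel being synthesized.
    mask   : (w, w) booleans, True where the neighbourhood is already known.
    """
    sample = np.asarray(sample, dtype=float)
    window = np.asarray(window, dtype=float)
    w = window.shape[0]
    sigma = w / 6.4                       # Gaussian width tied to window size (assumed)
    ax = np.arange(w) - w // 2
    gauss = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    weights = gauss * mask                # weight only the already-known pixels
    weights /= weights.sum()

    # Brute-force search: Gaussian-weighted SSD against every window in the sample.
    H, W = sample.shape
    errs, centers = [], []
    for i in range(H - w + 1):
        for j in range(W - w + 1):
            patch = sample[i:i+w, j:j+w]
            errs.append(np.sum(weights * (patch - window)**2))
            centers.append(patch[w // 2, w // 2])
    errs = np.asarray(errs)

    # Keep all matches within (1 + eps) of the best; pick one at random.
    ok = errs <= errs.min() * (1 + eps)
    return np.random.choice(np.asarray(centers)[ok])
```

The (1 + eps) band is the randomness parameter from the slide above: eps = 0 always copies the single best match, while a larger eps admits more variation.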
More Synthesis Results
[Figure: results with increasing window size]

Constrained Synthesis

Visual Comparison
[Figure: synthetic tileable texture; DeBonet, '97 vs. Efros & Leung, '99]

Text Synthesis
Image Extrapolation

Summary
• The algorithm:
– Very simple.
– Surprisingly good results.
– Synthesis is easier than analysis!
– But very slow: a full search over the whole sample image for every single synthesized pixel.
• Garber (1981) proposed an almost identical algorithm but couldn't implement it.
An observation
[Figure: input texture with a block outlined]
• During the synthesis process, most pixels have their values totally determined by what has been synthesized so far.
• A lot of searching work is wasted on pixels that already know their fate.
• Image Quilting: a new method that stitches together patches of texture from the input image.
[Figure: blocks B1, B2 placed three ways: random placement of blocks; neighboring blocks constrained by overlap; minimal error boundary cut]
The Philosophy
• The "Corrupt Professor's Algorithm":
– Plagiarize as much of the source image as you can.
– Then try to cover up the evidence.
• Rationale:
– Texture blocks are by definition correct samples of texture, so the problem is only connecting them together.

Algorithm
– Pick the size of the block and the size of the overlap.
– Synthesize blocks in raster order.
– Search the input texture for a block that satisfies the overlap constraints (above and left); a sketch of this step follows below.
– Paste the new block into the resulting texture.
• Use dynamic programming to compute the minimal error boundary cut.
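A sketch of the block-selection step under the overlap constraints, for a grayscale float texture. The function name and the tolerance tol are assumptions; the boundary cut itself is the dynamic program described on the next slides:

```python
import numpy as np

def pick_block(source, out, y, x, B, ov, tol=0.1):
    """Choose a B x B block from `source` whose overlap strips (width `ov`,
    above and to the left of position (y, x) in `out`) have low SSD error."""
    source = np.asarray(source, dtype=float)
    H, W = source.shape
    errs, coords = [], []
    for i in range(H - B + 1):
        for j in range(W - B + 1):
            cand = source[i:i+B, j:j+B]
            e = 0.0
            if x > 0:    # left overlap constraint
                e += np.sum((cand[:, :ov] - out[y:y+B, x:x+ov])**2)
            if y > 0:    # top overlap constraint
                e += np.sum((cand[:ov, :] - out[y:y+ov, x:x+B])**2)
            errs.append(e)
            coords.append((i, j))
    errs = np.asarray(errs)
    ok = np.flatnonzero(errs <= errs.min() * (1 + tol))   # near-best blocks
    i, j = coords[np.random.choice(ok)]
    return source[i:i+B, j:j+B]
```

Blocks are pasted in raster order at stride B - ov; within each overlap strip, the minimal error boundary cut decides which side each pixel comes from.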
Minimal error boundary
[Figure: overlapping blocks; overlap error; vertical boundary; min. error boundary]

Dynamic Programming
[Figure: trellis with per-edge costs. From S to F, 4 steps, two states in each step; the cheapest path gives the minimal error boundary.]
Dynamic Programming
Principle of Optimality (Bellman, 1957)
• From any point on an optimal trajectory, the remaining trajectory is optimal for the corresponding problem initiated at that point.
[Figure: two-state trellis with edge costs]
• At each step we need to make a decision: up or down.
• The basic idea of the principle of optimality is that we proceed backward: at each step, we find the best path from the current state to the destination.
• At each step, we go back one more step and solve that sub-problem based on the previous solutions.
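A minimal backward-induction sketch for such a trellis. The terminal costs 4 and 3 are the J(x1(3), 3) and J(x2(3), 3) values from the steps that follow; the transition costs in D are placeholders, since the slide's edge weights do not survive extraction:

```python
def backward_dp(terminal, D):
    """Bellman recursion J(x_i(t), t) = min_j [ D(x_i(t), x_j(t+1)) + J(x_j(t+1), t+1) ].

    terminal   : costs from the last-step states to F.
    D[t][i][j] : cost of moving from state i at step t to state j at step t+1.
    Returns the cost-to-go tables J[t][i], computed backward in time.
    """
    T = len(D)
    J = [None] * (T + 1)
    J[T] = list(terminal)
    for t in range(T - 1, -1, -1):        # proceed backward from the destination
        J[t] = [min(D[t][i][j] + J[t + 1][j] for j in range(len(J[t + 1])))
                for i in range(len(D[t]))]
    return J

# Two states per step; placeholder transition costs.
J = backward_dp(terminal=[4, 3], D=[[[2, 6], [1, 4]], [[4, 6], [3, 5]]])
print(J[0])   # optimal cost-to-go from each state at the first step
```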
Step 1 (t = 3)
[Figure: trellis with states X1(t), X2(t) and edge costs]
J(x1(3), 3) = 4
J(x2(3), 3) = 3

• Any intermediate point in the optimal path must be the optimal point linking the optimal partial paths before and after that point.
– Problem 1: from 0 to T
– Problem 2: from t1 to T
– Problem 3: from 0 to t1

Step 2
J(x1(2), 2) = min{ J(x1(3), 3) + D(x1(2), x1(3)),  J(x2(3), 3) + D(x1(2), x2(3)) }
J(x2(2), 2) = min{ J(x1(3), 3) + D(x2(2), x1(3)),  J(x2(3), 3) + D(x2(2), x2(3)) }
Step 3
J(x1(1), 1) = min{ J(x1(2), 2) + D(x1(1), x1(2)),  J(x2(2), 2) + D(x1(1), x2(2)) }
J(x2(1), 1) = min{ J(x1(2), 2) + D(x2(1), x1(2)),  J(x2(2), 2) + D(x2(1), x2(2)) }

Step 4
[Figure: accumulated costs propagated back to the start state s; the optimal path from s to F can now be read off]
More general case
[Figure: trellis with additional states and edge costs from s to F]
• The same recursion applies at every step:
J(xi(t), t) = min over j of { J(xj(t+1), t+1) + D(xi(t), xj(t+1)) }
Minimal error boundary
[Figure: overlapping blocks; overlap error; vertical boundary; min. error boundary]
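Putting the recursion to work: a sketch of the minimal-error vertical boundary cut through an overlap-error surface, with the path allowed to move at most one column per row (function and variable names are assumptions):

```python
import numpy as np

def min_cut_vertical(err):
    """Minimal-error vertical boundary through `err`, the (H, ov) array of
    squared differences between the two blocks over the overlap region.
    Returns one column index per row: pixels left of the cut keep the old
    block, pixels right of it take the new block."""
    H, ov = err.shape
    J = err.astype(float)                 # J[r, c] = cheapest path cost from row 0 to (r, c)
    for r in range(1, H):
        for c in range(ov):
            lo, hi = max(c - 1, 0), min(c + 2, ov)
            J[r, c] += J[r - 1, lo:hi].min()
    cut = np.empty(H, dtype=int)
    cut[-1] = int(J[-1].argmin())         # cheapest endpoint in the last row
    for r in range(H - 2, -1, -1):        # backtrack the optimal trajectory
        c = cut[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, ov)
        cut[r] = lo + int(J[r, lo:hi].argmin())
    return cut
```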
Texture Analysis -- Simplest Texture
• Each pixel independent, identically distributed (iid).
• Examples (see the sketch below):
– Region of constant intensity.
– Gaussian noise pattern.
– Speckled pattern.

Texture Discrimination is then Statistics
• Two sets of samples.
• Do they come from the same random process?
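The three iid examples from the Simplest Texture slide are easy to generate; the parameter values here (means, noise level, speckle density) are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
constant = np.full((64, 64), 128.0)                          # region of constant intensity
noise    = 128 + 20 * rng.standard_normal((64, 64))          # iid Gaussian noise pattern
speckle  = np.where(rng.random((64, 64)) < 0.1, 255.0, 0.0)  # sparse speckled pattern
```

Each pixel is drawn independently from the same distribution, so any two patches from the same image are samples of the same random process.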
Simplest Texture Discrimination
• Compare histograms.
– Divide intensities into discrete ranges.
– Count how many pixels fall in each range.
[Figure: intensity histograms over bins 0-25, 26-50, 51-75, 76-100, ..., 225-250]

Comparing histograms: Chi-square
chi-square(i, j) = sum over bins k of [hi(k) - hj(k)]^2 / [hi(k) + hj(k)]
[Figure: two histograms i and j compared bin by bin, e.g. values 0.1 vs 0.8 in a bin k]
How/why to compare
• Simplest comparison is SSD; there are many others.
• Can view probabilistically.
– A histogram is a set of samples from a probability distribution.
– With many samples it approximates the distribution.
– Test whether the samples are drawn from the same distribution, i.e., is the difference greater than expected when two samples come from the same distribution? (A sketch of both distances follows below.)
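A sketch of both comparisons, assuming grayscale images and equal-width intensity bins (the bin count and names are illustrative):

```python
import numpy as np

def intensity_histogram(img, bins=10):
    """Normalized histogram, so regions of different size are comparable."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def ssd(h1, h2):
    """Simplest comparison: sum of squared differences between the bins."""
    return float(np.sum((h1 - h2)**2))

def chi_square(h1, h2):
    """Chi-square distance; bins empty in both histograms are skipped."""
    s = h1 + h2
    nz = s > 0
    return float(np.sum((h1[nz] - h2[nz])**2 / s[nz]))
```

A chi-square value that is large relative to what same-texture patches produce suggests the two samples come from different random processes.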
More Complex Discrimination
• Histogram comparison is very limiting.
– Every pixel is treated as independent.
– Everything happens at a tiny scale.
• Use the output of filters.

Spots and Oriented Bars (Malik and Perona)
Filters with Different Scales
• Based on the pixels with large magnitudes in a particular filter response, we can determine the presence of strong edges of a certain orientation. We can also find spot patterns from the responses of the first two filters.
• Filtering can be performed at different scales to find patterns of different sizes. Here, the responses for the low-resolution version of the original image are shown.
Texture Matching
• What filters?
– Spots and oriented bars at a variety of different scales.
– Details probably don't matter.
• What statistics?
– Mean, standard deviation, various histograms (see the sketch below).
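A sketch of such a descriptor using SciPy's Gaussian-derivative filters as stand-ins for the spot and oriented-bar filters (the exact filter bank, the scales, and the feature layout are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def texture_descriptor(img, scales=(1, 2, 4)):
    """Mean and standard deviation of filter responses at several scales:
    a spot filter (Laplacian of Gaussian) plus two oriented
    derivative-of-Gaussian 'bars' per scale."""
    img = np.asarray(img, dtype=float)
    feats = []
    for s in scales:
        responses = (
            gaussian_laplace(img, sigma=s),               # spot
            gaussian_filter(img, sigma=s, order=(0, 1)),  # vertical bar (x-derivative)
            gaussian_filter(img, sigma=s, order=(1, 0)),  # horizontal bar (y-derivative)
        )
        for r in responses:
            feats += [r.mean(), r.std()]
    return np.asarray(feats)
```

Two texture patches can then be matched by comparing these statistics, e.g. with the SSD or chi-square distances above.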
More examples