auto-encoder

In the name of God
Autoencoders
Mostafa Heidarpour
1
Autoencoders
• An auto-encoder is an artificial neural
network used for learning efficient codings
• The aim of an auto-encoder is to learn a
compressed representation (encoding) for a
set of data
• This means it is used for dimensionality reduction
2
Autoencoders
• Auto-encoders use three or more layers:
– An input layer. For example, in a face recognition
task, the neurons in the input layer could map to
pixels in the photograph.
– A number of considerably smaller hidden layers,
which will form the encoding.
– An output layer, where each neuron has the same
meaning as in the input layer.
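As a rough sketch of this layer structure (the sizes below are illustrative assumptions, not taken from the slides), the encoder and decoder parameters of such a network can be set up as follows:

    % Illustrative three-layer shape: d input units, a much smaller code of k units,
    % and d output units with the same meaning as the inputs (sizes are assumptions).
    d = 64;                                   % e.g. an 8x8 image flattened to 64 pixels
    k = 16;                                   % considerably smaller hidden (code) layer
    W_enc = 0.01*randn(k, d);  b_enc = zeros(k, 1);   % input  -> hidden
    W_dec = 0.01*randn(d, k);  b_dec = zeros(d, 1);   % hidden -> output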
3
Autoencoders
4
Autoencoders
• Encoder: h = f(x), where h is the feature vector (representation, or code) computed from the input x
• Decoder: r = g(h) maps from feature space back into the input space, producing a reconstruction r and attempting to incur the lowest possible reconstruction error L(x, r)
• Good generalization means low reconstruction error on test examples, while having high reconstruction error for most other configurations of x
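A minimal sketch of one encoding/decoding pass (sigmoid units, squared error, and the sizes are assumptions; the slides do not fix these choices):

    % One encode/decode pass (sigmoid units and squared error are assumptions).
    sigm = @(z) 1./(1 + exp(-z));
    d = 64;  k = 16;                          % illustrative sizes
    W = 0.01*randn(k, d);  bh = zeros(k, 1);  % encoder parameters
    V = 0.01*randn(d, k);  br = zeros(d, 1);  % decoder parameters
    x = rand(d, 1);                           % one input example
    h = sigm(W*x + bh);                       % encoder: code h = f(x)
    r = sigm(V*h + br);                       % decoder: reconstruction r = g(h)
    err = sum((x - r).^2);                    % reconstruction error L(x, r)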
5
Autoencoders
6
Autoencoders
7
Autoencoders
• In summary, basic autoencoder training consists in finding a value of the parameter vector θ that minimizes the reconstruction error over the training examples, J(θ) = Σ_t L(x_t, g(f(x_t)))
• This minimization is usually carried out by stochastic gradient descent
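A minimal stochastic-gradient-descent loop for this training criterion could look like the following (sigmoid units, squared error, untied weights, and the sizes and learning rate are all assumptions; this is a sketch, not the implementation behind the slides):

    % Basic auto-encoder training by stochastic gradient descent (sketch).
    sigm = @(z) 1./(1 + exp(-z));
    d = 8;  k = 3;  lr = 0.1;  epochs = 2000;
    X = rand(d, 50);                          % 50 illustrative training examples (columns)
    W = 0.1*randn(k, d);  bh = zeros(k, 1);   % encoder parameters
    V = 0.1*randn(d, k);  br = zeros(d, 1);   % decoder parameters
    for epoch = 1:epochs
        for t = randperm(size(X, 2))          % one SGD step per training example
            x  = X(:, t);
            h  = sigm(W*x + bh);              % forward: code
            r  = sigm(V*h + br);              % forward: reconstruction
            dr = (r - x) .* r .* (1 - r);     % backprop: output delta (squared error)
            dh = (V' * dr) .* h .* (1 - h);   % backprop: hidden delta
            V  = V - lr * dr * h';   br = br - lr * dr;   % decoder update
            W  = W - lr * dh * x';   bh = bh - lr * dh;   % encoder update
        end
    end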
8
Regularized Autoencoders
To capture the structure of the data-generating distribution, it is therefore important that something in the training criterion or the parameterization prevents the autoencoder from learning the identity function, which has zero reconstruction error everywhere. This is achieved through various means in the different forms of autoencoders; we call these regularized autoencoders.
9
Autoencoders
• Denoising Auto-encoders (DAE)
• learning to reconstruct the clean input from a corrupted
version.
• Contractive auto-encoders (CAE)
• robustness to small perturbations around the training points
• reduce the number of effective degrees of freedom of the
representation (around each point)
• making the derivatives of the encoder small (saturating the hidden units)
• Sparse Autoencoders
• Sparsity in the representation can be achieved by penalizing the hidden-unit biases or by directly penalizing the hidden-unit activations
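For the denoising variant, the only change to the basic pass is that the encoder sees a corrupted input while the error is measured against the clean one. A sketch (the zero-masking corruption, sigmoid units, squared error, and sizes are assumptions):

    % Denoising sketch: reconstruct the clean x from a corrupted version x_tilde.
    sigm = @(z) 1./(1 + exp(-z));
    d = 8;  k = 3;
    W = 0.1*randn(k, d);  bh = zeros(k, 1);
    V = 0.1*randn(d, k);  br = zeros(d, 1);
    x       = rand(d, 1);                     % clean input
    mask    = rand(d, 1) > 0.3;               % zero-masking noise: drop ~30% of the inputs
    x_tilde = x .* mask;                      % corrupted version fed to the encoder
    h       = sigm(W*x_tilde + bh);           % code computed from the corrupted input
    r       = sigm(V*h + br);                 % reconstruction
    err     = sum((x - r).^2);                % error is measured against the clean input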
10
Example
10000000
01000000
00100000
00010000
00001000
00000100
00000010
00000001
11
(Figure: network diagram — the eight one-hot patterns at the input are reproduced at the output through a small layer of hidden nodes. Labels: Input, Output, Hidden nodes.)
Example
• net=fitnet([3]);   % fitting network with one hidden layer of 3 neurons
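A hedged sketch of training this network on the eight one-hot patterns (three hidden neurons are enough in principle, since log2(8) = 3; the data-division option below is an illustrative choice, not taken from the slides):

    % Sketch: train the 8-3-8 example so that each one-hot pattern reproduces itself.
    X = eye(8);                      % the eight one-hot patterns, one per column
    T = X;                           % targets equal the inputs
    net = fitnet([3]);               % one hidden layer of 3 neurons (log2(8) = 3)
    net.divideFcn = 'dividetrain';   % use all 8 patterns for training (illustrative choice)
    net = train(net, X, T);
    Y = net(X);                      % reconstructions; compare column-wise against X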
12
Example
• net=fitnet([8 3 8]);   % three hidden layers of 8, 3, and 8 neurons
13
Example
14
15
Introduction
• The auto-encoder network has traditionally not been utilized for clustering tasks
• To make it suitable for clustering, a new objective function is proposed and embedded into the auto-encoder model
16
Proposed Model
17
Proposed Model
• Consider a one-layer auto-encoder network as an example (minimizing the reconstruction error)
• An objective function for clustering is embedded into this reconstruction criterion, as sketched below:
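A typical form for such an embedded clustering objective combines the reconstruction error with a penalty that pulls each code toward its nearest cluster center; the sketch below illustrates this form (the weight lambda, the centers C, and the exact penalty are illustrative assumptions, not taken from the slides):

    % Sketch of an embedded clustering objective for one example (form, lambda,
    % and the centers C are illustrative assumptions, not read from the slides):
    sigm = @(z) 1./(1 + exp(-z));
    d = 8;  k = 3;  K = 2;  lambda = 0.1;
    W = 0.1*randn(k, d);  bh = zeros(k, 1);   % one-layer encoder
    V = 0.1*randn(d, k);  br = zeros(d, 1);   % decoder
    C = rand(k, K);                           % K cluster centers in code space
    x = rand(d, 1);
    h = sigm(W*x + bh);                       % code
    r = sigm(V*h + br);                       % reconstruction
    dist2 = sum((repmat(h, 1, K) - C).^2, 1); % squared distance of the code to every center
    loss  = sum((x - r).^2) + lambda*min(dist2);  % reconstruction + pull toward nearest center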
18
Proposed Algorithm
19
Experiments
• All algorithms are tested on three databases:
– MNIST contains 60,000 handwritten digit images (0–9) with a resolution of 28 × 28.
– USPS consists of 4,649 handwritten digit images (0–9) with a resolution of 16 × 16.
– YaleB is composed of 5,850 face images over ten categories, and each image has 1,200 pixels.
• Model: a four-layer auto-encoder network with the structure 1000-250-50-10.
20
Experiments
• Baseline Algorithms: compared with three classic and widely used clustering algorithms:
• K-means
• Spectral clustering
• N-cut
• Evaluation Criterion
• Accuracy (ACC)
• Normalized mutual information (NMI)
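As a sketch of how the NMI criterion can be computed from ground-truth labels y and cluster assignments c (both vectors of positive integer ids; the square-root normalization is one common convention and an assumption here):

    % Normalized mutual information between labels y and cluster assignments c (sketch).
    function v = nmi(y, c)
        n  = numel(y);
        P  = accumarray([y(:), c(:)], 1) / n;      % joint distribution P(y, c)
        py = sum(P, 2);  pc = sum(P, 1);           % marginals
        mi = 0;
        for i = 1:size(P, 1)
            for j = 1:size(P, 2)
                if P(i, j) > 0
                    mi = mi + P(i, j) * log(P(i, j) / (py(i) * pc(j)));
                end
            end
        end
        hy = -sum(py(py > 0) .* log(py(py > 0)));  % entropy of the true labels
        hc = -sum(pc(pc > 0) .* log(pc(pc > 0)));  % entropy of the clustering
        v  = mi / sqrt(hy * hc);                   % square-root normalization (assumed convention)
    end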
21
Quantitative Results
22
Visualization
23
Difference of Spaces
24
Thanks for your attention
Any questions?
25