
In the name of god

Autoencoders

Mostafa Heidarpour


Autoencoders

  • An auto-encoder is an artificial neural network used for learning efficient codings.

  • The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data.

  • This means it is typically used for dimensionality reduction.


Autoencoders

  • Auto-encoders use three or more layers:

    • An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.

    • A number of considerably smaller hidden layers, which will form the encoding.

    • An output layer, where each neuron has the same meaning as in the input layer.


Autoencoders

  • Encoder

    computes h from x, where h is the feature vector, representation, or code (a standard form is sketched after this list)

  • Decoder

    maps from feature space back into input space, producing a reconstruction

    training attempts to incur the lowest possible reconstruction error

    Good generalization means low reconstruction error on test examples, while having high reconstruction error for most other configurations of x
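
The equations for this encoder/decoder pair are not reproduced in the transcript; a standard affine-plus-nonlinearity form, with weight matrices W and W', bias vectors b and b', and elementwise nonlinearities s and s' (this notation is an assumption, not taken from the slide), is:

    h = f_\theta(x) = s(Wx + b)              (encoder: code h computed from the input x)
    \hat{x} = g_\theta(h) = s'(W'h + b')     (decoder: reconstruction of x from the code h)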


Autoencoders

  • In summary, basic autoencoder training consists in finding a value of the parameter vector that minimizes the reconstruction error (a standard form of the objective is written out below)

  • This minimization is usually carried out by stochastic gradient descent
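
The minimized criterion is likewise not reproduced in the transcript; written out in the notation above (again an assumption), with training examples x^{(t)} and a per-example reconstruction loss L such as squared error, it is:

    J_{AE}(\theta) = \sum_t L\big( x^{(t)},\, g_\theta( f_\theta( x^{(t)} ) ) \big)

A stochastic gradient descent step then updates \theta \leftarrow \theta - \eta \, \nabla_\theta L\big( x^{(t)},\, g_\theta( f_\theta( x^{(t)} ) ) \big) for one example (or a small mini-batch) at a time.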


Regularized autoencoders

To capture the structure of the data-generating distribution, it is important that something in the training criterion or the parameterization prevents the autoencoder from learning the identity function, which has zero reconstruction error everywhere. This is achieved through various means in the different forms of autoencoders; we call these regularized autoencoders.


Autoencoders

  • Denoising auto-encoders (DAE)

    • learn to reconstruct the clean input from a corrupted version (a training sketch in MATLAB follows this list)

  • Contractive auto-encoders (CAE)

    • robustness to small perturbations around the training points

    • reduce the number of effective degrees of freedom of the representation (around each point)

    • achieved by making the derivative of the encoder small (saturating the hidden units)

  • Sparse autoencoders

    • sparsity in the representation can be achieved by penalizing the hidden-unit biases or by directly penalizing the output of the hidden-unit activations
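
A minimal denoising-auto-encoder sketch in MATLAB, in the same fitnet style as the examples on the following slides; the toy data, noise level, and hidden size are illustrative assumptions rather than values from the slides:

    % Denoising auto-encoder sketch: reconstruct the clean input from a corrupted version.
    X      = rand(20, 500);              % toy data: 500 examples with 20 features (columns are examples)
    Xnoisy = X + 0.1*randn(size(X));     % corrupted version of the inputs (small Gaussian noise)
    net    = fitnet(5);                  % one hidden layer of 5 units holds the representation
    net    = train(net, Xnoisy, X);      % inputs are the corrupted data, targets are the clean data
    Xrec   = net(Xnoisy);                % reconstructions computed from the corrupted inputs
    perf   = mse(net, X, Xrec)           % reconstruction error measured against the clean data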


  • Example

    Input       Output
    10000000    10000000
    01000000    01000000
    00100000    00100000
    00010000    00010000
    00001000    00001000
    00000100    00000100
    00000010    00000010
    00000001    00000001

    (diagram: inputs → hidden nodes → outputs)


    Example

    • net=fitnet([3]);
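
    Filled out into a runnable script (only the net=fitnet([3]) line comes from the slide; the rest is an assumed completion), the eight-pattern example reads:

      % 8-3-8 example: reproduce eight one-hot patterns through 3 hidden units.
      X   = eye(8);                            % the eight one-hot input patterns (columns are examples)
      net = fitnet([3]);                       % one hidden layer with 3 units, as on the slide
      net = train(net, X, X);                  % targets equal the inputs, so the network must auto-encode them
      Y   = net(X);                            % reconstructions; ideally close to the identity matrix
      H   = tansig(net.IW{1,1}*X + net.b{1});  % hidden-layer activations: the learned 3-dimensional code

    Each column of H is the compressed code for one input pattern, which is exactly the encoding described on the earlier slides.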


    Example

    • net=fitnet([8 3 8]);
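
    With fitnet([8 3 8]) the network instead has three hidden layers of 8, 3, and 8 units, i.e. a deeper encoder and decoder around the same 3-unit bottleneck; the training call is unchanged (this line is an assumed completion, the slide shows only the constructor):

      net = train(net, eye(8), eye(8));   % same identity-mapping targets as in the previous example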




    Introduction

    • The auto-encoder network has not previously been utilized for clustering tasks

    • To make it suitable for clustering, a new objective function is proposed and embedded into the auto-encoder model


    Proposed Model

    • Consider a one-layer auto-encoder network as an example (minimizing the reconstruction error)

    • Embedded objective function (an illustrative form is sketched below):
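
    The embedded objective is not reproduced in the transcript. One illustrative way to embed a clustering criterion into the reconstruction objective, shown only as a sketch and not necessarily the authors' exact formulation, is to add a term that pulls each code toward its assigned cluster center c_{\pi(i)}, where \pi(i) indexes the nearest center and \lambda is a trade-off weight (both assumptions of this sketch):

      \min_{\theta,\{c_k\}} \; \sum_i \| x_i - g_\theta( f_\theta(x_i) ) \|^2 \; + \; \lambda \sum_i \| f_\theta(x_i) - c_{\pi(i)} \|^2

    The first term is the usual reconstruction error; the second makes the learned representations compact around cluster centers, which is what makes the model usable for clustering.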


    Proposed Algorithm


    Experiments

    • All algorithms are tested on 3 databases:

      • MNIST contains 60,000 handwritten digit images (0–9) with a resolution of 28 × 28.

      • USPS consists of 4,649 handwritten digit images (0–9) with a resolution of 16 × 16.

      • YaleB is composed of 5,850 face images over ten categories, and each image has 1,200 pixels.

    • Model: a four-layer auto-encoder network with the structure 1000-250-50-10.


    Experiments

    • Baseline algorithms: compared with three classic and widely used clustering algorithms

      • K-means

      • Spectral clustering

      • N-cut

  • Evaluation criteria (common definitions are sketched after this list)

    • Accuracy (ACC)

    • Normalized mutual information (NMI)
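
    Commonly used definitions of these two criteria (the slides do not state which exact variants are used, so these are assumptions) are:

      ACC = \max_m \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{ y_i = m(c_i) \}

    where c_i is the predicted cluster of sample i, y_i is its ground-truth label, and m ranges over one-to-one mappings between clusters and labels, and

      NMI(Y, C) = \frac{2 \, I(Y;C)}{H(Y) + H(C)}

    where I is mutual information and H is entropy. Both criteria lie in [0, 1], with higher values indicating better clustering.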


    Quantitative Results


    Visualization


    Difference of Spaces


    Thanks for your attention

    Any questions?

