
Deep Belief Networks and Restricted Boltzmann Machines


Presentation Transcript


  1. Deep Belief Networks and Restricted Boltzmann Machines

  2. Restricted Boltzmann Machines • Two layers of units: visible and hidden • Each visible unit is connected to every hidden unit • Undirected bipartite graph (no connections within a layer)

  3. UNIT STATE ACTIVATION • RBMs work by updating the states of some units given the states of others • Compute the activation energy of the unit • Then apply the logistic function to that energy to decide whether the unit activates

  4. The probability p_i will be close to 1 for large positive activation energies and close to 0 for negative ones • The same rule is applied in the other direction: once the hidden unit states have been updated, they are used to update the visible units
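The two update rules above can be sketched in NumPy. This is a minimal illustration, not the slides' code: the function and variable names (`sample_hidden`, `sample_visible`, `b_hid`, `b_vis`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hidden(v, W, b_hid):
    """Activation energy of each hidden unit, then logistic probability."""
    energy = v @ W + b_hid             # activation energies
    p = 1.0 / (1.0 + np.exp(-energy))  # p -> 1 for positive energy, -> 0 for negative
    states = (rng.random(p.shape) < p).astype(float)
    return p, states

def sample_visible(h, W, b_vis):
    """Same rule in the other direction: hidden states drive visible units."""
    energy = h @ W.T + b_vis
    p = 1.0 / (1.0 + np.exp(-energy))
    states = (rng.random(p.shape) < p).astype(float)
    return p, states
```

With zero weights and biases the activation energy is 0, so each unit turns on with probability exactly 0.5, as the logistic rule predicts.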

  5. Learning weights • Take a training example and set the states of the visible units to it • Update the hidden unit states using the logistic activation rule • For each pair of units, measure whether they are both on: Positive(e_ij) • Reconstruct the visible units using the logistic activation rule • Update the hidden units again • For each pair of units, measure whether they are both on in the reconstruction: Negative(e_ij) • Update all the weights • Repeat until the error between the examples and their reconstructions falls below a threshold, or a maximum number of iterations is reached • Adding Positive(e_ij) − Negative(e_ij) pushes the network toward the statistics of the training data • If that difference is zero, the weight already has its desired value
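The learning procedure above is one step of contrastive divergence (CD-1). A minimal sketch, with biases omitted for brevity and all names (`cd1_update`, `lr`) chosen for the example rather than taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, lr=0.1):
    """One CD-1 weight update for a single training example (no bias terms)."""
    # Positive phase: hidden probabilities and sampled states from the data.
    ph0 = logistic(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    positive = np.outer(v0, ph0)       # Positive(e_ij): both-on statistics, data
    # Reconstruction: visible from hidden, then hidden probabilities again.
    pv1 = logistic(h0 @ W.T)
    ph1 = logistic(pv1 @ W)
    negative = np.outer(pv1, ph1)      # Negative(e_ij): both-on statistics, model
    # When Positive - Negative is zero, the weights already fit the data.
    W += lr * (positive - negative)
    error = np.sum((v0 - pv1) ** 2)    # reconstruction error for the stop check
    return W, error
```

Repeating `cd1_update` over the training set until the returned error drops below a threshold implements the loop described on the slide.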

  6. Deep Belief Networks • These are built by stacking RBMs and training them greedily, one layer at a time • A DBN is a graphical model that learns to extract a deep hierarchical representation of the data
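The greedy stacking idea can be sketched as follows: train one RBM, feed its hidden-unit probabilities forward as the "visible" data for the next RBM, and repeat. This is an illustrative toy (the helper names `train_rbm` and `train_dbn` and the tiny CD-1 loop are assumptions, not the slides' implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=5):
    """Tiny CD-1 trainer; returns the weight matrix of one RBM layer."""
    W = 0.01 * rng.standard_normal((data.shape[1], n_hidden))
    for _ in range(epochs):
        for v0 in data:
            ph0 = logistic(v0 @ W)
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            pv1 = logistic(h0 @ W.T)
            ph1 = logistic(pv1 @ W)
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    return W

def train_dbn(data, layer_sizes):
    """Greedy layer-wise training: each RBM's hidden probabilities
    become the input data for the next RBM in the stack."""
    weights, layer_input = [], data
    for n_hidden in layer_sizes:
        W = train_rbm(layer_input, n_hidden)
        weights.append(W)
        layer_input = logistic(layer_input @ W)  # deeper representation
    return weights
```

Each layer is trained in isolation on the representation produced by the layer below, which is what makes the procedure greedy.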
