Learning in Recurrent Networks
Psychology 209, February 25 & 27, 2013

Outline
- Back propagation through time
- Alternatives that can teach networks to settle to fixed points
- Learning conditional distributions
- An application: collaboration of hippocampus & cortex in learning new associations
Error at each unit is the injected error (arrows) and the back-propagated error; these are summed and scaled by the derivative of the activation function to calculate deltas.
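This delta computation can be sketched as follows (a minimal illustration, not the slides' own code; the weight matrix `W`, logistic units, and the per-step injected errors are assumptions):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def bptt_deltas(W, nets, errors):
    """Compute deltas backward through time.

    W      : recurrent weight matrix (n x n)
    nets   : list of net-input vectors, one per time step
    errors : list of injected-error vectors, one per time step
    """
    T = len(nets)
    deltas = [None] * T
    back = np.zeros(W.shape[0])            # error arriving from the future
    for t in reversed(range(T)):
        a = logistic(nets[t])
        # injected error + back-propagated error, scaled by sigma'(net)
        deltas[t] = (errors[t] + back) * a * (1.0 - a)
        back = W.T @ deltas[t]             # propagate to the previous step
    return deltas
```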
Update the net inputs (h) until they stop changing (σ(·) denotes the logistic function).
Then update the deltas (y) until they stop changing.
J represents the external error to theunit, if any.
Adjust weights using the delta rule
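The three steps above (settle the net inputs, settle the deltas, then apply the delta rule) can be sketched as follows; this is an illustrative reconstruction under assumed update equations and names (`eps`, the step counts), not code from the slides:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle_nets(W, ext, steps=100):
    """Relax net inputs until they stop changing: h <- W sigma(h) + ext."""
    h = np.zeros(len(ext))
    for _ in range(steps):
        h = W @ logistic(h) + ext
    return h

def settle_deltas(W, h, J, steps=100):
    """Relax deltas: y <- sigma'(h) * (J + W^T y), with J the external error."""
    a = logistic(h)
    y = np.zeros_like(h)
    for _ in range(steps):
        y = a * (1.0 - a) * (J + W.T @ y)
    return y

def delta_rule_update(W, y, h, eps=0.1):
    """Adjust weights with the delta rule: dw_ij = eps * y_i * a_j."""
    a = logistic(h)
    return W + eps * np.outer(y, a)
```

With small weights both relaxations are contractions, so the fixed points exist and the loops converge quickly.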
Assuming symmetric weights:
Only activation is propagated.
The time difference of activation reflects the error signal.
Maybe this is more biologically plausible than explicit backprop.
Minus phase: Present input, feed activation forward, compute output, let it feed back, let the network settle.
Plus phase: Then clamp both input and output units into the desired state, and let the network settle again.*
*Equations neglect the component of the net input at the hidden layer from the input layer.
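The two-phase procedure can be sketched as contrastive Hebbian learning for a small input→hidden↔output net (an illustrative implementation; the layer sizes, step counts, and learning rate `eps` are assumptions, and unlike the slides' equations this sketch does include the input-layer contribution to the hidden units):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def chl_step(W_ih, W_ho, inp, target, eps=0.1, steps=50):
    """One contrastive-Hebbian update.

    W_ih : input-to-hidden weights
    W_ho : hidden<->output weights (used symmetrically in both directions)
    """
    nh, no = W_ih.shape[0], W_ho.shape[0]
    # Minus phase: input clamped; hidden and output settle together.
    h, o = np.zeros(nh), np.zeros(no)
    for _ in range(steps):
        h = logistic(W_ih @ inp + W_ho.T @ o)   # feedback from output
        o = logistic(W_ho @ h)
    h_minus, o_minus = h, o
    # Plus phase: input AND output clamped to the desired state.
    o = target
    for _ in range(steps):
        h = logistic(W_ih @ inp + W_ho.T @ o)
    h_plus, o_plus = h, o
    # Contrastive rule: plus-phase coproducts minus minus-phase coproducts.
    W_ho = W_ho + eps * (np.outer(o_plus, h_plus) - np.outer(o_minus, h_minus))
    W_ih = W_ih + eps * (np.outer(h_plus, inp) - np.outer(h_minus, inp))
    return W_ih, W_ho
```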
P(a = 1) = logistic(net / T), where T is the temperature.
Δw_ij = ε (⟨a_i⁺ a_j⁺⟩ − ⟨a_i⁻ a_j⁻⟩)
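The stochastic activation rule and the phase averages ⟨a_i a_j⟩ can be sketched together via Gibbs sampling (illustrative only; the network size, temperature, sweep count, and function names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_coproducts(W, T=1.0, sweeps=500):
    """Estimate <a_i a_j> by Gibbs sampling with P(a_i = 1) = logistic(net_i / T)."""
    n = W.shape[0]
    a = rng.integers(0, 2, n).astype(float)
    acc = np.zeros_like(W)
    for _ in range(sweeps):
        for i in range(n):
            net = W[i] @ a - W[i, i] * a[i]     # exclude self-input
            a[i] = float(rng.random() < logistic(net / T))
        acc += np.outer(a, a)
    return acc / sweeps

def boltzmann_update(W, plus, minus, eps=0.05):
    """Delta w_ij = eps * (<a_i+ a_j+> - <a_i- a_j->)."""
    return W + eps * (plus - minus)
```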
The contrastive Hebbian learning rule minimizes the sum, over different input patterns I, of the contrastive divergence (or Information Gain) between the probability distributions over states s of the output units in the desired (plus) and obtained (minus) phases, conditional on the input I.
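In standard notation this quantity can be written as follows (a reconstruction from the verbal statement above; the slides' own equation did not survive extraction):

```latex
G = \sum_{I} P(I) \sum_{s} P^{+}(s \mid I) \,
    \ln \frac{P^{+}(s \mid I)}{P^{-}(s \mid I)}
```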
man:woman hungry:thin city:ostrich
Hippocampus: the network settles over time until it reaches an equilibrium distribution.
Neo-Cortex: Kwok & McClelland Model