
Connectionism


Presentation Transcript


  1. Connectionism

  2. What is connectionism?
  • Also known as:
    • Parallel distributed processing (PDP)
    • Artificial neural networks (ANN), or just “neural networks”
  • An alternative to symbolic representation
    • No language of thought
  • An attempt to model neural processes in the brain:
    • Parallel processing of information
    • Network of connections between neurons/units

  3. Mental processes explained as computations carried out by interconnected networks of simple units
  • A network is an input-output system
  • Each unit is an input-output unit

  4. Connectionist models consist of four parts:
  • Units
  • Activations
  • Connections
  • Connection weights
  Units are connected to each other in a network. In response to input, a unit is activated, sending signals to other units that it is connected with. The strength of those signals is determined by the connection weights between the connected units. Signals sent from one unit to another can be either excitatory or inhibitory.
  [Diagram: two units joined by a connection labelled with a connection weight of .5.]
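To make these four parts concrete, here is a minimal Python sketch, not taken from the presentation: the Unit and Connection classes are purely illustrative, and the .5 value simply echoes the slide's diagram.

```python
# Illustrative sketch only: the four parts of a connectionist model
# (units, activations, connections, connection weights) as simple classes.

class Unit:
    """A simple unit that holds an activation level."""
    def __init__(self, name):
        self.name = name
        self.activation = 0.0


class Connection:
    """A weighted, one-way link between two units.

    A positive weight is excitatory, a negative weight is inhibitory.
    """
    def __init__(self, sender, receiver, weight):
        self.sender = sender
        self.receiver = receiver
        self.weight = weight

    def signal(self):
        # The strength of the signal is the sender's activation
        # scaled by the connection weight.
        return self.sender.activation * self.weight


# The .5 connection from the slide's diagram, reproduced as code.
a, b = Unit("a"), Unit("b")
link = Connection(a, b, weight=0.5)
a.activation = 1.0              # unit a is activated by some input
b.activation += link.signal()   # unit b receives a signal of strength 0.5
```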

  5. Units and connections are arranged in input-output layers, usually with one or more hidden layers.
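As a rough illustration of this layered arrangement (the layer sizes and weight values here are invented, not taken from the slides), a forward pass from an input layer through one hidden layer to an output layer might look like this:

```python
# Illustrative only: activations flow from an input layer, through a hidden
# layer, to an output layer. weight_rows[i][j] is the weight on the
# connection from sending unit j to receiving unit i.

def layer_output(activations, weight_rows):
    return [sum(w * a for w, a in zip(row, activations)) for row in weight_rows]

input_acts = [1.0, 0.0, 1.0]            # 3 input units
input_to_hidden = [[0.5, 0.2, 0.3],     # weights into hidden unit 0
                   [0.1, 0.4, 0.6]]     # weights into hidden unit 1
hidden_to_output = [[0.7, 0.3]]         # weights into the single output unit

hidden_acts = layer_output(input_acts, input_to_hidden)
output_acts = layer_output(hidden_acts, hidden_to_output)
```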

  6. Close-up of one unit: a1, a2, and a3 represent the connection weights of the input it receives from other units; the three ‘aj’s represent the connection weights of its output to other units.

  7. A unit is activated when the combined weights of its input exceed a set level. E.g. cat recognition:
  [Diagram: input units for Meows, Fur, Pointed ears and Whiskers feed a hidden unit with input threshold .8 and output strength .5, which feeds an output unit with input threshold .9 whose output is “it’s a cat”; the connection weights shown include .2, .3, .4 and .5.]
  Note: Every connection has a weight, but only a few of the weights are shown for simplicity.
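A hedged reconstruction of the cat example in code: the input thresholds (.8 and .9) and the output strength (.5) come from the slide, but the number of hidden units, the wiring, and the individual feature weights are guesses made only so the sketch runs.

```python
# Threshold units: a unit fires only when its combined weighted input
# exceeds a set level. Thresholds .8/.9 and output strength .5 are from the
# slide; the weights and the two hidden units are illustrative assumptions.

def fires(weighted_inputs, threshold):
    return sum(weighted_inputs) > threshold

# Input units: which features are detected (1.0 = present, 0.0 = absent).
features = {"meows": 1.0, "fur": 1.0, "pointed ears": 1.0, "whiskers": 1.0}

# Two hidden units, each with illustrative weights from the feature units.
hidden_weights = [
    {"meows": 0.5, "fur": 0.2, "pointed ears": 0.3, "whiskers": 0.3},
    {"meows": 0.4, "fur": 0.3, "pointed ears": 0.4, "whiskers": 0.5},
]

# Each hidden unit has input threshold .8 and sends a signal of strength .5.
hidden_out = [
    0.5 if fires([features[f] * w[f] for f in w], threshold=0.8) else 0.0
    for w in hidden_weights
]

# Output unit: input threshold .9; both hidden signals arrive with weight 1.0.
if fires(hidden_out, threshold=0.9):
    print("it's a cat")
```

With all four features present, both hidden units clear their .8 threshold and together push the output unit past .9; with only one or two features present, the network stays silent.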

  8. A tribute to interactive activation at: http://srsc.ulb.ac.be/pdp/iac/IAC.html. Built in 1981, it is a demonstration of an artificial neural network that exhibits many properties of human memory.

  9. This animated network represents information about two gangs: the Jets and the Sharks. The central pool of units represents members of the gangs (e.g. Sam, Art, etc.). The surrounding pools represent characteristics of these members, e.g. the names (“Sam”, “Art”, etc.), age, occupation, marital status, gang affiliation and educational level. Within most pools, units are connected with inhibitory weights, showing that they are mutually exclusive: if x is married, x is not single; if x is named “Art”, x is not named “Steve”, etc.
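The within-pool inhibition can be sketched as follows; this is a deliberately simplified update rule with an invented -0.2 weight, not the actual interactive-activation equations used by the linked demo.

```python
# Toy sketch of mutual exclusion inside the "name" pool: each unit receives
# inhibition from its rivals, so the best-supported name wins out.

def settle_step(pool, inhibitory_weight=-0.2):
    new = {}
    for unit, act in pool.items():
        rivals = sum(a for other, a in pool.items() if other != unit)
        new[unit] = max(0.0, act + inhibitory_weight * rivals)
    return new

# Evidence strongly supports "Sam", weakly supports the others.
name_pool = {"Sam": 0.8, "Art": 0.3, "Steve": 0.3}
for _ in range(5):
    name_pool = settle_step(name_pool)
# After a few steps "Sam" remains active while "Art" and "Steve" fall to zero.
```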

  10. Things to try with the Sharks and Jets network:
  • Find the characteristics of one particular member
  • Identify a member by certain characteristics (e.g. who is a Shark in his 20s?)
  • Identify general characteristics of members of a gang, or of members who share a certain characteristic (e.g. what characteristics are common to the bookies in this group?)
  How does this compare to how memory works?

  11. Network training
  Connection weights determine a network’s functioning. Connection weights are either “hand-coded” or built up during training.
  1) Hand-coded: connection weights are set manually by the network builder, e.g. the Sharks and Jets network is hand-coded.

  12. 2) Connection weights built up through training: networks “learn”.
  • Connection weights are often set at random before training.
  • Networks are trained via backpropagation.
  • The network’s responses are judged right or wrong (the network is “rewarded” or “punished”).
  • When the output is judged correct, excitatory connections are strengthened and inhibitory connections are weakened; when the output is judged incorrect, excitatory connections are weakened and inhibitory connections are strengthened.
  • Training is slow and needs a lot of feedback.
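The reward/punish description above can be illustrated with a simple error-driven training loop. This single-unit delta-rule sketch is a stand-in chosen for brevity: real backpropagation also passes error gradients back through the hidden layers, and all names and values here are invented for the example.

```python
import random

# Simplified error-driven training (not full backpropagation): weights start
# at random, the network's response is judged against the correct answer,
# and the connections responsible are strengthened or weakened.

def train(examples, n_inputs, rate=0.1, epochs=100):
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    for _ in range(epochs):                      # training is slow:
        for inputs, target in examples:          # many passes over the data
            total = sum(w * x for w, x in zip(weights, inputs))
            output = 1.0 if total > 0.5 else 0.0
            error = target - output              # 0 if right, +/-1 if wrong
            for i, x in enumerate(inputs):
                weights[i] += rate * error * x   # reward / punish
    return weights

# Toy data: [meows, fur, pointed ears, whiskers] -> is it a cat?
examples = [([1, 1, 1, 1], 1.0), ([0, 1, 0, 0], 0.0),
            ([1, 1, 0, 1], 1.0), ([0, 0, 1, 0], 0.0)]
trained_weights = train(examples, n_inputs=4)
```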

  13. What connectionist networks can learn to do
  • Mine/rock discrimination
  • NETtalk: http://dli.iiit.ac.in/ijcai/IJCAI-87-VOL1/PDF/066.pdf
  • Forming past tenses of English verbs
  • Face recognition (men vs. women, wearing sunglasses)
