
2806 Neural Computation Introduction Lecture 1



  1. 2806 Neural Computation Introduction Lecture 1 2005 Ari Visa

  2. Agenda • Some historical notes • Biological background • What are neural networks? • Properties of neural networks • Composition of neural networks • Relation to artificial intelligence

  3. Overview The human brain computes in an entirely different way from the conventional digital computer. The brain routinely accomplishes perceptual recognition in approximately 100-200 ms. How does a human brain do it?

  4. Some Expected Benefits • Nonlinearity • Input-Output Mapping • Adaptivity • Evidential Response • Contextual Information • Fault Tolerance • VLSI Implementability • Uniformity of Analysis and Design • Neurobiological Analogy

  5. Definition • A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:

  6. Definition • 1) Knowledge is acquired by the network from its environment through a learning process. • 2) Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

  7. Some historical notes • There was a great deal of activity concerning automata, communication, computation, and the understanding of the nervous system during the 1930s and 1940s • McCulloch and Pitts, 1943 • von Neumann: EDVAC (Electronic Discrete Variable Automatic Computer) • Hebb: The Organization of Behavior, 1949

  8. Some historical notes

  9. Some historical notes • Minsky: Theory of Neural-Analog Reinforcement Systems and Its Application to the Brain-Model Problem, 1954 • Gabor: Nonlinear adaptive filter, 1954 • Uttley: leaky integrate-and-fire neuron, 1956 • Rosenblatt: the perceptron, 1958

  10. Biological Background • The human nervous system may be viewed as a three-stage system (Arbib 1987): the brain continually receives information, perceives it, and makes appropriate decisions.

  11. Biological Background • Axons = the transmission lines • Dendrites = the receptive zones • Action potentials (spikes) originate at the cell body of neurons and then propagate across the individual neurons at constant velocity and amplitude.

  12. Biological Background • Synapses are elementary structural and functional units that mediate the interactions between neurons. • Excitation or inhibition

  13. Biological Background • Note that the structural levels of organization are a unique characteristic of the brain.

  14. Biological Background

  15. Properties of Neural Network • A model of a neuron: • synapses (=connecting links) • adder (=a linear combiner) • an activation function
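These three elements can be sketched in a few lines of Python (the weights, bias, and tanh activation below are illustrative choices, not values from the lecture):

```python
import math

def neuron(inputs, weights, bias, activation=math.tanh):
    # Adder: linear combination of the inputs through the synaptic
    # weights, plus the bias, gives the induced local field v.
    v = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The activation function squashes v to the neuron's output.
    return activation(v)

# Two synapses with hand-picked (illustrative) weights
y = neuron([1.0, -0.5], weights=[0.8, 0.4], bias=0.1)
```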

  16. Properties of Neural Network • Another formulation of a neuron model

  17. Properties of Neural Network • Types of Activation Function: • Threshold Function • Piecewise-Linear Function • Sigmoid Function (signum function or hyperbolic tangent function)
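As a sketch, the three activation types can be written out in Python (the logistic function stands in here as the sigmoid example; the slope parameter a is an assumption):

```python
import math

def threshold(v):
    # Heaviside / McCulloch-Pitts threshold function
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    # Linear in the interval (-1/2, +1/2), saturating at 0 and 1 outside
    return max(0.0, min(1.0, v + 0.5))

def logistic(v, a=1.0):
    # Sigmoid-shaped, with slope parameter a
    return 1.0 / (1.0 + math.exp(-a * v))
```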

  18. Properties of Neural Network • Stochastic Model of a Neuron • The activation function of the McCulloch-Pitts model is given a probabilistic interpretation: a neuron is permitted to reside in only one of two states, +1 or –1, and the decision for a neuron to fire is probabilistic. A standard choice for the probability of firing is the sigmoid-shaped function P(v) = 1/(1 + exp(-v/T)), where v is the induced local field and T is a pseudotemperature.
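A minimal sketch of the stochastic unit (the random-number source and default T below are assumptions for illustration):

```python
import math
import random

def stochastic_neuron(v, T=1.0, rng=random):
    # Fire (+1) with probability P(v) = 1/(1 + exp(-v/T)), else stay at -1.
    # As the pseudotemperature T -> 0, this approaches the deterministic
    # McCulloch-Pitts threshold unit.
    p = 1.0 / (1.0 + math.exp(-v / T))
    return 1 if rng.random() < p else -1

state = stochastic_neuron(0.5, T=1.0, rng=random.Random(0))
```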

  19. Properties of Neural Network • The model of an artificial neuron may also be represented as a signal-flow graph. • A signal-flow graph is a network of directed links that are interconnected at certain points called nodes. A typical node j has an associated node signal xj. A typical directed link originates at node j and terminates on node k. It has an associated transfer function (transmittance) that specifies the manner in which the signal yk at node k depends on the signal xj at node j.

  20. Properties of Neural Network • Rule 1: A signal flows along a link in the direction defined by the arrow • Synaptic links (a linear input-output relation, 1.19a) • Activation links (a nonlinear input-output relation, 1.19b)

  21. Properties of Neural Network • Rule 2: A node signal equals the algebraic sum of all signals entering the pertinent node via the incoming links (1.19c)

  22. Properties of Neural Network • Rule 3: The signal at a node is transmitted to each outgoing link originating from that node, with the transmission being entirely independent of the transfer functions of the outgoing links (synaptic divergence, or fan-out) (1.19d)

  23. Properties of Neural Network • A neural network is a directed graph consisting of nodes with interconnecting synaptic and activation links, and is characterized by four properties: • 1. Each neuron is represented by a set of linear synaptic links, an externally applied bias, and a possibly nonlinear activation link. The bias is represented by a synaptic link connected to an input fixed at +1.

  24. Properties of Neural Network • 2. The synaptic links of a neuron weight their respective input signals. • 3. The weighted sum of the input signals defines the induced local field of a neuron in question. • 4. The activation link squashes the induced local field of the neuron to produce an output.

  25. Properties of Neural Network • Complete graph • Partially complete graph = architectural graph

  26. Properties of Neural Network • Feedback is said to exist in a dynamic system whenever the output of an element in the system influences in part the input applied to the particular element, thereby giving rise to one or more closed paths for the transmission of signals around the system (1.12)

  27. Properties of Neural Network • yk(n) = A[x’j(n)] • x’j(n) = xj(n) + B[yk(n)] • yk(n) = A/(1-AB) [xj(n)] • A/(1-AB) is the closed-loop operator, AB the open-loop operator • In general AB ≠ BA

  28. Properties of Neural Network • A/(1-AB) = w/(1 - wz^-1) • yk(n) is convergent (=stable), if |w| < 1 (1.14a) • yk(n) is divergent (=unstable), if |w| ≥ 1

  29. Properties of Neural Network • A/(1-AB) = w/(1 - wz^-1) • yk(n) is convergent (=stable), if |w| < 1 (1.14a) • yk(n) is divergent (=unstable), if |w| ≥ 1: if |w| = 1 the divergence is linear (1.14b), if |w| > 1 the divergence is exponential (1.14c)
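The convergence behaviour can be checked numerically. The sketch below assumes A is a fixed weight w and B a unit delay, so yk(n) = w(xj(n) + yk(n-1)); the function name and step counts are illustrative:

```python
def feedback_response(w, steps, x=1.0):
    # Single-loop feedback system: forward operator A = w (fixed weight),
    # feedback operator B = unit delay z^-1, driven by a constant input x.
    y, history = 0.0, []
    for _ in range(steps):
        y = w * (x + y)       # y(n) = w * (x(n) + y(n-1))
        history.append(y)
    return history

stable = feedback_response(0.5, 50)    # |w| < 1: converges to w*x/(1-w) = 1.0
unstable = feedback_response(1.5, 10)  # |w| > 1: grows exponentially
```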

  30. Compositions of Neural Network • The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network. • Single-Layer Feedforward Networks

  31. Compositions of Neural Network • Multilayer Feedforward Networks (1.16) • Hidden layers, hidden neurons or hidden units -> enabled to extract higher-order statistics
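A forward pass through such a network can be sketched as follows (layer sizes, weight values, and the tanh activation are illustrative assumptions):

```python
import math

def layer(x, weights, biases):
    # Fully connected layer: each neuron forms a weighted sum of all
    # inputs plus its bias and squashes it with tanh.
    return [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    # Signals flow strictly from the input layer through each hidden
    # layer to the output layer -- no feedback loops.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# 2 inputs -> 3 hidden neurons -> 1 output neuron
net = [([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
       ([[0.6, -0.4, 0.2]], [0.05])]
y = forward([1.0, 0.5], net)
```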

  32. Compositions of Neural Network • Recurrent Neural Network (1.17) • It has at least one feedback loop.

  33. Knowledge Representation • Knowledge refers to stored information or models used by a person or machine to interpret, predict, and appropriately respond to the outside world (Fischler and Firschein, 1987) • A major task for a neural network is to learn a model of the world

  34. Knowledge Representation • Knowledge of the world consists of two kinds of information: • 1) The known world state, i.e. prior information • 2) Observations of the world, obtained by means of sensors • The observations provide a pool of information from which the examples used to train the neural network are drawn.

  35. Knowledge Representation • The examples can be labelled or unlabelled. • In labelled examples, each example representing an input signal is paired with a corresponding desired response. Note, both positive and negative examples are possible. • A set of input-output pairs, with each pair consisting of an input signal and the corresponding desired response, is referred to as a set of training data or training sample.

  36. Knowledge Representation • Selection of an appropriate architecture • A subset of examples is used to train the network by means of a suitable algorithm (=learning). • The performance of the trained network is tested with data not seen before (=testing). • Generalization
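The train/test protocol above can be sketched in Python (the 80/20 split, seed, and toy examples are assumptions for illustration):

```python
import random

def split(examples, train_fraction=0.8, seed=0):
    # Shuffle the labelled examples, then hold out a test set that the
    # network never sees during training, to measure generalization.
    data = list(examples)
    random.Random(seed).shuffle(data)
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

# Toy labelled examples: (input signal, desired response)
pairs = [(x, x > 5) for x in range(10)]
train, test = split(pairs)
```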

  37. Knowledge Representation • Rule 1: Similar inputs from similar classes should usually produce similar representations inside the network, and should therefore be classified as belonging to the same category. • Rule 2: Items to be categorized as separate classes should be given widely different representations in the network.

  38. Knowledge Representation • Rule 3: If a particular feature is important, then there should be a large number of neurons involved in the representation of that item in the network • Rule 4: Prior information and invariances should be built into the design of a neural network, thereby simplifying the network design by not having to learn them.

  39. Knowledge Representation • How to Build Prior Information Into Neural Network Design? • 1) Restricting the network architecture through the use of local connections known as receptive fields. • 2) Constraining the choice of synaptic weights through the use of weight-sharing.
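Weight-sharing over local receptive fields amounts to a one-dimensional convolution; a minimal sketch (the kernel values and signal are illustrative):

```python
def shared_weight_layer(signal, kernel):
    # Every output neuron applies the SAME weight vector (the shared
    # kernel) to its own local window (receptive field) of the input.
    k = len(kernel)
    return [sum(w * s for w, s in zip(kernel, signal[i:i + k]))
            for i in range(len(signal) - k + 1)]

out = shared_weight_layer([1, 2, 3, 4, 5], kernel=[1, 0, -1])
# Each of the 3 output neurons sees a 3-sample receptive field
```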

  40. Knowledge Representation • How to Build Invariances into Neural Network Design? • 1) Invariance by Structure • 2) Invariance by Training • 3) Invariant Feature Space

  41. Relation to Artificial Intelligence • The goal of artificial intelligence (AI) is the development of paradigms or algorithms that require machines to perform cognitive tasks (Sage 1990). • An AI system must be capable of doing three things: • 1) Store knowledge • 2) Apply the knowledge stored to solve problems. • 3) Acquire new knowledge through experience.

  42. Relation to Artificial Intelligence • Representation: The use of a language of symbol structures to represent both general knowledge about a problem domain of interest and specific knowledge about the solution to the problem • Declarative knowledge • Procedural knowledge

  43. Relation to Artificial Intelligence • Reasoning: The ability to solve problems • The system must be able to express and solve a broad range of problems and problem types. • The system must be able to make explicit and implicit information known to it. • The system must have a control mechanism that determines which operations to apply to a particular problem.

  44. Relation to Artificial Intelligence • Learning: The environment supplies some information to a learning element. The learning element then uses this information to make improvements in a knowledge base, and finally the performance element uses the knowledge base to perform its task.

  45. Summary • A major task for a neural network is to learn a model of the world • It is not a totally new approach, but it differs from AI, mathematical modeling, pattern recognition, and so on.
