
Neural Networks




William Lai

Chris Rowlett

- A style of computation fundamentally different from conventional step-by-step programming
- Consists of simple computational units, linked together so that the whole performs a useful function
- Modeled after the decision-making process of the biological network of neurons in the brain

- Neural Networks are models of neuron clusters in the brain
- Each neuron has:
- Dendrites
- An axon
- Terminal buds
- Synapses

- An action potential is passed down the axon, causing the release of neurotransmitters


- Supervised
- During training, the error is computed by subtracting the network’s output from the target value

- Unsupervised
- The correct outputs are not known in advance
- Used to classify complicated data

- Nonlearning
- Optimization

- Perceptrons
- A subset of feed-forward networks containing only one input layer and one output layer, where each input unit links directly to the output units

- Feed-forward networks
- Structured as directed acyclic graphs (DAGs)
- Each unit only links to units in subsequent layers
- Allows for hidden layers

- Recurrent networks
- Not very well understood
- Units can link to units in the same layer or even previous layers
- Example: The Brain

- Neural Nets can do anything a normal digital computer can do (such as perform basic or complex computations)
- Functional Approximations/Mapping
- Classification
- Good at ignoring ‘noise’

- Approximating functions like y = 1/x on the open interval (0, 1)
- (Pseudo)-random number predictors
- Factoring integers or determining prime numbers
- Decryption

- McCulloch and Pitts (1943)
- Co-wrote the first paper proposing a mathematical model for a neuron

- Widrow and Hoff (1959)
- Developed MADALINE and ADALINE
- MADALINE was the first neural network applied to a real-world problem
- It eliminated echo on phone lines

- The von Neumann architecture dominated for about 20 years (1960s to 1980s)

- Checkers (Samuel, 1952)
- At first, it played very poorly, like a novice
- With practice games, it eventually beat its author

- ADALINE (Widrow and Hoff, 1959)
- Recognizes binary patterns in streaming data

- MADALINE (Widrow and Hoff, 1959)
- Multiple ADAptive LINear Elements
- Uses an adaptive filter that eliminates echoes on phone lines

- Pattern recognition, including
- Handwriting Deciphering
- Voice Understanding
- “Predictability of High-Dissipation Auroral Activity”

- Image analysis
- Finding tanks hiding in trees (cheating)
- Material Classification

- "A real-time system for the characterization of sheep feeding phases from acoustic signals of jaw sounds"

- Neural networks are usually geared towards some application, so they represent the practical action aspect of AI
- Since neural networks are modeled after human brains, they are an imitation of human action. However, they can be taught to act rationally instead.
- Neural networks can modify their own weights and learn.

- Pulsed neural networks
- The AI behind a strong Go-playing agent
- Increased speed through dedicated neural-network chips
- Robots that can see, feel, and predict the world around them
- Improved stock prediction
- Common usage of self-driving cars
- Applications involving the Human Genome Project
- Self-diagnosis of medical problems using neural networks

- The single-layer approach limited applications
- Difficulty converting the Widrow-Hoff technique for use with multiple layers
- Use of poorly chosen and poorly derived learning functions
- High expectations followed by early failures led to loss of funding

- Cheating
- Exactly how a neural net arrives at its solutions is unknown; as a result, it can ‘cheat’ its way to a solution instead of finding a reliable algorithm

- Memorization
- Overfitting without generalization

- Each unit receives input values aj over its incoming links
- Each input value is weighted: aj is multiplied by the link’s weight Wj,i
- The weighted inputs are summed, producing ini
- The unit’s activation function is applied to ini, producing the activation value ai
- The activation value is output along every outgoing link of the unit
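The steps above can be sketched in Python. The sigmoid activation and the specific input/weight values are illustrative assumptions; the slides do not fix a particular activation function:

```python
import math

def unit_output(inputs, weights):
    """Compute one unit's activation from its weighted inputs."""
    # Sum the weighted inputs: in_i = sum over j of W_j,i * a_j
    in_i = sum(w * a for w, a in zip(weights, inputs))
    # Apply the unit's activation function (a sigmoid here) to get a_i
    return 1.0 / (1.0 + math.exp(-in_i))

# Example: a unit with two inputs and illustrative weights
a_i = unit_output([1.0, 0.5], [0.4, -0.2])   # in_i = 0.4*1.0 + (-0.2)*0.5 = 0.3
```

The resulting value a_i would then be passed as an input to every unit this one links to.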

OR (linearly separable)

XOR (not linearly separable)

- Single-layer neural networks
- Can only represent linearly separable functions
- Training is guaranteed to converge on a solution when one exists
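A minimal sketch of the perceptron learning rule illustrates the separability point: trained on OR (linearly separable), it classifies every input correctly; trained on XOR, it cannot, because no single-layer weight setting exists. The threshold convention, learning rate, and epoch count are assumptions for illustration:

```python
def step(x):
    # Threshold activation: fire (1) when the weighted sum is non-negative
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1):
    """Perceptron learning rule on 2-input samples: ((x1, x2), target)."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for x, t in samples:
            y = step(w[0] * x[0] + w[1] * x[1] + b)
            err = t - y                      # error = target - output
            w = [w[i] + lr * err * x[i] for i in range(2)]
            b += lr * err
    return w, b

OR_DATA  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(OR_DATA)
or_correct = all(step(w[0]*x[0] + w[1]*x[1] + b) == t for x, t in OR_DATA)

w, b = train_perceptron(XOR_DATA)
xor_correct = all(step(w[0]*x[0] + w[1]*x[1] + b) == t for x, t in XOR_DATA)
```

Here or_correct ends up True, while xor_correct stays False no matter how many epochs are run.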

- Back-propagation uses the chain rule to divide the error at the outputs among all the weights of the network
- The result is a slow but general learning method for solving many real-world problems
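As a sketch of the idea, the snippet below performs one back-propagation step on a tiny 2-2-1 sigmoid network. The starting weights, learning rate, and squared-error loss are illustrative assumptions, not taken from the slides:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, b1, W2, b2):
    # Hidden-layer activations, then the single output activation
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def backprop_step(x, t, W1, b1, W2, b2, lr=0.5):
    """One gradient step on squared error E = (t - y)^2 / 2."""
    h, y = forward(x, W1, b1, W2, b2)
    # Error term at the output unit (chain rule through the sigmoid)
    d_out = (y - t) * y * (1 - y)
    # Error divided back to the hidden units through the output weights
    d_h = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    W2 = [W2[j] - lr * d_out * h[j] for j in range(len(h))]
    b2 -= lr * d_out
    W1 = [[W1[j][i] - lr * d_h[j] * x[i] for i in range(len(x))]
          for j in range(len(W1))]
    b1 = [b1[j] - lr * d_h[j] for j in range(len(b1))]
    return W1, b1, W2, b2

# Illustrative weights; one step should reduce the error on this sample
x, t = [1.0, 0.0], 1.0
W1, b1, W2, b2 = [[0.1, 0.2], [0.3, -0.1]], [0.0, 0.0], [0.2, -0.3], 0.1
loss_before = 0.5 * (t - forward(x, W1, b1, W2, b2)[1]) ** 2
W1, b1, W2, b2 = backprop_step(x, t, W1, b1, W2, b2)
loss_after = 0.5 * (t - forward(x, W1, b1, W2, b2)[1]) ** 2
```

Repeating such steps over many training examples is what makes the method slow but broadly applicable.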

- Computer cycle times are on the order of nanoseconds, while neurons take milliseconds
- Computers compute the results of each neuron sequentially, while all neurons in the brain fire simultaneously every cycle
- Result: massive parallelism makes brains a billion times faster than computers, even though computer bits can cycle a million times faster than neurons