
Neural Networks

William Lai

Chris Rowlett

What are Neural Networks?

  • A type of program that works completely differently from conventional, explicitly coded programs.

  • Consists of units that each carry out a simple computation, linked together to perform a larger function

  • Modeled after the decision making process of the biological network of neurons in the brain

The Biology of Neural Networks

  • Neural Networks are models of neuron clusters in the brain

    • Each neuron has:

      • Dendrites

      • Axon

      • Terminal buds

      • Synapse

    • Action potential is passed down the axon, which causes the release of neurotransmitters

Types of Neural Networks: General

  • Supervised

    • During training, the error is computed by subtracting the network's output from the actual (target) value

  • Unsupervised

    • The correct results are not known in advance

    • Used to classify complicated data

  • Nonlearning

    • Optimization
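
The supervised case above can be sketched in a few lines; the target and output values here are hypothetical examples chosen for illustration:

```python
# Per-example training error for a supervised network:
# error = actual (target) value - network output.
def supervised_errors(targets, outputs):
    return [t - o for t, o in zip(targets, outputs)]

targets = [1.0, 0.0, 1.0]   # known correct answers
outputs = [0.8, 0.3, 0.9]   # hypothetical network outputs
errors = supervised_errors(targets, outputs)
```

During training these errors drive the weight updates; in the unsupervised case no such target values exist.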

Types of Neural Networks: Specific

  • Perceptrons

    • A subset of feed-forward networks with only one input layer and one output layer; each input unit links only to output units

  • Feed-forward networks

    • a.k.a. Directed Acyclic Graphs

    • Each unit only links to units in subsequent layers

    • Allows for hidden layers

  • Recurrent networks

    • Not very well understood

    • Units can link to units in the same layer or even previous layers

    • Example: The Brain
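
The feed-forward structure described above can be sketched as a forward pass; the weights and the choice of a sigmoid activation are illustrative assumptions, not from the slides:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each unit computes a weighted sum of the previous layer's
    # activations and applies the activation function; links only
    # point to subsequent layers, so the graph stays acyclic.
    return [sigmoid(sum(w * a for w, a in zip(unit_w, inputs)))
            for unit_w in weights]

# Hypothetical weights: 2 inputs -> 2 hidden units -> 1 output unit
hidden_weights = [[0.5, -0.4], [0.3, 0.8]]
output_weights = [[1.2, -0.7]]

inputs = [1.0, 0.0]
hidden = layer(inputs, hidden_weights)   # hidden layer activations
output = layer(hidden, output_weights)   # network output
```

A recurrent network would instead allow `layer` outputs to feed back into the same or earlier layers, which is why its behavior is harder to analyze.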

Neural Net Capabilities

  • Neural Nets can do anything a normal digital computer can do (such as perform basic or complex computations)

  • Function approximation/mapping

  • Classification

  • Good at ignoring ‘noise’

Neural Net Limitations

  • Functions like y = 1/x on the open interval (0, 1)

  • (Pseudo)-random number predictors

  • Factoring integers or determining prime numbers

  • Decryption

History of Neural Networks

  • McCulloch and Pitts (1943)

    • Co-wrote the first paper proposing a possible model for a neuron

  • Widrow and Hoff (1959)

    • Developed MADALINE and ADALINE

    • MADALINE was the first neural network applied to a real-world problem

      • Eliminating echoes in phone lines

  • The von Neumann architecture dominated for about 20 years (1960s to 1980s)

Early Applications

  • Checkers (Samuel, 1952)

    • At first, played very poorly as a novice

    • With practice games, eventually beat its author

  • ADALINE (Widrow and Hoff, 1959)

    • Recognizes binary patterns in streaming data

  • MADALINE (Widrow and Hoff, 1959)

    • Multiple ADAptive LINear Elements

    • Uses an adaptive filter that eliminates echoes on phone lines

Modern Practical Applications

  • Pattern recognition, including

    • Handwriting Deciphering

    • Voice Understanding

    • “Predictability of High-Dissipation Auroral Activity”

  • Image analysis

    • Finding tanks hiding in trees (cheating)

    • Material Classification

  • "A real-time system for the characterization of sheep feeding phases from acoustic signals of jaw sounds"

How Do Neural Networks Relate to Artificial Intelligence?

  • Neural networks are usually geared towards some application, so they represent the practical action aspect of AI

  • Since neural networks are modeled after human brains, they imitate human action. However, they can be taught to act rationally instead.

  • Neural networks can modify their own weights and learn.

The Future of Neural Networks

  • Pulsed neural networks

  • The AI behind a good Go playing agent

  • Increased speed through dedicated neural-network chips

  • Robots that can see, feel, and predict the world around them

  • Improved stock prediction

  • Common usage of self-driving cars

  • Applications involving the Human Genome Project

  • Self-diagnosis of medical problems using neural networks

Past Difficulties

  • Single-layer approach limited applications

  • Difficulty converting the Widrow-Hoff technique for use with multiple layers

  • Use of poorly chosen or poorly derived learning functions

  • High expectations and early failures led to loss of funding

Recurring Difficulties

  • Cheating

    • Exactly what a neural net does to reach its solutions is unknown; it can therefore "cheat", finding a shortcut to the answers rather than a reliable algorithm

  • Memorization

  • Overfitting without generalization

Describing Neural Net Units

  • All units have input values, a_j

  • All input values are weighted: each a_j is multiplied by its link's weight, W_j,i

  • All weighted inputs are summed, generating in_i

  • The unit's activation function is applied to in_i, generating the activation value a_i

  • The activation value is output to every destination of the current unit's links.
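
The steps above can be sketched as a single function; the input activations, weights, and the choice of a sigmoid activation function are illustrative assumptions:

```python
import math

def unit_activation(a, W):
    # in_i: weighted sum of the input activations a_j,
    # each multiplied by its link's weight W[j].
    in_i = sum(w_j * a_j for w_j, a_j in zip(W, a))
    # a_i: activation function (here a sigmoid) applied to the sum.
    return 1.0 / (1.0 + math.exp(-in_i))

a = [0.5, -1.0, 0.25]   # hypothetical input activations a_j
W = [0.4, 0.6, -0.2]    # hypothetical link weights W_j,i
a_i = unit_activation(a, W)   # value sent along every outgoing link
```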

  • Single-layer neural networks

  • Require linearly separable functions

  • Training is guaranteed to converge on a solution when one exists
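
A minimal sketch of single-layer training on a linearly separable function (AND); the step activation, learning rate, and epoch count are illustrative choices, not from the slides:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    # One output unit with a step activation: fires 1 when the
    # weighted sum exceeds 0. Weights move toward the target on
    # every misclassified example (the perceptron learning rule).
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so training settles on a solution;
# XOR is not, and the same loop would never settle.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_samples)
```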

Back-Propagation

  • Back-propagation uses a special function to distribute the error at the outputs back across all the weights of the network

  • The result is a slow-learning method for solving many real world problems
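
A minimal back-propagation sketch, assuming a tiny hypothetical network (2 inputs, 1 hidden unit, 1 output) with sigmoid activations and an illustrative learning rate; the output error is passed back so every weight receives its share:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights: 2 inputs -> 1 hidden unit -> 1 output unit.
w_h = [0.5, -0.3]   # input -> hidden weights
w_o = 0.8           # hidden -> output weight
lr = 0.5            # illustrative learning rate

x, target = [1.0, 1.0], 1.0

# Forward pass
h = sigmoid(w_h[0] * x[0] + w_h[1] * x[1])
y = sigmoid(w_o * h)

# Backward pass: divide the output error among the weights
# according to each weight's contribution (chain rule).
delta_o = (target - y) * y * (1 - y)      # error signal at the output
delta_h = delta_o * w_o * h * (1 - h)     # share passed back to the hidden unit

# Each weight moves in proportion to its share of the error.
w_o += lr * delta_o * h
w_h = [w + lr * delta_h * xi for w, xi in zip(w_h, x)]
```

Repeating this small step over many examples is what makes the method slow but broadly applicable.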

Organic vs. Artificial

  • Computer cycle times are on the order of nanoseconds, while neurons take milliseconds

  • Computers compute each neuron's result sequentially, while all neurons in the brain can fire simultaneously in each cycle

  • Result: massive parallelism makes brains a billion times faster than computers, even though computer bits can cycle a million times faster than neurons