Computers - Using your Brains


Presentation Transcript


  1. Computers - Using your Brains Jim Austin Professor of Neural Computation

  2. Pentium III

  3. So how complex is it? • 10^12 neurons … 1,000,000,000,000 • 1,000 connections per neuron • One brain can hold … 1,000,000,000,000,000 numbers (10^12 neurons × 1,000 connections = 10^15 weights)!

  4. What do 10^12 neurons look like? • About 160 times the population of the world (6,100,000,000) • 78,125 times the complexity of the Pentium III • Equal to the number of stars in our galaxy. [Image: a cube of sand, 4 metres on each side]

  5. The good and the bad. What are computers good at? • Adding up fast • Storing data - numbers and facts • Pushing data around. What are computers bad at? • Being reliable • Finding information - knowledge • Doing very complex things - recognizing images • Learning to do the job themselves!

  6. Why are computers so restricted? ACE (the Automatic Computing Engine)

  7. LEO - for stock control

  8. Colossus - for breaking codes

  9. Pegasus - for scientific work

  10. Neurons versus Gates. [Diagram: a NAND gate with Input 1 and Input 2 feeding an Output.] Boolean logic - both inputs OK, output not OK.

  11. Gates - NAND. ALL inputs must be OK for the output to be NOT OK. [Diagram: combinations of Input 1 and Input 2 and the resulting Output.]
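
A minimal sketch of the NAND rule described above, as illustrative Python rather than code from the talk (the function name `nand` is invented here): the output is "not OK" (0) only when every input is "OK" (1).

```python
def nand(*inputs: int) -> int:
    # Output is 0 only when ALL inputs are 1; otherwise 1.
    return 0 if all(inputs) else 1

# Two-input truth table, matching the slide's description.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```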

  12. Evolution? It should have picked a NAND gate for the brain...

  13. Neuron. Output = threshold(input A × weight A + input B × weight B). [Diagram: inputs A and B, each multiplied by a "weight", are summed and passed through a threshold to give the output.] Threshold logic - threshold 1 - one or more inputs OK → output OK.
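
The slide's formula can be written as a short Python sketch, assuming a hard (step) threshold; the names `neuron`, `inputs`, `weights` and `threshold` are chosen only for this illustration.

```python
def neuron(inputs, weights, threshold=1.0):
    # Weighted sum of the inputs, then a hard threshold:
    # output is 1 ("OK") when the sum reaches the threshold, else 0.
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Threshold 1 with unit weights: one or more inputs OK -> output OK.
print(neuron([0, 1], [1.0, 1.0]))  # 1
print(neuron([0, 0], [1.0, 1.0]))  # 0
```

With unequal weights, the same function lets some inputs count for more than others, which is the idea picked up again on slide 15.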

  14. Neuron. [Diagrams: with threshold 1, at least one input must be OK for the output to be OK; with threshold 3, at least three OKs are needed for the output to be OK.]

  15. Can also alter the connections/importance of inputs by using the weights on the inputs. [Diagram: a neuron whose inputs carry different weights, e.g. 0.5, 1, 1 and 3.5.]

  16. Why did this difference develop? • "The analysis of the operation of a machine using two indication elements and signals can conveniently be expressed in terms of a diagrammatic notation introduced, in this context, by von Neumann and extended by Turing. This was adopted from a notation used by Pitts and McCulloch as a possible way of analyzing the operation of the nervous system, …" Calculating Instruments & Machines, D. Hartree, 1950, Cambridge University Press. • Probably dropped due to the development of the silicon chip • simpler to build Boolean logic gates rather than neuron units.

  17. Functional elements. [Diagram: a threshold-n gate with k inputs and output z; with threshold 1 it gives excitation "OR", with threshold 2 it gives excitation "AND".]
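
A rough sketch of this functional element (illustrative only; `threshold_gate` and its arguments are names invented here): an n-of-k threshold gate fires when at least n of its k inputs are active, so threshold 1 behaves like OR and a threshold equal to the number of inputs behaves like AND.

```python
def threshold_gate(inputs, n):
    # Output z fires when at least n of the k inputs are active.
    return 1 if sum(inputs) >= n else 0

inputs = [1, 0, 1]
print(threshold_gate(inputs, 1))            # threshold 1: acts as OR  -> 1
print(threshold_gate(inputs, len(inputs)))  # threshold k: acts as AND -> 0
```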

  18. ICT Orion Computer • Used ‘Neuron’ logic - 1962

  19. Learning! Learning at the neuron level = adjusting which inputs are important (i.e. changing the weights). Conventional computers have no implicit learning ability.
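
A minimal sketch of "learning by adjusting the weights", using a simple perceptron-style update as an assumption for illustration; the talk does not say which learning method is used, and all names here are invented.

```python
def train(examples, n_inputs, rate=0.1, threshold=1.0, epochs=20):
    weights = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, target in examples:
            output = 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0
            error = target - output
            # Strengthen or weaken each active connection according to the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Learn to respond only when the first input is active.
examples = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]
print(train(examples, n_inputs=2))
```

Each mistake nudges the weights of the active inputs up or down, which is the "adjustment of which inputs are important" described above.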

  20. Spot the difference

  21. [Diagram: a neuron with excitatory inputs "+ Happy" and "+ Hungry", threshold = 2.]

  22. [Diagram: inputs "+ Happy" and "+ Hungry".]

  23. [Diagram: inputs "+ Happy" and "+ hungry".]

  24. Can we build useful systems with neurons? • Better tolerance to failure - parallelism / use of threshold logic / distributed memory • Faster operation - massive parallelism • Better access to uncertain information - threshold logic / neurons • Where the inputs are uncertain - threshold logic / neurons • Where we want low power - asynchronous systems • Adaptability - use of weights and learning methods.

  25. So what have we done with these? Cortex-1: 28 processor cards, each holding 128 hardware neurons, each with 1,000,000,000 weights. 16 MHz. PCI-based card.

  26. Complete machine: 400,000,000 neuron evaluations per second; 28,000 inputs; 30 bits set on input; 1,000,000 neurons.

  27. Cortex-1 node: 5,120,000,000 neuron weights, 640 neurons.

  28. Recognising Addresses for the Post Office

  29. Recognising trademarks

  30. Text search engines • Tolerant to spelling errors. • Finds similar words to those supplied, for example chair, seat, bench. • Learns these similarities automatically from text. • Uses neural engine for document storage. • Estimated 400,000,000 documents searched per second.

  31. Molecular Databases • One of the few systems that deal with the full 3D molecule.

  32. [Images: a query molecule, several good matches, and a bad match.]

  33. Thanks... Aaron Turner Mick Turner Vicky Hodge Julian Young Anthony Moulds Zyg Ulanowski Ken Lees Michael Weeks Sujeewa Alwis John Kennedy David Lomas and many others …. (It’s Brains from Thunderbirds !)
