
Presentation Transcript


  1. SOLAR AND ITS HARDWARE DEVELOPMENT Janusz Starzyk, Yongtao Guo and Zhineng Zhu Ohio University, Athens, OH 45701, U.S.A. 6th International Conference on Computational Intelligence and Neural Computing Cary, NC, September 30th, 2003

  2. OUTLINE • Neural Networks • Traditional Hardware Implementation • Principle of Self-Organizing Learning • Advantages & Simulation Algorithm • Hardware Architecture • Hardware/software Codesign • Routing and Interface • PCB SOLAR • Future Work • Conclusion

  3. Traditional ANN Hardware [Figure: layered ANN with input, hidden, and output layers and the direction of information flow] • Limited routing resources. • The quadratic relationship between routing and the number of neurons makes classical ANNs wire dominated: interconnect is 70% of chip area (see the sketch below).
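To make the wiring argument concrete, the following Matlab sketch (an illustration added here, not taken from the slides; the fan-in of 8 is an arbitrary assumed value) compares the interconnect count of full neuron-to-neuron connectivity, which grows quadratically, with a fixed fan-in scheme of the kind SOLAR uses, which grows linearly.

```matlab
% Illustration only: interconnect count for full connectivity vs. a fixed fan-in.
N     = [64 256 1024 4096];     % number of neurons
fanIn = 8;                      % inputs per neuron in a sparse scheme (assumed value)

fullyConnected = N .* N;        % every neuron wired to every neuron: O(N^2)
fixedFanIn     = N .* fanIn;    % every neuron picks a few channel inputs: O(N)

fprintf('%6s %14s %12s\n', 'N', 'full (N^2)', 'sparse (N*k)');
fprintf('%6d %14d %12d\n', [N; fullyConnected; fixedFanIn]);
```

Even at a few thousand neurons the quadratic count dwarfs the linear one, which is why wiring dominates classical ANN chips.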

  4. Biological Neural Networks [Figure: a biological neuron with the cell body labeled; from IFC's webpage; Dowling, 1998, p. 17]

  5. Self Organizing Learning Array (SOLAR) • What is SOLAR? A new biologically inspired learning network organization • Basic fabric: a fixed lattice of distributed, parallel processing units (neurons) • Self-organization: • Neurons choose inputs adaptively from the routing channels • Neurons adaptively reconfigure themselves • Neurons send output signals to the routing channels • The number of neurons results automatically from problem complexity

  6. Self Organizing Learning Array: SOLAR Organization • Neurons organized in a cell array • Sparse randomized connections • Local self-organization • Data driven • Entropy based learning • Regular structure • Suitable for large scale circuit implementation

  7. Neuron's Simulation Structure [Figure: this neuron connected via the system clock, TCI, and ID signals to its nearest neighbor neurons, other neurons, and remote neurons] • Neuron inputs: system clock, data input, control input (TCI), information deficiency (ID) • Neuron outputs: data output, control output, information deficiency (a data-structure sketch follows below)
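As a minimal way to picture that interface in the Matlab simulation, the sketch below groups the listed inputs and outputs into structures; the field names and placeholder values are my own shorthand, not identifiers from the SOLAR code.

```matlab
% Illustrative grouping of a simulated neuron's interface (names are assumptions).
neuronIn = struct( ...
    'clk',  0, ...      % system clock
    'data', [0 0], ...  % data inputs selected from the routing channels
    'tci',  0, ...      % control input (TCI)
    'id',   1.0);       % information deficiency received from feeding neurons

neuronOut = struct( ...
    'data', 0, ...      % data output returned to the routing channels
    'ctrl', 0, ...      % control output
    'id',   1.0);       % information deficiency passed on to subsequent neurons
```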

  8. Self-Organizing Process

  9. Self-organizing Principle: Information Index • A neuron self-organizes by maximizing the information index (see the sketch below)
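The formula for the information index was shown graphically on the slide and is not preserved in this transcript; the LaTeX sketch below gives one standard entropy-based formulation consistent with the surrounding text, and should be read as an assumption rather than the exact SOLAR definition.

```latex
% Sketch of an entropy-based information index (an assumption, not the slide's own formula).
% A neuron's transformation and threshold split its input space into subspaces s;
% the index measures the normalized reduction of class entropy produced by the split.
\[
  E_{\max} = -\sum_{c} P_c \log P_c,
  \qquad
  \Delta E = \sum_{s} P_s \Bigl( -\sum_{c} P_{c\mid s} \log P_{c\mid s} \Bigr),
  \qquad
  I = 1 - \frac{\Delta E}{E_{\max}},
\]
where \(P_c\) is the prior probability of class \(c\), \(P_s\) the probability of falling
into subspace \(s\), and \(P_{c\mid s}\) the class probability within that subspace.
\(I = 1\) means the split separates the classes completely; \(I = 0\) means it carries
no class information.
```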

  10. Self-organizing Principle: Information Deficiency • Information deficiency helps to organize SOLAR learning • Each neuron reports an output information deficiency (cf. the neuron outputs on slide 7) • The learning array grows by adding more neurons until the input information deficiency of a subsequent neuron falls below a threshold (see the sketch below)
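The deficiency formulas on this slide are likewise not preserved; as a hedged sketch consistent with the information index above, information deficiency can be written as the normalized entropy a neuron leaves unresolved, accumulated along the signal path, with a threshold stopping the array's growth.

```latex
% Sketch only; the exact SOLAR definitions may differ from this formulation.
\[
  \delta = \frac{\Delta E}{E_{\max}} = 1 - I
  \qquad \text{(deficiency left unresolved by one neuron)}
\]
\[
  \delta_{\mathrm{out}} = \delta_{\mathrm{in}} \cdot \delta
  \qquad \text{(output deficiency passed to the neurons it feeds)}
\]
New neurons are added while \(\delta_{\mathrm{in}} \ge \delta_{\min}\); once the input
deficiency of a would-be neuron drops below the threshold \(\delta_{\min}\), the array
stops growing.
```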

  11. Self-organizing Process: Matlab Simulation [Figures: initial interconnection; learning process]

  12. Software Simulation: SOLAR & Other Classifiers
  Training data: credit card approval data (ftp:cs.uci.edu)

  Method    Miss Detection Probability    Method     Miss Detection Probability
  CAL5      .131                          Naivebay   .151
  DIPOL92   .141                          CASTLE     .148
  Logdisc   .141                          ALLOC80    .201
  SMART     .158                          CART       .145
  C4.5      .155                          NewID      .181
  IndCART   .152                          CN2        .204
  Bprop     .154                          LVQ        .197
  RBF       .145                          Quadisc    .207
  Baytree   .171                          Default    .440
  ITrule    .137                          k-NN       .181
  AC2       .181                          SOLAR      .135

  13. Structure of a single neuron • RPU: reconfigurable processing unit • CU: control unit • DPE: dynamic probability estimator • EBE: entropy based evaluator • DSRU: dynamic self-reconfiguration memory. • NI/NO: Data input/output • CI/CO: Control input/output
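To connect these blocks to the learning rule, here is a short Matlab sketch of what the dynamic probability estimator and entropy-based evaluator compute together for one candidate input transformation: class probabilities on either side of a threshold and the resulting information index. It is an illustration written against the entropy sketch from slide 9, not the SOLAR RTL, and the function name and arguments are mine.

```matlab
function I = informationIndex(x, labels, threshold)
% Illustration of a DPE/EBE-style evaluation (not the SOLAR hardware code):
% the DPE role is the probability estimation, the EBE role is the entropy math.
classes = unique(labels);
Emax = 0;                                 % class entropy before the split
for c = classes(:)'
    Pc = mean(labels == c);
    if Pc > 0, Emax = Emax - Pc * log2(Pc); end
end
dE = 0;                                   % entropy remaining after the split
for side = [false true]                   % two subspaces: x <= threshold, x > threshold
    inS = (x > threshold) == side;
    Ps  = mean(inS);
    if Ps == 0, continue; end
    for c = classes(:)'
        Pcs = mean(labels(inS) == c);     % class probability within the subspace
        if Pcs > 0, dE = dE - Ps * Pcs * log2(Pcs); end
    end
end
I = 1 - dE / Emax;                        % information index, as sketched on slide 9
end
```

In the array, a neuron would score several candidate inputs and thresholds this way and keep the configuration with the largest I.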

  14. Routing Structure • CSU: configurable switching unit • BRU: bidirectional routing unit

  15. Configurable Switching Unit (CSU) • The CSU is used to realize flexible connections among neurons • Butterfly structure • The CSU can take any number of inputs [Figures: CSU with an even number of inputs; CSU with an odd number of inputs] (a wiring sketch follows below)
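The butterfly structure can be made concrete with the small Matlab sketch below, which lists the 2x2-switch pairings stage by stage for a generic butterfly interconnect. It is an illustration, not the CSU netlist; in particular, padding the input count up to a power of two is my own simplification of how "any number of inputs" might be handled.

```matlab
function pairs = butterflyPairs(nInputs)
% Stage-by-stage pairing of a generic butterfly switching network (illustration
% only).  pairs{st} lists, one row per 2x2 switch, the two 0-based lines that
% the switch connects in stage st.
n = 2^nextpow2(nInputs);            % pad up to a power of two (simplifying assumption)
stages = log2(n);
pairs = cell(stages, 1);
for st = 1:stages
    d = 2^(st - 1);                 % distance between paired lines in this stage
    p = zeros(n/2, 2);
    k = 0;
    for i = 0:n-1
        j = bitxor(i, d);           % partner line of input i
        if i < j
            k = k + 1;
            p(k, :) = [i j];        % record each switch once
        end
    end
    pairs{st} = p;
end
end
```

With log2(n) stages of n/2 switches, any input line can be steered to any output line, which is what gives the CSU its flexibility at a hardware cost far below a full crossbar.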

  16. Configurable Switching Unit (cont'd) • Random connections of neurons with a branching ratio of 50% for 3x6 and 6x15 neuron arrays [Figures: routing resources used 62.7% and 85.3%]

  17. Configurable Switching Unit (cont'd) • Random connections of a 4x7 neuron array with branching ratios of 10% and 90% [Figures: branching ratio of 10%; branching ratio of 90%]

  18. HW/SW Codesign: Partition of the System and Co-simulation [Figure: software running on a PC communicates over the PCI bus with a hardware board carrying a Virtex XCV800 FPGA; dynamic configuration is loaded through JTAG programming] • Neuron's architecture • System initialization, organization and management • Interface

  19. SW/HW Co-simulation • A software process: the software model, written in behavioral VHDL • A hardware process: the hardware model, written in structural, synthesizable RTL VHDL • HW/SW communication through an FSM and FIFOs

  20. Hardware Architecture

  21. Software Architecture [Diagram: system design functions call the Ctrl I/O and Data I/O APIs, which sit on hardware access functions above the PCI functions, kernel driver, and PCI bus] • Ctrl I/O API: matOpenDIMEBoard.dll, matConfigDIMEBoard.dll, matCloseDIMEBoard.dll, … • Data I/O API: matDIME_DMARead.dll, matDIME_DMAWrite.dll, matviDIME_ReadRegister.dll, matviDIME_WriteRegister.dll, … (a hypothetical usage sketch follows below)
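The DLL names suggest a Matlab-callable board API; the sketch below shows how a configure/write/read session might look. Only the function names come from the slide; every argument list, return value, register address, and the bitstream file name are assumptions made for illustration.

```matlab
% Hypothetical usage of the board-access API named on the slide.  All arguments,
% return values, and file names below are assumed, not documented behavior.
board = matOpenDIMEBoard();                     % open the PCI/DIME board
matConfigDIMEBoard(board, 'solar.bit');         % load an FPGA bitstream (name assumed)

trainData = rand(256, 16);                      % placeholder training vectors
matDIME_DMAWrite(board, trainData);             % stream data to the learning array
outputs = matDIME_DMARead(board, 256);          % read back neuron outputs

status = matviDIME_ReadRegister(board, 0);      % poll a status register (address assumed)
matCloseDIMEBoard(board);                       % release the board
```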

  22. PCB Design • A single SOLAR PCB contains 2x2 VIRTEX XCV1000 chips

  23. SOLAR PCB Design Boards [Photos: interface board; SOLAR board]

  24. Neuron Prototyping • Problem: neurons need to be carefully placed, otherwise some resources are lost. • Neuron memory needs to be optimized for best resource utilization.

  25. Future Work: System SOLAR

  26. SOLAR is different from traditional neural networks … • Expandable modular architecture • Dynamically reconfigurable hardware structure • The number of interconnections grows linearly with the number of neurons • Data-driven self-organizing learning hardware • Learning and organization are based on local information

  27. Why focus on networks of neurons? • Increases computational speed • Improves fault tolerance • Constrains us to use distributed solutions • The brain does it • www.ent.ohiou.edu/~starzyk

  28. Can we set milestones in developing intelligent networks of neurons? • How to represent distributed cognition? • How to model a machine's will to learn and act? • How to introduce associations between patterns? • How should a machine implement temporal learning? • How should a machine block repetitive information from being processed over and over again? • How should a machine evaluate its state with respect to set objectives and plan its actions? • How to implement elements of reinforcement learning in distributed networks?

  29. Questions
