
Genetic Specification of Recurrent Neural Networks: Initial Thoughts


Presentation Transcript


  1. Genetic Specification of Recurrent Neural Networks: Initial Thoughts
  World Congress on Computational Intelligence 2006, Vancouver
  Bill Howell, Natural Resources Canada, Ottawa

  2. I. Introduction – Genes and ANNs
  Historically:
  • Biological inspiration of artificial neural networks, right from the beginning
  • Ongoing mutual influence
  Outline:
  I. Introduction – Genes and ANNs
  II. Inspiration for DNA-ANNs
  III. What might we hope to achieve with DNA-ANNs?
  IV. Recommendations and Star-Gazing
  V. Conclusions
  WCCI06 Vancouver, 27Jul06, Bill Howell slide 2 of 21

  3. I. Introduction – Genes and ANNs: Computational Neuro-Genetic Modelling
  • N. Kasabov, L. Benuskova, S. Wysoski, IJCNN04 & 05: modelling gene networks for spiking neurons
  • R. Storjohann & G. Marcus, IJCNN05: “Neurogene – Integrated simulation of gene regulation, neural activity and neurodevelopment”
  • J.P. Thivierge & G. Marcus, WCCI06: genetics, growth, & environment
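A minimal sketch of the kind of coupling these neuro-genetic models describe: a tiny gene regulatory network evolves over time, and one gene's expression level modulates a neuron parameter. The network size, interaction matrix, and the choice of firing threshold as the modulated parameter are all illustrative assumptions, not details from the cited papers.

```python
import numpy as np

# Toy computational neuro-genetic model: a small gene regulatory network (GRN)
# whose expression levels modulate a spiking-neuron parameter over time.
# Gene count, weights, and the threshold mapping are invented for illustration.
rng = np.random.default_rng(1)
n_genes = 4
W = rng.normal(scale=0.3, size=(n_genes, n_genes))  # gene-gene interactions
g = np.full(n_genes, 0.5)                           # initial expression levels

def grn_step(g):
    """Discrete-time GRN update; sigmoid keeps expression in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W @ g)))

thresholds = []
for t in range(10):
    g = grn_step(g)
    thresholds.append(1.0 + g[0])   # gene 0 modulates the firing threshold
print(len(thresholds))              # one threshold value per time step
```

The point of the sketch is only the direction of influence: gene dynamics set neural parameters, so evolving the GRN indirectly shapes network behaviour.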

  4. I. Introduction – Genes and ANNs: ANN Challenges
  • Better, much faster learning algorithms
  • Initial specification and evolution of complex architectures
  • Plasticity versus memory
  • Robustness versus optimality
  • Pre-loading:
  • data -> functions -> knowledge -> behaviours
  • responses of: viruses, bacteria, microbes, plants
  • instinct of: animals, man

  5. I. Introduction – Genes and ANNs: Recurrent Neural Nets (RNNs)
  • Due to their recurrent connections, RNNs are a more powerful and general form of ANN
  • Problems for which we typically use RNNs are very challenging: modeling, control, approximate dynamic programming
  • Interpreting their final structure & weights is even more challenging than for most other ANNs
  • Chaotic NNs ...?
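The recurrent connection that gives RNNs their power can be shown in a few lines: the hidden state feeds back into itself, so the output at each step depends on the whole input history. This is a generic Elman-style cell sketch; the sizes, weights, and input sequence are illustrative, not from the talk.

```python
import numpy as np

# Minimal Elman-style recurrent cell: the hidden state h feeds back into
# the next update, which is what makes RNNs powerful and hard to train.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input -> hidden
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden -> hidden (recurrent)

def step(h, x):
    """One time step: the new state depends on the input AND the previous state."""
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(n_hidden)
for t in range(4):              # unroll over a short input sequence
    x = rng.normal(size=n_in)
    h = step(h, x)
print(h.shape)                  # (5,)
```

Training such a cell requires propagating gradients back through every `step` call, which is exactly why learning in RNNs is so much harder than in feedforward nets.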

  6. II. Inspiration for DNA-ANNs
  • Genetic – non-protein-coding DNA and RNA
  • Brain models
  • Artificial Neural Networks – trends

  7. II. Inspiration for DNA-ANNs: non-protein-coding RNA (npcRNA)
  Mattick, John S. (UQueensland) "The hidden genetic program of complex organisms", Scientific American, Oct04, pp60-67. See http://imbuq.edu.au/groups/mattick
  [Figure: RNA-processing pathway – a DNA gene (exons and introns) is transcribed into a primary RNA transcript; splicing yields assembled exonic RNA, processed into mRNA and translated into protein, while intronic RNA is either degraded and recycled or processed into microRNAs and other noncoding RNAs with gene-regulation and other functions. The traditional concept of genes covers only the exonic path.]
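The splicing step in that pathway is simple enough to sketch directly: introns are cut out of the primary transcript, exons are joined into mRNA, and the intronic RNA (Mattick's point) is available for regulatory roles rather than simply discarded. The gene layout and sequences below are invented for illustration.

```python
# Toy splicing of a primary transcript into mRNA plus intronic RNA.
# Segment kinds and sequences are illustrative, not a real gene.
primary_transcript = [("exon", "AUG"), ("intron", "GUAAGU"),
                      ("exon", "GCU"), ("intron", "GUCAGU"),
                      ("exon", "UAA")]

mrna = "".join(seq for kind, seq in primary_transcript if kind == "exon")
intronic_rna = [seq for kind, seq in primary_transcript if kind == "intron"]

print(mrna)          # AUGGCUUAA -> translated into protein
print(intronic_rna)  # retained introns: candidates for regulatory roles
```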

  8. II. Inspiration for DNA-ANNs: Mattick's Cambrian Complexity Explosion
  Mattick, John S. (UQueensland) "The hidden genetic program of complex organisms", Scientific American, Oct04, pp60-67. See http://imbuq.edu.au/groups/mattick
  [Figure: complexity of life versus time over the past 4,000 million years – eubacteria and archaea in the unicellular world, single-celled eukaryotes, then a steep rise in complexity with the multicellular world of animals, plants, and fungi. Annotation: origin of a new regulatory system?]

  9. II. Inspiration for DNA-ANNs: a finite automaton from DNA mechanisms
  Shapiro & Benenson, “Bringing DNA computers to life”, Scientific American, May06, pp45-51
  [Figure: diagnostic DNA-computer molecule – an active yes-yes software molecule with the FokI enzyme, software strands 1 and 2, disease-associated mRNA from genes 1-4, a diagnostic molecule, and an inactive drug held by a protector strand.]
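The logic of that diagnostic automaton can be sketched as an ordinary two-state machine: the computation starts in a "yes" state and stays there only while each disease-associated mRNA marker is detected; a single missing indicator drives it to an absorbing "no" state and the drug stays inactive. The marker names are hypothetical.

```python
# Sketch of the yes-yes diagnostic logic of a Shapiro-Benenson-style DNA
# automaton. Marker names are invented; only the state logic is the point.
def diagnose(markers_present, required_markers):
    state = "yes"
    for marker in required_markers:
        if marker not in markers_present:
            state = "no"        # one missing indicator blocks drug release
            break
    return state == "yes"       # True -> protector strand removed, drug active

required = {"mRNA-1", "mRNA-2", "mRNA-3", "mRNA-4"}
print(diagnose({"mRNA-1", "mRNA-2", "mRNA-3", "mRNA-4"}, required))  # True
print(diagnose({"mRNA-1", "mRNA-3"}, required))                      # False
```

What makes the biological version remarkable is that this loop is executed by hybridization and FokI cleavage rather than by a processor.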

  10. II. Inspiration for DNA-ANNs: March of the Penguins
  S. Pinker, The language instinct: how the mind creates language, New York: William Morrow & Company, 1994; Perennial Classics edition, 2000
  JUST instinct?

  11. II. Inspiration for DNA-RNNs: Models of the Brain
  • Sensory systems
  • Motor
  • Memory
  • Cognition, planning
  • Behaviours

  12. II. Inspiration for DNA-ANNs: Trends with ANNs
  • Local, incremental learning approaches – neural gas models, evolving connectionist systems
  • Multi-phase ANN architectures – extreme learning machines, echo state networks
  • Ensemble solutions – and hierarchies, networks
  • Signal processing & information theoretics
  • Recurrent Neural Networks (RNNs)
  • Evolution of ANNs
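The "multi-phase" trend is worth a concrete sketch. In an echo state network the recurrent reservoir is fixed and random (scaled to a spectral radius below 1), and only a linear readout is trained, which sidesteps the hard RNN training problem entirely. The reservoir size, scaling, and next-step-prediction task below are illustrative assumptions.

```python
import numpy as np

# Minimal echo state network: fixed random reservoir, trained linear readout.
rng = np.random.default_rng(2)
n_res, T = 50, 200
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
w_in = rng.normal(size=n_res)

u = np.sin(np.linspace(0, 8 * np.pi, T))          # input signal
target = np.roll(u, -1)                           # toy task: predict next value

X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])              # reservoir update (untrained)
    X[t] = x

# Train ONLY the readout, by ordinary least squares.
w_out, *_ = np.linalg.lstsq(X[:-1], target[:-1], rcond=None)
pred = X[:-1] @ w_out
mse = np.mean((pred - target[:-1]) ** 2)
print(mse)
```

The design choice is the trade: give up adapting the recurrent weights in exchange for a training phase that is a single linear solve.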

  13. III. What might we hope to achieve with DNA-ANNs?
  • Starting with the right answer!
  • Higher levels of abstraction
  • Rapid and effective:
  • learning (generalisations)
  • evolution (restructure for strategies)
  • Resource utilisation – reuse of "modules"
  • Control & ADP – faster, more reliable, more robust

  14. III. What might we hope to achieve with DNA-ANNs? Starting with the right answer
  • Trivial solution – give me the answer and I'll solve the problem ultra fast!
  • Measures of problem similarity – perhaps at higher levels of abstraction, especially when the data appears dissimilar (reminiscent of the generality of signal processing)

  15. III. What might we hope to achieve with DNA-ANNs? Higher levels of abstraction
  • Problems – decompose & modularise. For example, ANNs can regenerate learned images from noisy data. Can a similar feat be accomplished for problem decomposition/modularisation at abstract levels, to help evolve ensembles of ANNs?
  • Occam's razor (the simplest model that explains the data) may NOT always be a good approach with complex systems!?
  • Meaning/logic – as emergent properties

  16. III. What might we hope to achieve with DNA-ANNs? Rapidity, Resources
  • Rapid, effective, safe: training -> learning -> evolving (fit -> generalize -> strategize)
  • Resource utilisation – re-utilize "functional and connecting modules", functional overloading, multiple simultaneous hypotheses

  17. III. What might we hope to achieve with DNA-ANNs? Non-linear dynamical systems: Modeling and Control
  Perhaps the biggest payback for DNA-ANNs would be their application to the special, but important, case of RNNs.

  18. IV. Recommendations and Star-Gazing
  Question: Are current algorithms for learning and evolving adequate for more complex hierarchies and ensembles of ANNs, and for more abstract capabilities?
  • To some extent – yes?
  • I suspect that we are also looking for additional formulations, and that to some extent their initial development may depend on having a set of powerful, predictable and robust "modules" as a starting point.
  • Two examples of what this might connect to:
  • Local and global brain models – elegant, powerful ways of building systems
  • Classical AI and symbolic logic are an extreme example of "new" learning formulations for higher-level-abstraction ANNs

  19. IV. Recommendations and Star-Gazing: Artificial Neural Networks (ANNs)
  Existing CI capabilities are a basis:
  1. Start with a "small-world universal function approximation" collection of ANN and RNN modules (custom built or selected from a variety of problem solutions)
  2. Develop "generic interfaces" between combinations of two or more modules, or modules of modules
  3. Develop "problem formulation/classification" capabilities (rules, evolutionary strategies, etc.)
  4. ANN phase changes (crystalline -> gaseous)
  5. Develop learning/evolving strategies that can do points 1 to 4 above
  6. Chaos – perhaps scramble through state-space, but DON'T get locked in to pre-existing structures
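The "modules plus generic interfaces" idea above can be sketched with an ordinary function-composition pattern: give every module one uniform call signature, and the same modules can then be chained in series or pooled into an ensemble without knowing each other's internals. The interface and the trivial modules are hypothetical stand-ins for trained ANN/RNN modules.

```python
from typing import Callable, Sequence

# Generic interface: every module maps a value to a value. Real modules
# would be trained ANNs/RNNs; these stand-ins just make the wiring visible.
Module = Callable[[float], float]

def chain(modules: Sequence[Module]) -> Module:
    """Compose modules in series ("modules of modules")."""
    def composed(x: float) -> float:
        for m in modules:
            x = m(x)
        return x
    return composed

def ensemble(modules: Sequence[Module]) -> Module:
    """Average the outputs of an ensemble of modules."""
    return lambda x: sum(m(x) for m in modules) / len(modules)

double: Module = lambda x: 2 * x
shift: Module = lambda x: x + 1

pipeline = chain([double, shift])      # reuse the same modules...
committee = ensemble([double, shift])  # ...in different structures
print(pipeline(3.0))   # 2*3 + 1 = 7.0
print(committee(3.0))  # (6 + 4) / 2 = 5.0
```

Because `chain` and `ensemble` themselves return a `Module`, structures nest freely, which is the property a learning or evolving strategy would exploit.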

  20. IV. Recommendations and Star-Gazing: Recurrent Neural Networks
  • My feeling is that, because of their great power and the difficulty of rapidly training them, RNNs offer a challenge for which DNA-RNNs may show tangible, qualitative benefits beyond merely speeding up training and providing good generalisation.
  • Question: Will the "genetic specification" of DNA-RNNs beat hand-crafted libraries (likely the starting point)?
  • Play with & observe DNA-RNNs

  21. V. Conclusions
  Biological computational/processing capabilities have always been the holy grail of advanced computing. Each advance brings us to an awareness of the next level of concepts, and this process may go on for a long time.... Right now, the genetics revolution is suggestive of DNA-RNNs.
