Self-Organized Recurrent Neural Learning for Language Processing

Presentation Transcript


  1. Self-Organized Recurrent Neural Learning for Language Processing
  www.reservoir-computing.org
  April 1, 2009 - March 31, 2012
  Status as of June 2009

  2. The task
  [Slide figure: writing/speech source → feature stream → AI machine]
  • Speech and handwriting recognition = essentially the same problem
  • Humans can do it, but only after years of learning: thus, a very difficult problem
  • No human-level AI solution in sight

  3. Mission
  Establish neurodynamical architectures as a viable alternative to statistical methods for speech and handwriting recognition.
  [Slide figures: from Dominey et al. 1995; from Rabiner 1990, a classical speech recognition tutorial]
  State of the art:
  • Speech recognition = a statistical data analysis problem
  • Leads to data-driven, feedforward "serial" learning and representation techniques (HMMs)
  • Performance appears to asymptote well below human performance
  ORGANIC alternative:
  • Speech recognition = an achievement of human brains
  • Leads to neural computation and cognitive neuroscience modelling with recurrent dynamics (cyclic top-down and bottom-up paths)
  • Potential to come closer to human performance

  4. Basic paradigm: reservoir computing (RC)
  • Also known as Echo State Networks and Liquid State Machines
  • Discovered in 2000, now an established paradigm in computational neuroscience and machine learning
  • RC makes, for the first time, training of recurrent neural networks practically feasible: a major enabling technology
  • RC is biologically plausible
  • The consortium comprises pioneers and leading investigators of the RC field
  Principle of RC (a minimal code sketch follows this slide):
  • Use a large, fixed, random recurrent network as an excitable medium
  • Excite it with the input signal
  • Read out the desired output through trainable output weights (drawn in red on the slide)
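  To make the three-step principle concrete, here is a minimal, self-contained echo state network in Python. All specifics are illustrative assumptions (network size, spectral radius 0.9, ridge parameter, and a toy next-step sine-wave prediction task); this is a generic sketch of the RC recipe, not code from the project engine.

```python
# Minimal echo state network (ESN), illustrating the RC principle:
# fixed random reservoir, input-driven dynamics, trained linear readout.
# All constants below are illustrative assumptions, not project settings.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
washout, ridge = 100, 1e-6

# 1. Large, fixed, random recurrent network as excitable medium.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9 (< 1)

# 2. Excite the reservoir with an input signal (toy task: predict the
#    next sample of a sine wave).
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
y = np.roll(u, -1, axis=0)  # target: input shifted one step ahead
x = np.zeros(n_res)
states = np.empty((len(u), n_res))
for t in range(len(u)):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# 3. Train only the linear readout (ridge regression), discarding an
#    initial washout and the wrapped-around last target sample.
X, Y = states[washout:-1], y[washout:-1]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = X @ W_out
print("train NRMSE:", float(np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y)))
```

  Note that training touches only the readout weights W_out; the random input and recurrent weights stay fixed, which is what makes RC training cheap and stable compared to full recurrent-network training.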

  5. Scientific objectives
  • Basic blueprints: design and proof-of-principle tests of fundamental architecture layouts for hierarchical neural systems that can learn multi-scale sequence tasks.
  • Reservoir adaptation: investigate mechanisms of unsupervised adaptation of reservoirs (see the sketch after this list).
  • Spiking vs. non-spiking neurons, role of noise: clarify the functional implications of spiking vs. non-spiking neurons and the role of noise.
  • Single-shot model extension, lifelong learning capability: develop learning mechanisms that allow a learning system to be extended in “single-shot” learning episodes, enabling lifelong learning.
  • Working memory and grammatical processing: extend the basic paradigm with a neural, index-addressable working memory.
  • Interactive systems: extend the adaptive capabilities of human-robot cooperative interaction systems with on-line and lifelong learning capabilities.
  • Integration of dynamical mechanisms: integrate biological mechanisms of learning, optimization, adaptation, and stabilization into coherent architectures.
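  As one concrete example of the reservoir-adaptation objective, below is a hedged sketch of intrinsic plasticity (IP), a published unsupervised rule (after Schrauwen et al., 2008) that tunes each neuron's gain and bias so its output distribution approaches a target Gaussian. The learning rate, target moments, and the single-neuron toy usage are assumptions for illustration; the project investigates such mechanisms, but this is not its implementation.

```python
# Hedged sketch of intrinsic plasticity (IP), one unsupervised
# reservoir-adaptation rule (after Schrauwen et al., 2008): each tanh
# neuron adapts a gain a and bias b so that its output distribution
# approaches a Gaussian with mean mu and std sigma. Learning rate and
# target moments below are illustrative assumptions.
import numpy as np

def ip_step(a, b, net, y, eta=1e-3, mu=0.0, sigma=0.2):
    """One IP update for a neuron with output y = tanh(a * net + b)."""
    s2 = sigma ** 2
    db = -eta * (-mu / s2 + (y / s2) * (2 * s2 + 1 - y ** 2 + mu * y))
    da = eta / a + db * net
    return a + da, b + db

# Toy usage: adapt a single neuron driven by Gaussian net input.
rng = np.random.default_rng(1)
a, b = 1.0, 0.0
outputs = []
for _ in range(20000):
    net = rng.normal()
    y = np.tanh(a * net + b)
    a, b = ip_step(a, b, net, y)
    outputs.append(y)
print("adapted gain/bias:", round(a, 3), round(b, 3))
print("output std (target 0.2):", round(float(np.std(outputs[-5000:])), 3))
```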

  6. Community service and dissemination objectives
  • High-performing, well-formalized core engine: collaborative development of a well-formalized, high-performing core engine, which will be made publicly accessible.
  • Comply with FP6 unification initiatives: ensure that the engine integrates with the standards set in the FACETS FP6 IP and with other existing code.
  • Benchmark repository: create a database of temporal, multi-scale benchmark data sets that can serve as an international touchstone for comparing algorithms.

  7. Consortium

  8. Workpackages and collaboration scheme
