

  1. Symbolic Encoding of Neural Networks using Communicating Automata with Applications to Verification of Neural Network Based Controllers* Li Su, Howard Bowman and Brad Wyble Centre for Cognitive Neuroscience and Cognitive Systems, University of Kent, Canterbury, Kent, CT2 7NF, UK {ls68,hb5,bw5}@kent.ac.uk *To Appear in Neural-Symbolic Learning and Reasoning Workshop at Nineteenth International Joint Conference on Artificial Intelligence, EDINBURGH, SCOTLAND, 2005.

  2. Outline
  • Background:
    • Symbolic Computation
    • Sub-symbolic Computation
  • Motivation for integrating Symbolic and Sub-symbolic Computation:
    • Cognitive Viewpoint
    • Application Viewpoint
  • Formal Methods:
    • Model Checking
  • Specification
  • Properties
  • Result
  • Summary

  3. Background 1: Symbolic Computation
  • Traditional symbolic computation:
    • Systems have explicit elements that correspond to symbols, organised in systematic ways and representing information in the external world.
    • Programs or rules can manipulate these symbolic representations.
  • Key characteristic: symbol manipulation.

  4. Background 2: Sub-symbolic Computation
  • Connectionism/neural networks are computational models inspired by neuron physiology, and can be regarded as sub-symbolic computation:
    • They use massively parallel, simple and uniform processing elements, which are interconnected.
    • Representations are distributed throughout the processing elements.

  5. Motivation 1: Cognitive Viewpoint
  • It has been argued that cognition/mind can be regarded as symbolic computation (e.g. SOAR, ACT-R and EPIC).
  • Sub-symbolic (neural network) architectures constitute abstract models of the human brain.

  6. Motivation 1: Cognitive Viewpoint (cont.)
  • Combine symbolic and sub-symbolic techniques to specify and justify the behaviour of complex cognitive architectures in an abstract and suitable form:
    • Concurrent, Distributed Control, Hierarchical Decomposition.
  • How do high-level cognitive properties emerge from interactions between low-level neuron components?
  • Our approach is to encode and reason about cognitive systems or neural networks in symbolic form:
    • E.g. Formal Methods.
    • Automatic mathematical analysis can be applied.

  7. Motivation 2: Application Viewpoint
  • Connectionist networks can be applied to extending traditional controllers in order to handle:
    • Catastrophic changes
    • Gradual degradation
    • Complex and highly non-linear systems, e.g. aircraft, spacecraft or robots
  • The reliability/stability of adaptive systems (neural networks) needs to be guaranteed in safety/mission critical domains.
  • However, connectionist models rarely provide an indication of the accuracy or reliability of their predictions.

  8. Formal Methods: Model Checking
  • An automatic analysis technique that can be applied at the system design stage.
  • Checks whether a formal specification satisfies a set of properties, which are expressed in a requirements language.
  • Inputs: specification + properties → Model Checker → Output: Yes + witness / No + counter-example.
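
To make the input/output relation above concrete, here is a minimal Python sketch of reachability checking over an explicit-state model. The names (check_reachability, successors, is_goal) and the plain breadth-first search are illustrative assumptions, not the tool or encoding used in the talk; real model checkers also handle timed and symbolic state spaces.

# Minimal sketch of a reachability check on an explicit-state model
# (illustrative only; not the paper's model checker).
from collections import deque

def check_reachability(initial, successors, is_goal):
    """Breadth-first search for a state satisfying the property.

    Returns (True, witness_trace) if a goal state is reachable,
    otherwise (False, None) after exhausting the state space.
    """
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if is_goal(state):
            return True, trace          # "Yes" + witness
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return False, None                  # "No": property unreachable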

  9. An Example of a Neural Network Specification
  • [Figure: a NeuralNet with an Input Layer (I1, I2), a Hidden Layer (H1, H2) and an Output Layer (O1), together with Tester and Environment components.]
  • Note: this is not a realistic model of a controller, but a “toy” model to evaluate the ability of model checking neural networks.
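
For orientation, below is a hedged Python sketch of the kind of 2-2-1 network the figure describes, trained on XOR with backpropagation. The layer sizes follow the figure; the learning rate, initialisation and epoch count are assumptions, and the verified model in the talk is an automata encoding rather than this floating-point simulation.

# Sketch of a 2-2-1 sigmoid network trained on XOR with batch BP
# (illustrative assumptions: eta, initialisation, epoch count).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 2))   # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5                                  # learning rate (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(20000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    # backward pass: standard BP deltas for sigmoid units
    dO = (O - T) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= eta * (H.T @ dO); b2 -= eta * dO.sum(axis=0)
    W1 -= eta * (X.T @ dH); b1 -= eta * dH.sum(axis=0)

sse = float(((O - T) ** 2).sum())          # sum of squared errors
print("SSE after training:", sse)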

  10. Neuron Automaton
  • [Figure: the neuron automaton, with Input, Middle and Output locations.]
  • Notation: k: identity of the neuron; t: local clock; i: pre-synaptic neuron identity; j: post-synaptic neuron identity; plus symbols for the activations of neurons i and k, the speed of update, the sigmoid function, the net input, the weight, the error and the learning rate.
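
For reference, the listed quantities combine in the standard way shown below; the symbols ($a$, $net$, $w$, $\sigma$, $\delta$, $\eta$, $\lambda$) are assumed here for illustration and need not match the paper's notation.

\[
net_k(t) = \sum_i w_{ki}\, a_i(t), \qquad
\sigma(x) = \frac{1}{1 + e^{-x}},
\]
\[
a_k(t+1) = a_k(t) + \lambda\,\bigl(\sigma(net_k(t)) - a_k(t)\bigr), \qquad
\Delta w_{ki} = \eta\, \delta_k\, a_i,
\]

where $\lambda$ is the speed of update, $\delta_k$ is the back-propagated error for neuron $k$, and $\eta$ is the learning rate.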

  11. Requirements Language

  12. Requirements Language (cont.)
  • Reachability Properties:
    • E.g.
  • Safety Properties:
    • E.g.
  • Liveness Properties:
    • E.g.
  • Note: the state formula success is true when the SSE (sum of squared errors) is less than a specified value.
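
As an illustration only, properties of these three kinds are typically written in a TCTL-style notation along the following lines; the concrete syntax and the use of a deadline bound are assumptions here, not the formulae from the talk.

\[
\begin{aligned}
&\text{Reachability: } \mathrm{E}\Diamond\,\mathit{success} && \text{(some run can reach a success state)}\\
&\text{Safety: } \mathrm{A}\Box\,\lnot(\mathit{deadline} \wedge \lnot\mathit{success}) && \text{(the deadline is never passed without success)}\\
&\text{Liveness: } \mathrm{A}\Diamond\,\mathit{success} && \text{(every run eventually reaches success)}
\end{aligned}
\]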

  13. Result
  • [The verified formulae refer to the state formulae deadline and success.]
  • The network satisfies the following properties and is guaranteed to learn XOR within the required timing constraints using BP learning. It is also guaranteed that the learning process eventually stabilises. ……

  14. Summary
  • Formal methods are justifiable techniques for representing low-level neural networks. They can also help us understand how high-level cognitive properties emerge from interactions between low-level neuron components.
  • Formal methods may allow neural networks within engineering applications to be specified and justified at the system design stage.
  • Verification may give theoretically well-founded ways to evaluate and justify learning methods. Some properties can be hard to justify by simulation:
    • Simulations can only test that something occurs, but are unable to test that something can never occur, without explicit mathematical analysis. (An open issue.)
