
Learning from Inconsistencies in an Integrated Cognitive Architecture

The First Conference on Artificial General Intelligence (AGI-08), March 1st, 2008. Kai-Uwe Kühnberger (with Peter Geibel, Helmar Gust, Ulf Krumnack, Ekaterina Ovchinnikova, Angela Schwering, Tonio Wandmacher).





  1. Learning from Inconsistencies in an Integrated Cognitive Architecture
  Kai-Uwe Kühnberger (with Peter Geibel, Helmar Gust, Ulf Krumnack, Ekaterina Ovchinnikova, Angela Schwering, Tonio Wandmacher)
  Universität Osnabrück
  The First Conference on Artificial General Intelligence (AGI-08), Memphis, March 1st, 2008

  2. Overview
  • Introduction
    • Learning in Cognitive Systems
  • The I-Cog Architecture
    • General Overview of the System
  • Learning from Inconsistencies
    • General Remarks
    • Learning from Inconsistencies in Analogy Making and the Overall System
  • Conclusions

  3. Introduction: Learning in Cognitive Systems

  4. Learning
  • Usually, cognitive architectures are based on a number of different modules.
    • Example: hybrid systems.
  • Obviously, coherence problems and consistency clashes can occur, particularly in hybrid systems.
  • In hybrid architectures, two main questions arise:
    • On which level should learning be implemented?
    • What are plausible strategies for resolving inconsistencies?
  • Idea of this talk: use occurring inconsistencies as a mechanism (trigger) for learning.

  5. The I-Cog Architecture: General Overview

  6. A Proposal: I-Cog
  • I-Cog is a modular system consisting of three main modules:
    • Analogy Engine (AE)
      • Claim: AE is able to cover a variety of different reasoning abilities.
    • Ontology Rewriting Device (ORD)
      • Claim: ontological background knowledge needs to be implemented in a way that allows dynamic updates.
    • Neuro-Symbolic Learning Device (NSLD)
      • Claim: NSLD enables robust learning of symbolic theories from noisy data.
  • Finally, these three modules interact in a non-trivial way and are governed by a heuristic-driven Control Device (CD).
  • Kühnberger, K.-U. et al. (2007): I-Cog: A Computational Framework for Integrated Cognition of Higher Cognitive Abilities, in Proceedings of MICAI 2007, LNAI 4827, pp. 203-214, Springer.

  7. The Overall I-Cog Architecture

  8. Learning in I-Cog
  • Learning is based on occurring inconsistencies.
    • In the case of ORD, rewriting algorithms make sure that inconsistencies are resolved (where this is possible).
      • Ovchinnikova, E. & Kühnberger, K.-U. (2007). Debugging Automatically Extended Ontologies, GLDV-Journal for Computational Linguistics and Language Technology, 23(2):19-33.
    • NSLD is a learning device in which weights are adjusted based on backpropagation of errors.
      • Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory, in P. Hitzler & B. Hammer (eds.): Perspectives of Neural-Symbolic Integration, Series “Computational Intelligence”, Springer, pp. 209-240.
    • In the case of AE, many adaptation processes can be reduced to occurring inconsistencies.
  • Claim 1: learning is distributed over the whole system.
  • Claim 2: learning takes place because errors / inconsistencies occur, triggering an adaptation process.
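The error-triggered adaptation idea can be illustrated with a minimal sketch. This is not the topos-theory-based NSLD of Gust et al.; a single linear unit with a delta-rule update stands in for it, and all names are illustrative. The point is only that the weight update fires exactly when the prediction is inconsistent with the observed target:

```python
# Minimal illustration (not the actual NSLD): a single linear unit whose
# weights change only when its prediction clashes with the target --
# the inconsistency itself triggers the adaptation step.

def train_step(weights, inputs, target, lr=0.1):
    """Delta-rule update: no error, no learning."""
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = target - prediction
    if error == 0:                    # consistent: nothing to adapt
        return weights
    # inconsistency detected: adapt each weight proportionally to the error
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, [1.0, 2.0], 1.0)
# the repeated error signal drives the unit toward reproducing the target
```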

  9. Learning from Inconsistencies: The Example of Analogical Reasoning

  10. General Remarks
  • Inconsistencies are classically connected to logic:
    • If, for a set of axioms Γ (relative to a language L), φ can be entailed and ¬φ can be entailed, then Γ is inconsistent.
  • We use the term “inconsistency” rather loosely and do not restrict this concept to logic. Here are some examples:
    • Every analogy establishes a relation that resolves a clash of concepts, information, interpretations, etc.
      • Gust, H. & Kühnberger, K.-U. (2006). Explaining Effective Learning by Analogical Reasoning, 28th Annual Conference of the Cognitive Science Society, pp. 1417-1422.
    • Ontology generation / learning.
      • Ovchinnikova, E., Wandmacher, T. & Kühnberger, K.-U. (2007). Solving Terminological Inconsistency Problems in Ontology Design, IBIS 4:65-80.
    • Non-monotonicity effects in reasoning.
      • Ovchinnikova, E. & Kühnberger, K.-U. (2006). Adaptive ALE-TBox for Extending Terminological Knowledge, in Proceedings of AI’06, LNAI 4304, Springer, pp. 1111-1115.
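The classical trigger condition can be sketched for a toy propositional knowledge base, with literals as strings and `~` marking negation. All names here are illustrative, and the actual modules of course work over far richer logics; the sketch only shows how a clash is detected:

```python
# Toy version of the classical notion: a set of literals is inconsistent
# if it asserts both an atom and its negation. Detecting such a clash is
# exactly the kind of event that triggers learning / revision.

def inconsistent(literals):
    """Return the set of atoms asserted both positively and negatively."""
    positive = {l for l in literals if not l.startswith("~")}
    negative = {l[1:] for l in literals if l.startswith("~")}
    return positive & negative

kb = {"bird(tweety)", "flies(tweety)", "~flies(tweety)"}
clashes = inconsistent(kb)    # {"flies(tweety)"}: a trigger for revision
```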

  11. The Analogy Engine
  • The Analogy Engine is based on Heuristic-Driven Theory Projection (HDTP).
    • HDTP is a mathematically sound theory of computing analogies.
    • It is based on anti-unification of a source theory ThS and a target theory ThT.
    • It has been applied to various domains such as naïve physics, metaphors, geometric figures, etc.
  • Some features:
    • Complex formulas can be anti-unified.
    • A theorem prover allows the re-representation of formulas.
    • Whole theories can be generalized.
    • The involved processes are governed by heuristics.
  • Gust, H., Kühnberger, K.-U. & Schmid, U. (2006). Metaphors and Heuristic-Driven Theory Projection (HDTP), Theoretical Computer Science, 354:98-117.
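The core generalization step can be sketched as plain first-order anti-unification. HDTP itself uses a restricted *higher-order* variant governed by heuristics (Krumnack et al. 2007); this simplified version, with an assumed term representation as nested tuples, only shows the basic idea: identical structure is kept, clashes become variables, and the collected substitutions recover each input term:

```python
# First-order anti-unification sketch: terms are strings (constants,
# variables) or tuples ("f", arg1, ...). `subs` maps each clashing pair
# of subterms to the fresh variable that generalizes it.

def anti_unify(s, t, subs, counter):
    """Return a generalization of s and t; `subs` collects the clashes."""
    if s == t:                                    # identical parts are kept
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # same function symbol and arity: generalize argument-wise
        return (s[0],) + tuple(anti_unify(a, b, subs, counter)
                               for a, b in zip(s[1:], t[1:]))
    if (s, t) not in subs:                        # clash: fresh variable
        counter[0] += 1
        subs[(s, t)] = f"X{counter[0]}"
    return subs[(s, t)]

subs = {}
g = anti_unify(("add", "0", "x"), ("add", ("s", "0"), "x"), subs, [0])
# g == ("add", "X1", "x"); substituting X1 -> 0 recovers the first term,
# X1 -> s(0) the second -- the analogue of the Theta substitutions above.
```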

  12. Recursion Example I
  For the generalized theory, the following substitutions need to be established:
  Θ1: E ↦ 0, Op1 ↦ add, Op2 ↦ s
  Θ2: E ↦ s(0), Op1 ↦ mult, Op2 ↦ λz.add(x, z)

  13. Recursion Example II
  • Trying to anti-unify axiom 1 of the source with axiom 1 of the target directly is not possible. But by using axioms 1 and 2 we can derive:
    mult(s(0), x) = add(x, mult(0, x)) = add(x, 0) = … = add(0, x)
  • Hence we can derive axiom 3:
    ∀x: mult(s(0), x) = x
  • For the generalized theory, the following substitutions can be established:
    Θ1: E ↦ 0, Op ↦ add  and  Θ2: E ↦ s(0), Op ↦ mult
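The derived theorem can be checked mechanically against the standard Peano-style recursion equations for addition and multiplication (the concrete form of the slide's axioms is assumed here; numerals are s(s(...s(0)...))):

```python
# Replaying the derivation: with add(0,x)=x, add(s(y),x)=s(add(y,x)),
# mult(0,x)=0 and mult(s(y),x)=add(x,mult(y,x)), the theorem
# forall x: mult(s(0), x) = x holds, as the slide derives.

def add(m, n):                 # add(0, x) = x;  add(s(y), x) = s(add(y, x))
    return n if m == "0" else ("s", add(m[1], n))

def mult(m, n):                # mult(0, x) = 0; mult(s(y), x) = add(x, mult(y, x))
    return "0" if m == "0" else add(n, mult(m[1], n))

def num(k):                    # build the Peano numeral for the integer k
    return "0" if k == 0 else ("s", num(k - 1))

one = ("s", "0")
assert all(mult(one, num(k)) == num(k) for k in range(10))  # mult(s(0), x) = x
```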

  14. Conclusion
  • Main claims:
    • In cognitive architectures, “inconsistencies” (in the broad sense used here) should be considered as a trigger for learning and adaptation.
    • These adaptation processes can be relevant for:
      • adapting background knowledge,
      • reasoning processes of various types,
      • neuro-based learning approaches.
  • Learning in the system is therefore distributed and continuously realized.

  15. Thank you very much! Questions?

  16. References
  • Analogical Reasoning (Selection)
    • Gust, H., Kühnberger, K.-U. & Schmid, U. (2006). Metaphors and Heuristic-Driven Theory Projection (HDTP), Theoretical Computer Science, 354:98-117.
    • Gust, H. & Kühnberger, K.-U. (2006). Explaining Effective Learning by Analogical Reasoning, in R. Sun & N. Miyake (eds.): 28th Annual Conference of the Cognitive Science Society, Lawrence Erlbaum, pp. 1417-1422.
    • Gust, H., Krumnack, U., Kühnberger, K.-U. & Schwering, A. (2007). An Approach to the Semantics of Analogical Relations, in S. Vosniadou et al. (eds.): Proceedings of EuroCogSci 2007, Lawrence Erlbaum, pp. 640-645.
    • Krumnack, U., Schwering, A., Gust, H. & Kühnberger, K.-U. (2007). Restricted Higher-Order Anti-Unification for Analogy Making, to appear in Proceedings of AI’07, Springer.
    • Gust, H., Krumnack, U., Kühnberger, K.-U. & Schwering, A. (2008). Analogical Reasoning: A Core of Cognition, to appear in Künstliche Intelligenz 1/2008.

  17. References
  • Neuro-Symbolic Integration (Selection)
    • Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning and Memorizing Models of Logical Theories in a Hybrid Learning Device, to appear in Proceedings of ICONIP 2007, Springer.
    • Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory, in P. Hitzler & B. Hammer (eds.): Perspectives of Neural-Symbolic Integration, Series “Computational Intelligence”, Springer, pp. 209-240.
  • Ontology Rewriting (Selection)
    • Ovchinnikova, E. & Kühnberger, K.-U. (2007). Debugging Automatically Extended Ontologies, GLDV-Journal for Computational Linguistics and Language Technology, 23(2):19-33.
    • Ovchinnikova, E., Wandmacher, T. & Kühnberger, K.-U. (2007). Solving Terminological Inconsistency Problems in Ontology Design, International Journal of Interoperability in Business Information Systems, 4:65-80.
    • Ovchinnikova, E. & Kühnberger, K.-U. (2006). Adaptive ALE-TBox for Extending Terminological Knowledge, in A. Sattar & B. H. Kang (eds.): Proceedings of AI’06, LNAI 4304, Springer, pp. 1111-1115.

  18. References
  • I-Cog
    • Kühnberger, K.-U., Geibel, P., Gust, H., Krumnack, U., Ovchinnikova, E., Schwering, A. & Wandmacher, T. (2008): Learning from Inconsistencies in an Integrated Cognitive Architecture, to appear in Proceedings of AGI 2008, IOS Press.
    • Kühnberger, K.-U. (2007): Principles for the Foundation of Integrated Higher Cognition (Abstract), in D. S. McNamara & J. G. Trafton (eds.): Proceedings of CogSci 2007, Austin, TX: Cognitive Science Society, p. 1796.
    • Kühnberger, K.-U., Wandmacher, T., Schwering, A., Ovchinnikova, E., Krumnack, U., Gust, H. & Geibel, P. (2007): I-Cog: A Computational Framework for Integrated Cognition of Higher Cognitive Abilities, in Proceedings of MICAI 2007, LNAI 4827, pp. 203-214, Springer.
    • Kühnberger, K.-U., Wandmacher, T., Schwering, A., Ovchinnikova, E., Krumnack, U., Gust, H. & Geibel, P. (2007): Modeling Human-Level Intelligence by Integrated Cognition in a Hybrid Architecture, in P. Hitzler, T. Roth-Berghofer & S. Rudolph (eds.): FAInt-07, Workshop at KI 2007, CEUR-WS, vol. 277, pp. 1-15.

  19. Members of the AI Group
  Peter Geibel, Karl Gerhards, Helmar Gust, Ulf Krumnack, Kai-Uwe Kühnberger, Jens Michaelis, Ekaterina Ovchinnikova, Angela Schwering, Konstantin Todorov, Ulas Türkmen, Tonio Wandmacher
