
Complex (Adaptive) Systems and Artificial General Intelligence
Richard Loosemore


Presentation Transcript


1. Complex (Adaptive) Systems and Artificial General Intelligence. Richard Loosemore.
Quick summary of my career:
1981: Physics. Preparing for a Ph.D. in quantum gravity (with John Taylor).
1981: Got hooked on neural nets by JT’s course (and GEB, and Rubik).
1982-92: Cognitive psychology and artificial intelligence.
1992-94: Research on connectionist and symbolic models of structured cognition.
1994-2004: Software engineering [Director of Research, Star Bridge Systems].
Recently returned to the AGI field: implementing the software development environment that will be used to test my theoretical work in AGI.

2. Complex (Adaptive) Systems
340 years ago Isaac Newton discovered the beauty and power of mathematical physics.
His reaction: awestruck, but he never assumed the entire universe worked that way (... he went back to alchemy).
Everyone else: awestruck, and DID assume the entire universe worked that way.
Mathematics is a drug: so elegant and perfect that some people cannot live without it.
In the AGI field (and in cognitive science) we have to get over this addiction. Time to go back and do some ALCHEMY.

3. Claims of This Paper
1) Complex Adaptive Systems (CAS) research has implications for AGI (the prevailing strategy is misguided).
2) Specifically: we need to shed our addiction to analytical mathematics.
3) More positively:
- we need to subject our theoretical ideas to empirical scrutiny,
- do this by building AGIs that are COMPLETE,
- and do it in a way that allows SYSTEMATIC comparisons.
4) More specifically still:
- we need a SOFTWARE DEVELOPMENT ENVIRONMENT ...
- ... based on a coherent FRAMEWORK for cognition ...
- ... which allows us to build systems quickly ...
- ... and rigorously compare them.

4. The Take-Home Message
1) There is ANOTHER WAY to approach Artificial General Intelligence: a radically different way that has never been tried before.
2) Reasons to believe this approach could:
a) give extremely rapid results,
b) be relatively easy to do (previous poor results: self-sabotage).
3) WATCH YOUR BACKS. You may ignore everything in this paper; someone else, somewhere in the world, may not. They (unlike me) may not care about the Friendliness Problem. (They may not care too much for your way of life, either.)

5. Complex Adaptive Systems
Typically:
- a large number of elements,
- interactions between the elements,
- adaptation by the elements,
- sensitivity to a changing environment.
Then find a mix of the above that:
- does not go to zero activity,
- does not go into a saturation or lock-up state,
- just burbles along somewhere between these extremes.
It turns out such mixes are relatively easy to find (many, many examples).
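To make the "mix that burbles along" idea concrete, here is a minimal sketch of my own (not from the paper): a toy population of randomly coupled binary elements, swept over a single coupling parameter to locate the regime between dead and saturated activity. The model, parameter values, and thresholds are all assumptions chosen for illustration; adaptation by the elements is left out for brevity.

```python
import random

def run_toy_cas(coupling, n=200, steps=300, seed=0):
    """Toy complex system: n binary elements, each watching 4 random neighbours.
    An element switches on with probability `coupling` when at least one of its
    neighbours is on, otherwise with a tiny spontaneous probability.
    Returns the mean activity over the final 50 steps."""
    rng = random.Random(seed)
    state = [rng.random() < 0.5 for _ in range(n)]
    neighbours = [rng.sample(range(n), 4) for _ in range(n)]
    history = []
    for _ in range(steps):
        # Build the whole next state from the current one before replacing it.
        state = [
            rng.random() < (coupling if any(state[j] for j in neighbours[i]) else 0.001)
            for i in range(n)
        ]
        history.append(sum(state) / n)
    return sum(history[-50:]) / 50

# Sweep the coupling: low values collapse to near-zero activity, high values
# lock up near saturation, and the interesting regime sits in between.
for coupling in (0.1, 0.25, 0.5, 0.75, 0.95):
    print(f"coupling={coupling:.2f}  mean activity={run_toy_cas(coupling):.3f}")
```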

6. The CAS Observation
CAS seem to show regularities in their global behavior that are difficult or impossible to explain in terms of the local interactions between their elements.
This is the GLOBAL-LOCAL DISCONNECT (GLD).
NOTE: CAS researchers also hoped to find rules or formalisms that apply ACROSS many different types of CAS (because there are regularities at that level too). So far they have had little success, and some criticise the field for this failure. That criticism may be justified, but it has nothing to do with the GLD: the GLD stands.

7. The Global-Local Disconnect (GLD)
More rigorously:
Conjecture: there are large classes of systems for which there is no analytic theory that goes from the local to the global.
Or: maybe some future theory could be devised, but its size would exceed the complexity of the original system (does that still count as a theory?).
This can be formalized, but it is not the same as Kolmogorov/Solomonoff/Chaitin complexity.
The implication is that our naïve faith in mathematical physics is misplaced: there can be regularity without explanation. We just never noticed, because we selected only the linear aspects of the universe for study (and we were addicted to mathematics anyhow).
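To illustrate what "regularity without explanation" can look like, here is a small demonstration of my own (not part of the paper): Conway's Game of Life has a local rule that fits in a couple of lines, yet a random soup settles toward a characteristic long-run density of live cells, a stable global regularity that, as far as I know, is obtained by running the system rather than derived from the rule.

```python
import random

def life_step(grid):
    """One step of Conway's Game of Life on a toroidal grid of 0/1 cells."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            live = sum(grid[(r + dr) % h][(c + dc) % w]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return nxt

def soup_density(size=48, steps=300, fill=0.5, seed=0):
    """Run a random soup forward and report the final fraction of live cells."""
    rng = random.Random(seed)
    grid = [[1 if rng.random() < fill else 0 for _ in range(size)]
            for _ in range(size)]
    for _ in range(steps):
        grid = life_step(grid)
    return sum(map(sum, grid)) / (size * size)

# The local rule is fully specified above; the density the soup settles to is a
# global regularity discovered by simulation, not by reading the rule.
print(f"long-run density: {soup_density():.3f}")
```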

8. Implications of the GLD
Corollary: if you want a particular GLOBAL behavior, you cannot just build the local mechanisms by inspection.
There is no analytic theory in either direction:
local ----------- X -----------> global
global ----------- X -----------> local
QUESTION: if intelligent systems are complex adaptive systems (CAS), would observation of high-level cognition give us a direct line on the LOCAL mechanisms generating that high-level behavior?
CAS reply: NO!

9. Is Intelligence a Complex System?
1) Human intelligence:
(a) Large numbers of INTERACTING elements? CHECK (concepts)
(b) Adaptive? CHECK (learning)
(c) Sensitive to the environment? CHECK
Bonus: (d) Built by random trial and error, rather than layered design? CHECK (evolution)
2) AGI (by some non-humanlike design): (a) CHECK, (b) CHECK, (c) CHECK, (d) no!
But what makes us think that (d) can nullify the effect of (a) + (b) + (c)?

10. A Response to the CAS Critique
This is about:
- vague claims of “emergence”, or
- trying to force intelligence to happen by emergence.
BUT (the response goes) we don’t need to take any notice, because:
- we see the mechanisms that we build giving rise to intelligence and NO SIGNS of a disconnect (we understand the relationship between local and global);
- we see no signs of emergence in our systems;
- we see no need to try to INSERT some emergence, or to rely on it!
So we have empirical evidence pointing the opposite way. Maybe it is a problem with human intelligence: but we don’t do it that way.

11. The CAS Counter-Response
An explanation of what is happening here:
1) People are using intuition/insight to dig up some good mechanisms.
2) The way the GLD problem would manifest is not what is supposed above:
(a) Try to solve the GROUNDING PROBLEM (connect real symbols to the real world without humans in the loop) ..... that would show up the problem.
(b) Try to solve the LEARNING PROBLEM ......... ditto.
(c) Build COMPLETE SYSTEMS, not partial or narrow ones ........ ditto.
(We claim to be tackling (c) ... that is the whole point of AGI.)

12. The CAS Counter-Response (continued)
If we were falling foul of the GLD problem, what would we expect to find?
1) Some progress, then stagnation due to inappropriate extension of mechanisms.
2) Avoidance of the grounding and learning problems.
3) Plastering over of the cracks with probabilities.
4) Local (domain-specific) patches. Narrow AI.
5) Baroque complexification of theories. Ptolemaic epicycles.
6) Emphasis on theory, with no sophisticated empirical tests (just anecdotal tests).
7) Claims of progress based on illusions such as:
(a) the mass effect of knowledge databases,
(b) unintentional programmer assistance (design, choice of test, etc.).
8) An obsession with mathematical formalism, to cover the embarrassment.

13. Humanlike AGI
Expect to see a focus on low-level mechanisms, as if there were a simple (non-CAS, direct, analytical, designed) relationship to the high level.
Witness the rise of neural modelling, often based on quasi-behaviorist (simplistic) theories of the high-level aspects of cognition.

14. The Strategy
Overall plan: build systems that have chosen local rules, then observe the global behavior and make a systematic comparison of how changes in the local rules affect the global behavior. This is empirical.
This must take place in the context of a software development environment (SDE). We need to make it relatively easy to construct systems and then compare them, and we need fast turnaround in the generate-and-test cycle (we cannot possibly hand-code every system).
The purpose of the SDE is to define a research paradigm.
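A hypothetical sketch, in my own terms, of what the generate-and-test loop might look like: each candidate set of local rules is wrapped as a callable that builds and runs a system and returns its global measurements, and a small harness tabulates the variants side by side. The names, metrics, and the dummy values in the usage example are all invented for illustration and are not from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Variant:
    """One choice of local rules, packaged so the harness can run and measure it."""
    name: str
    build_and_run: Callable[[], Dict[str, float]]   # returns named global metrics

def compare(variants: List[Variant]) -> None:
    """Run every variant and print a table of its global behavior for comparison."""
    rows = [(v.name, v.build_and_run()) for v in variants]
    metrics = sorted({key for _, m in rows for key in m})
    print("variant".ljust(20) + "".join(key.rjust(14) for key in metrics))
    for name, m in rows:
        print(name.ljust(20) + "".join(f"{m.get(key, float('nan')):14.3f}" for key in metrics))

# Usage with two placeholder rule variants standing in for real systems:
compare([
    Variant("local_rules_A", lambda: {"stability": 0.82, "coverage": 0.40}),
    Variant("local_rules_B", lambda: {"stability": 0.77, "coverage": 0.55}),
])
```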

15. Using Cognitive Science
One assumption: we need to take human cognition seriously. Building a model using what we know of the human case is just sensible. It is all very well to scorn the human mind and declare that we can do it using completely different techniques, but proofs or arguments are needed before we can be convinced that the alternative is even possible (see previous slides).
One goal of the SDE: to promote dialog and understanding with cognitive scientists. They have an impoverished understanding of computation: they cannot afford the time to become programmers/hackers. With a turnkey solution to system experimentation, their ideas would be shaken up.
Cognitive scientists’ appreciation for the depth and richness of human cognition is invaluable .... but it is inarticulate, disorganized, and corrupted by the research priorities of their science (get experiments done above all else; troubles with Newtonization). Don’t throw the baby out with the bathwater.

16. Addressing the Software Crisis
Construction of ultra-complex software systems is difficult enough anyway (the Software Crisis), so the SDE will take an extremely novel approach to the general software development problem in any case: infuse the process with some serious detail about what goes on in the head of the programmer, in order to lessen the problem.
The type of software the SDE is designed to create will be extremely fault tolerant from the ground up: it will be a primitive cognitive system already, before it even starts being used to test cognitive-system ideas. Instead of layering a cognitive-system architecture on top of a conventional substrate, the very programming language will already have that structure implicit in it.

17. SDE Is Based on a Cognitive Framework
Another aspect of the SDE: it is designed around a framework for theorizing about cognitive systems. This theoretical framework is derived from cognitive science. It is not meant to be too restrictive (it is not a theory itself, just a framework for thinking about certain theories and for instantiating them).
As far as possible the framework will be homogeneous. By reducing the number of different types of knowledge representation, for example, comparisons will be facilitated.
The framework can be seen as a return to the original principles behind the connectionist movement, which died out shortly after backprop was discovered.

18. The ‘Molecular’ Framework
Purpose: to provide an outline model of cognition that is complete, allowing explanations, in principle, for all the main aspects of cognition known from cognitive science. The details are subject to specific empirical test (of theories derived from the framework).
Overall character of the framework: a molecular soup of elements (concepts) that jump into a foreground area and try to form bonds with one another. All processes (including recognition, reasoning, planning, and action) are driven by:
- dynamic relaxation (toward several simultaneous extrema),
- external input, including input from motivational systems.
Anyone could implement their own AGI theory within this framework.
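A deliberately tiny caricature, under my own assumptions, of how the "soup plus foreground plus relaxation" picture could be prototyped: elements carry pairwise bond affinities and a motivational bias, a bounded foreground holds the currently active ones, and a greedy relaxation loop swaps elements in and out whenever that increases the total bond strength. The energy function, the greedy update, and the bias term are illustrative choices, not part of the paper.

```python
import random

def bond_strength(foreground, affinity, bias):
    """Total pairwise bond strength of the foreground plus motivational drive."""
    pairs = sum(affinity[a][b] for a in foreground for b in foreground if a < b)
    return pairs + sum(bias[a] for a in foreground)

def relax(n_elements=30, foreground_size=5, steps=2000, seed=0):
    rng = random.Random(seed)
    # Random pairwise affinities between concepts (only the a < b entries are read),
    # plus a per-element bias standing in for external/motivational input.
    affinity = [[rng.uniform(-1, 1) for _ in range(n_elements)] for _ in range(n_elements)]
    bias = [rng.uniform(0.0, 0.5) for _ in range(n_elements)]
    foreground = set(rng.sample(range(n_elements), foreground_size))
    score = bond_strength(foreground, affinity, bias)
    for _ in range(steps):
        # Propose swapping one foreground element for one background element.
        out_elem = rng.choice(sorted(foreground))
        in_elem = rng.choice([e for e in range(n_elements) if e not in foreground])
        candidate = (foreground - {out_elem}) | {in_elem}
        cand_score = bond_strength(candidate, affinity, bias)
        if cand_score > score:          # keep only moves that strengthen the bonds
            foreground, score = candidate, cand_score
    return sorted(foreground), score

print(relax())
```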

19. Conclusion
My research program is:
1) Build a software development environment for the rapid construction and comparison of AGI systems.
2) Use empirical tests as the way to find the appropriate mechanisms.
3) Guide this process by intuition and insight (made rigorous by (2)).
4) Use this SDE as a way to transform cognitive science into something more usable by AGI researchers.
BUT: because of the strong emphasis on learning and symbol grounding, and because the approach has never been tried before, and because the CAS arguments point in this direction ... this may lead to an early achievement of AGI. It may simply not be that difficult. This is both heartening and worrying.
