
Introduction to Developmental Learning

Introduction to Developmental Learning. 11 March 2014. Olivier.georgeon@liris.cnrs.fr, http://www.oliviergeorgeon.com


Presentation Transcript


  1. Introduction to Developmental Learning 11 March 2014 Olivier.georgeon@liris.cnrs.fr http://www.oliviergeorgeon.com

  2. Old dream of AI Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably, the child brain is something like a notebook […]. Rather little mechanism, and lots of blank sheets. […] Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child. Computing Machinery and Intelligence (Alan Turing, 1950, Mind, a philosophy journal).

  3. Is it even possible? No? • Spiritualist vision of consciousness (it would require a soul). • Causal openness of physical reality (quantum theory). • Too complex. • … Yes? • Materialist theory of consciousness (Julien Offray de La Mettrie, 1709-1751). • Consciousness as a computational process (Chalmers 1994) http://consc.net/papers/computation.html

  4. Outline • Example • Demo of developmental learning. • Theoretical bases • Pose the problem. • The question of self-programming. • Exercise • Implement your own self-programming agent.

  5. Example 1 The demo involves 6 experiments, 2 results, and 10 interactions with values: i1 (5), i2 (-10), i3 (-3), i4 (-3), i5 (-1), i6 (-1), i7 (-1), i8 (-1), i9 (-1), i10 (-1). The agent/environment coupling offers hierarchical sequential regularities of interactions, for example: • After i8, i8 can often be enacted again. • After i7, attempting i1 or i2 results more likely in i1 than in i2. • After i8, the sequence i9, i3, i1 can often be enacted. • After i9, i3, i1, i8, the sequence i4, i7, i1 can often be enacted.
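One simple way to exploit such sequential regularities is to count how often each interaction follows another and predict the most frequent successor. The sketch below is an illustration under assumed names (class `BigramModel`, methods `observe`/`predict`), not the course's implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Counts first-order sequential regularities between interactions:
// how often interaction `next` was enacted right after `previous`.
public class BigramModel {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    // Record one observed transition previous -> next.
    public void observe(String previous, String next) {
        counts.computeIfAbsent(previous, k -> new HashMap<>())
              .merge(next, 1, Integer::sum);
    }

    // Predict the most frequently observed successor, or null if
    // `previous` has never been observed.
    public String predict(String previous) {
        Map<String, Integer> successors = counts.get(previous);
        if (successors == null) return null;
        return successors.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }
}
```

For instance, after observing i8 followed by i8 twice and by i9 once, the model predicts i8 as the likely successor of i8, matching the first regularity listed above.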

  6. Example 1: • Move forward (5) or bump (-10) • Turn left / right (-3) • Feel right / front / left (-1)

  7. Theoretical bases • Philosophy of mind. • Epistemology (theory of knowledge). • Developmental psychology. • Biology (autopoiesis, enaction). • Neurosciences.

  8. Philosophy: is it possible? • John Locke (1632-1704): "Tabula rasa". • La Mettrie (1709-1751): "Matter can think". • David Chalmers: A Computational Foundation for the Study of Cognition (1994). • Daniel Dennett: Consciousness Explained (1991). • Free will, individual choice, self-motivation, determinism…

  9. Key philosophical ideas for DAI • Cognition as computation in the broad sense. • Causal structure • Example: a neural net with chemistry (neurotransmitters, hormones, etc.). • Determinism does not contradict free will. • Do not mistake determinism for predictability. • Hervé Zwirn (Les systèmes complexes, 2006)

  10. Epistemology (what can I know?) • Concept of ontology • Study of the nature of being • Aristotle (384-322 BC). • Onto: "being", logos: "discourse". • Discourse on the properties and categories of being. • Reality as such is unknowable • Immanuel Kant (1724-1804)

  11. Key epistemological ideas for DAI • Implement the learning mechanism with no ontological assumptions. • Agnostic agents (Georgeon 2012). • The agent will never know its environment as we see it. • But with interactional assumptions • Predefine the possibilities of interaction between the agent and its environment. • Let the agent construct its own ontology of the environment through its experience of interaction.

  12. Developmental psychology (how can I know?) • Developmental learning • Jean Piaget (1896-1980) • Teleology / motivational principles • "The individual self-finalizes recursively". • Do not separate perception and action a priori: • Notion of the sensorimotor scheme • Constructivist epistemology • Jean-Louis Le Moigne (1931-) • Ernst von Glasersfeld. • Knowledge is an adaptation in the functional sense.

  13. Indicative developmental stages • Month 4: "Bayesian prediction". • Month 5: Models of hand movement. • Month 6: Object and face recognition. • Month 7: Persistence of objects. • Month 8: Dynamic models of objects. • Month 9: Tool use (bring a cup to the mouth). • Month 10: Gesture imitation, crawling. • Month 11: Walk with the help of an adult. • Month 15: Walk alone.

  14. Key psychological ideas for DAI • Think in terms of "interactions" rather than separating perception and action a priori. • Focus on an intermediary level of intelligence: semantic cognition (Manzotti & Chella 2012). • High level: reasoning and language. • Intermediary level: semantic cognition. • Low level: stimulus-response adaptation.

  15. Biology (why know?) • Autopoiesis • auto: self, poiesis: creation • Maturana (1972) • Structural coupling agent/environment. • Relational domain (the space of possibilities of interaction). • Homeostasis • Internal state regulation • Self-motivation • Theory of enaction • Self-creation through interaction with the environment. • Enactive Artificial Intelligence (Froese and Ziemke 2009).

  16. Key ideas from biology for DAI • Constitutive autonomy is necessary for sense-making. • Evolution of possibilities of interaction during the system’s life. • Individuation. • Design systems capable of programming themselves. • The data that is learned is not merely parameter values but is executable data.
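To illustrate the last point, "executable data" can be modeled as a learned sequence of primitive interactions that the agent can later re-enact as a single unit, together with the value it expects from re-enacting it. This sketch is an assumed illustration (the class name and fields are hypothetical, not the course's code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A learned composite interaction: data that the agent can "execute"
// by replaying its stored sequence of primitive interactions.
public class CompositeInteraction {
    final List<String> primitives;
    final int value; // e.g. the sum of the values of the primitives

    CompositeInteraction(List<String> primitives, int value) {
        this.primitives = new ArrayList<>(primitives);
        this.value = value;
    }

    // "Executing" the learned data: replay the stored primitive sequence.
    List<String> enact() {
        return new ArrayList<>(primitives);
    }
}
```

With the values of Example 1, the sequence i9 (-1), i3 (-3), i1 (5) would form a composite of value 1, which the agent can thereafter treat as a single positive interaction.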

  17. Neurosciences • Many levels of analysis • A lot of plasticity AND a lot of pre-wiring

  18. Neuroscience • Connectome of C. elegans: 302 neurons. • The connectome is entirely inborn rather than acquired through experience.

  19. Human connectome http://www.humanconnectomeproject.org

  20. Neurosciences • Examples of mammalian brains • No qualitative rupture: human cognitive functions (e.g., language, reasoning) rely on brain structures that exist in other mammalian brains. (This does not mean there are no innate differences!) • The brain serves to organize behaviors in time and space.

  21. Key neuroscience ideas for DAI • Renounce the hope that it will be simple. • Maybe begin at an intermediary level and go down if it does not work? • Biology can be a source of inspiration • Biologically Inspired Cognitive Architectures. • Importance of the capacity to internally simulate courses of behavior.

  22. Key ideas of the key ideas • The objective is to learn (discover, organize, and exploit) regularities of interaction in time and space to satisfy innate criteria (survival, curiosity, etc.). • Without pre-encoded ontological knowledge • Which allows a kind of constitutive autonomy (self-programming).

  23. Teaser for next course

  24. Exercise

  25. Exercise • Two possible experiences E = {e1, e2} • Two possible results R = {r1, r2} • Four possible interactions E x R = {i11, i12, i21, i22} • Two environments • env1: e1 -> r1, e2 -> r2 (i12 and i21 are never enacted) • env2: e1 -> r2, e2 -> r1 (i11 and i22 are never enacted) • Motivational systems: • mot1: v(i11) = v(i12) = 1, v(i21) = v(i22) = -1 • mot2: v(i11) = v(i12) = -1, v(i21) = v(i22) = 1 • mot3: v(i11) = v(i21) = 1, v(i12) = v(i22) = -1 • Implement an agent that learns to enact positive interactions without knowing a priori its motivational system (mot1, mot2, or mot3) or its environment (env1 or env2). • Write a report of behavioral analysis based on activity traces.
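A minimal sketch of one possible solution follows, assuming the agent is told after each cycle the value of the interaction it just enacted. The class name, method signature, and try-then-exploit policy are illustrative assumptions, not the course's reference solution:

```java
import java.util.HashMap;
import java.util.Map;

// An agent that knows neither its environment nor its motivational
// system: it tries each experience once, records the resulting value,
// then keeps enacting the experience with the best recorded value.
public class SimpleAgent {
    private final Map<String, Integer> learnedValues = new HashMap<>();
    private String lastExperience = null;

    // lastValue is the value of the interaction enacted on the
    // previous cycle (ignored on the very first call).
    public String chooseExperience(int lastValue) {
        if (lastExperience != null) {
            learnedValues.put(lastExperience, lastValue);
        }
        // Exploration: try any experience whose value is still unknown.
        for (String e : new String[] {"e1", "e2"}) {
            if (!learnedValues.containsKey(e)) {
                lastExperience = e;
                return e;
            }
        }
        // Exploitation: enact the experience with the best learned value.
        lastExperience =
            learnedValues.get("e1") >= learnedValues.get("e2") ? "e1" : "e2";
        return lastExperience;
    }
}
```

In env1 with mot1, such an agent would try e1 (value 1), try e2 (value -1), then settle on e1 from the third cycle on; the activity trace of these choices is what the behavioral-analysis report should examine.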

  26. No hard-coded knowledge of the environment. An agent like the following cheats by testing env and mot directly:
Agent {
  ...
  public Experience chooseExperience() {
    if ((env == env1 && mot == mot1) || (env == env2 && mot == mot2))
      return e1;
    else
      return e2;
  }
}

  27. Implementation
public static Experience e1 = new Experience();
public static Experience e2 = new Experience();
public static Result r1 = new Result();
public static Result r2 = new Result();
public static Interaction i11 = new Interaction(e1, r1, 1); // etc.

public static void main() {
  Agent agent = new Agent();
  Environnement env = new Env1(); // or new Env2();
  for (int i = 0; i < 10; i++) {
    e = agent.chooseExperience(r);
    r = env.giveResult(e);
    System.out.println(e, r, value);
  }
}

Class Agent
  public Experience chooseExperience(Result r)
Class Environnement
  public Result giveResult(Experience e)
Class Env1, Class Env2
Class Experience
Class Result
Class Interaction(experience, result, value)
  public int getValue()
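Filling in this skeleton, a compact runnable rendering of the coupling loop might look as follows. It is simplified to strings and lambdas, and the switch-after-a-negative-interaction policy is one naive choice for illustration, not the course's reference agent:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class Coupling {
    // Runs the agent/environment loop for a number of cycles and
    // returns the activity trace, e.g. "e1r1+ e2r2- ...".
    public static String run(Function<String, String> env,
                             BiFunction<String, String, Integer> mot,
                             int cycles) {
        StringBuilder trace = new StringBuilder();
        String e = "e1"; // first experience chosen arbitrarily
        for (int i = 0; i < cycles; i++) {
            String r = env.apply(e);  // the environment yields a result
            int v = mot.apply(e, r);  // the motivation values the interaction
            trace.append(e).append(r).append(v > 0 ? "+" : "-").append(' ');
            if (v < 0) {
                // naive adaptation: switch experience after a negative value
                e = e.equals("e1") ? "e2" : "e1";
            }
        }
        return trace.toString().trim();
    }
}
```

Running this with env1 (e1 -> r1, e2 -> r2) and mot2 (e2's interactions positive) produces a trace that starts with one negative interaction and then stabilizes on e2, which is the kind of observation the exercise's behavioral report is meant to capture.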
