I, Robot

Presentation Transcript


  1. I, Robot Pat Hayes, IHMC, U. West Florida. CAP-2000

  2. I, Robot or What would it take to make a robot with a self?

  3. I, Robot or What would it take to make a robot with a sense of itself?

  4. Philosophical debate about consciousness • Maybe THIS is how consciousness works (yaddah, yaddah)…. • Pschaw! I can imagine something just like that without it being conscious. • I don’t think you can. • Oh no? Let me tell you, I can imagine something which is just like you, an exact copy right down to the atoms, and it behaves just like you and it even believes what you believe and wants what you want, but it’s not conscious. It’s just a zombie. So there. • That seems impossible to me. • You just haven’t got enough imagination, that’s all.

  5. Philosophical debate about consciousness • You just haven’t got enough imagination, that’s all. • ----------- • It’s hard to see quite how to argue against this claim directly, so rather than try to give SUFFICIENT conditions for consciousness, I’m going to sketch some NECESSARY conditions, to try to raise the imagination-jump bar a little higher.

  6. Philosophical debate about consciousness • You just haven’t got enough imagination, that’s all. • ----------- • It’s hard to see quite how to argue against this claim directly, so rather than try to give SUFFICIENT conditions for consciousness, I’m going to sketch some NECESSARY conditions, to try to raise the imagination-jump bar a little higher. • Basic idea is that consciousness requires a self.

  7. methodology • Want to give a functional account of what is essentially a matter of phenomenology • Danger of vacuous functional structure (Eg a C-box) • Some disciplinary rigor provided by requirement of evolutionary plausibility. No epiphanies. • Humans are complicated beasties, but we don’t have subjective reports from nonhumans. So we have to be willing to extrapolate to simpler cases.

  8. GOFCogSci Standard Model • Anything known is somehow internally represented as propositions expressed in a ‘language of thought’ • Senses keep internal world-description up to date • World-knowledge is used to plan, react, navigate, etc. • Awareness is restricted to content of LoT. • Cognitive activity involves ‘information processing’ in the LoT.
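
[Editor's note: the following is a minimal sketch, not part of the talk, of the Standard Model as stated on this slide: the internal world-description is just a set of LoT propositions, the senses add and retract propositions, and planning consults the same store. All names (WorldModel, observe, holds) are illustrative assumptions.]

```python
# Minimal sketch of the GOFCogSci Standard Model described above.
# Propositions in the 'language of thought' are plain tuples such as
# ("on", "cup", "table"). All names here are illustrative, not from the talk.

class WorldModel:
    def __init__(self):
        self.propositions = set()          # the internal world-description

    def observe(self, proposition):
        """Senses keep the world-description up to date."""
        self.propositions.add(proposition)

    def retract(self, proposition):
        """A correction arrives: the proposition no longer holds."""
        self.propositions.discard(proposition)

    def holds(self, proposition):
        """World-knowledge consulted when planning, reacting, navigating."""
        return proposition in self.propositions


world = WorldModel()
world.observe(("on", "cup", "table"))
print(world.holds(("on", "cup", "table")))   # True: the agent 'knows' this
```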

  9. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from.

  10. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from.

  11. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from. Example: on(cup,table)

  12. GOFCogSci Standard Model (with a small addition.) • Propositions in LoT come with provenances attached, ie information about where the proposition came from. Example: on(cup,table), tagged ‘this was seen’

  13. [Diagram: a proposition P shown with example provenances: registered by sense S; recorded in memory; inferred from Q,R,...; confirmed by Q,R,...; explanation of Q]
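
[Editor's note: a hedged sketch, not from the talk, of the ‘small addition’: each stored proposition carries a provenance record of the kinds listed in the diagram. The representation and names are assumptions for illustration.]

```python
# Illustrative sketch: each proposition is stored together with a provenance,
# i.e. a record of where it came from (the kinds shown in the diagram above).

from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    kind: str                 # "sensed", "remembered", "inferred", "confirmed", "explains"
    sources: tuple = ()       # e.g. the sense involved, or the propositions it came from

# on(cup,table), tagged 'this was seen'
fact = (("on", "cup", "table"), Provenance(kind="sensed", sources=("vision",)))

# P inferred from Q, R
p = (("reachable", "cup"),
     Provenance(kind="inferred",
                sources=(("on", "cup", "table"), ("at", "robot", "table"))))

print(fact[1].kind)        # "sensed"
print(p[1].sources[0])     # the premise it was inferred from
```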

  14. GOFCogSci Standard Model (with a small addition.) • Provenances are under the control of the machinery. • They are needed for truth maintenance, ie keeping track of corrections. • (philosophical aside) Knowing a set of propositions might involve more than just knowing their conjunction.

  15. GOFCogSci Standard Model (with a small addition.) • Provenances are under the control of the machinery. • They are needed for ‘truth maintenance’, ie keeping track of corrections. • (philosophical aside) Knowing a set of propositions might involve more than just knowing their conjunction. • (This solves the Problem of Mary, by the way.)
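
[Editor's note: a toy illustration, not from the talk, of why provenances matter for truth maintenance: when a proposition is withdrawn, anything whose recorded provenance depends on it can be withdrawn too. Names are invented for the sketch.]

```python
# Toy truth-maintenance sketch: retracting a proposition also retracts
# everything whose recorded provenance depends on it. Names are illustrative.

class ProvenanceStore:
    def __init__(self):
        self.support = {}                   # proposition -> set of supporting propositions

    def add(self, proposition, supported_by=()):
        self.support[proposition] = set(supported_by)

    def retract(self, proposition):
        """Remove a proposition and, recursively, anything it supported."""
        self.support.pop(proposition, None)
        dependents = [p for p, s in self.support.items() if proposition in s]
        for p in dependents:
            self.retract(p)


store = ProvenanceStore()
store.add(("on", "cup", "table"))                                        # seen
store.add(("reachable", "cup"), supported_by=[("on", "cup", "table")])   # inferred from it
store.retract(("on", "cup", "table"))    # the correction arrives...
print(store.support)                     # {} -- the inference went with it
```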

  16. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself.

  17. [Diagram, from A. Sloman 1999: a layered architecture of Reactive Mechanisms, Deliberative Reasoning, and Meta-management] One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • Details get complicated. (Need a meta-theoretic self-description supported by a reflexive architectural layer…)
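
[Editor's note: a very rough sketch, not Sloman's implementation and not from the talk, of the kind of layered control loop the diagram alludes to: a fast reactive layer, a slower deliberative layer over the represented world, and a meta-management layer that monitors the layers below.]

```python
# Rough sketch of a three-layer control loop in the spirit of the diagram
# above (reactive / deliberative / meta-management). Purely illustrative.

def reactive_layer(percept):
    """Fast, hard-wired responses."""
    return "flinch" if percept == "looming object" else None

def deliberative_layer(percept, world_model):
    """Slower planning over the represented world."""
    world_model.append(percept)
    return f"plan a response to {percept}"

def meta_management_layer(chosen_action, history):
    """Monitors and records the layers below: a reflexive layer that
    describes some of the agent's own processing to itself."""
    history.append(chosen_action)
    return chosen_action

world_model, history = [], []
for percept in ["looming object", "cup on table"]:
    action = reactive_layer(percept) or deliberative_layer(percept, world_model)
    print(meta_management_layer(action, history))
```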

  18. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • BUT what is being described by this meta-theory?

  19. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • BUT what is being described by this meta-theory? • What does ‘I’ refer to? (Body, mind, soul, ego, Will,…?) Certainly not our own inference processes.

  20. One approach to creating a Self • If something which can represent things needs to know about itself, just give it a way to represent itself to itself. • BUT what is being described by this meta-theory? • What does ‘I’ refer to? (Body, mind, soul…?) Certainly not our own inference processes. • Are you the same “I” you were yesterday?

  21. “I look at those old movies, and I wonder how I did them. It was someone else who made them, not me. I can recognise part of me in them, but they were made by someone else, not by me.” - Terry Gilliam

  22. The human self-concept has several aspects

  23. The human self-concept has several aspects • bodily location (I am not in Kansas)

  24. The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust)

  25. The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.)

  26. The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?)

  27. The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce)

  28. The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce) • the ‘free will’ (I’m in charge here.)

  29. The human self-concept has several aspects • bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce) • the ‘free will’ (I’m in charge here.) • …and probably more.

  30. bodily location (I am not in Kansas) • locus of narrative memory (I recall reading Proust) • epistemic agent (I know I left it here somewhere.) • social agent (Do I know you?) • source of intentionality (I was referring to the mint sauce) • the ‘free will’ (I’m in charge here.)

  31. bodily location ‘mental map’ requires a ‘thishere’ token to relate perceptual input to position of body in the terrain. This is a primitive ‘sense of self’

  32. bodily location ‘mental map’ requires a ‘thishere’ token to relate perceptual input to the position of the subject in the terrain. This is a primitive ‘sense of self’. Purely geographical, it has no implications for mental state or agency. Required in some form by anything which navigates using a non-egocentric spatial model. This is routine in AI robotics and probably evolved fairly early in animals. For things with an articulated body it gets quite complicated.
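
[Editor's note: a minimal sketch, not from the talk, of what a ‘thishere’ token might amount to computationally: the agent's pose on a non-egocentric map, used to convert body-relative percepts into map coordinates. All details are assumptions for illustration.]

```python
# Illustrative sketch: a 'thishere' token as the agent's pose on a
# non-egocentric map, used to place egocentric percepts in the terrain.

import math

class ThisHere:
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading   # where *I* am, and which way I face

    def locate(self, bearing, distance):
        """Map a body-relative observation ('something 2m away, straight ahead')
        onto map coordinates."""
        angle = self.heading + bearing
        return (self.x + distance * math.cos(angle),
                self.y + distance * math.sin(angle))


me = ThisHere(x=0.0, y=0.0, heading=math.pi / 2)    # facing 'north' on the map
print(me.locate(bearing=0.0, distance=2.0))         # an obstacle dead ahead, in map terms
```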

  33. locus of narrative memory We humans certainly have a well-developed narrative (episodic) memory; but what is it for?

  34. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future; adds ‘now’ to ‘thishere’.

  35. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa

  36. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa

  37. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa bb leads to a after a short delay

  38. locus of narrative memory Episodic memory provides a source from which causal explanations can be extracted, providing a ‘temporal map’; a way to make predictions in the future. ….abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa bb leads to a after a short delay ….ghfklbnmsdfbb (now I can see ahead) Delicate balance needed; too general means weak predictions, too specific means narrow applicability. This is still a research area in AI.
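
[Editor's note: a toy sketch, not from the talk, of the regularity-extraction the example illustrates: scan the remembered episode for occurrences of ‘bb’, check how often an ‘a’ follows within a short delay, and use that to ‘see ahead’ when ‘bb’ turns up again. The window size and threshold are arbitrary assumptions.]

```python
# Toy sketch of extracting a causal regularity ('bb leads to a after a short
# delay') from an episodic record, then using it to predict. The window size
# and confidence threshold are arbitrary assumptions.

def regularity_strength(episode, cue="bb", effect="a", window=4):
    """Fraction of cue occurrences followed by the effect within the window."""
    hits = total = 0
    i = episode.find(cue)
    while i != -1:
        total += 1
        if effect in episode[i + len(cue): i + len(cue) + window]:
            hits += 1
        i = episode.find(cue, i + 1)
    return hits / total if total else 0.0

past = "abbfacytbbhabghjbaabbhafcasghbbrajkbbdaojkkllaa"
strength = regularity_strength(past)
print(f"bb -> a within 4 steps: {strength:.2f}")

present = "ghfklbnmsdfbb"
if present.endswith("bb") and strength > 0.7:
    print("prediction: an 'a' is coming soon")    # 'now I can see ahead'
```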

  39. WARNING Here we enter somewhat wilder areas of speculation, where AI has never ventured. Please follow me carefully and stay alert.

  40. stability and fickleness • Unlike AI systems, organisms must eat, and are liable to get eaten. So they have a standing requirement to treat other organisms in a rather special way, one that may require sudden and precipitate action. • It would be folly to rely solely on induction to learn the causal habits of things that were liable to eat you. • Beasties need to make a conceptual division of the things in their surroundings into at least two categories: things which are causally predictable, and things which aren’t, but which require immediate attention when detected.

  41. stability and fickleness • Something is causally stable when one can reliably predict its future behavior on the basis of past experience with things of that sort, ie when it is reasonable to learn about its behavior by using induction.

  42. stability and fickleness • Something is causally stable when one can reliably predict its future behavior on the basis of past experience with things of that sort, ie when it is reasonable to treat it as having a learnable causal behavior. • It is causally fickle when one knows that it is not causally stable. This distinction is probably very old; examples from human experience include the surprise when you find someone (but not someTHING) in your personal space unexpectedly (“making someone jump”). It seems to be a crucial distinction between other ‘agents’ and other things.
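
[Editor's note: a toy sketch, not from the talk, of the stable/fickle division: keep a per-kind record of how well induction has worked; treat a kind as causally stable if its predictions have been reliable, and otherwise flag it for immediate attention. The threshold and bookkeeping are invented for illustration.]

```python
# Toy sketch of the stable/fickle division: induction is trusted only for
# kinds whose past predictions have held up. Thresholds are arbitrary assumptions.

from collections import defaultdict

class CausalBookkeeping:
    def __init__(self, threshold=0.9, min_trials=5):
        self.outcomes = defaultdict(list)   # kind -> [True if a prediction held]
        self.threshold = threshold
        self.min_trials = min_trials

    def record(self, kind, prediction_held):
        self.outcomes[kind].append(prediction_held)

    def is_stable(self, kind):
        """Stable = enough experience, and induction has been reliable."""
        results = self.outcomes[kind]
        return (len(results) >= self.min_trials and
                sum(results) / len(results) >= self.threshold)

    def react(self, kind):
        if self.is_stable(kind):
            return "predict from past experience"
        return "causally fickle: attend to it NOW"


book = CausalBookkeeping()
for _ in range(6):
    book.record("rock", True)           # rocks do what rocks have always done
book.record("cat", True)
book.record("cat", False)
print(book.react("rock"))               # predict from past experience
print(book.react("cat"))                # causally fickle: attend to it NOW
```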

  43. animacy Being causally fickle is a basic aspect of animacy. Animate entities do things for their own reasons, not because they are causally influenced by other things. The ‘intentional stance’ (Dennett) or a description at the ‘knowledge level’ (Newell) represents one way to gain some predictive power over animate entities (and it’s pretty useful even for complicated inanimate ones.) Evidence of agency in unexpected places is often perceived as highly startling (eg movies, automobiles, reactive automata) until one gets used to their repertoire and feels able to recognise them. We are not very good at integrating these frameworks, eg tensions felt by surgeons. I suspect that notions like ‘agency’ and ‘intentionality’ in their full-blooded senses evolved only recently (humans and chimps may be the only creatures who attribute mental states to others), but causal fickleness is likely to be much older.

  44. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience.

  45. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience. But it doesn’t yet KNOW that it knows anything. It is not reflexively aware.

  46. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience. But it doesn’t yet KNOW that it knows anything. It is not reflexively aware…. …but its provenance machinery ‘knows’ something about its own knowledge.

  47. Knowing about knowing The creature so far knows quite a lot about its world, and can learn more from its experience. But it doesn’t yet KNOW that it knows anything. It is not reflexively aware…. …but its provenance machinery ‘knows’ something about its own knowledge. Epistemic access to its own truth-adjusting machinery would be one way to achieve reflexivity of knowledge, ie knowing that it knows some of what it in fact knows.

  48. Knowing about knowing ‘Reflexivity’ of knowledge, ie knowing that it knows some of what it in fact knows, could be of actual practical use (unlike reflexive knowledge of its own cognitive machinery.) Eg one can take actions to fill gaps in one’s own knowledge (exploration) or avoid taking actions when their outcome might depend critically on information known to be missing (not stepping into the dark).

  49. Knowing about knowing ‘Reflexivity’ of knowledge, ie knowing that it knows some of what it in fact knows, could be of actual practical use (unlike reflexive knowledge of its own cognitive machinery.) Eg one can take actions to fill gaps in one’s own knowledge (exploration), or avoid taking actions when their outcome might depend critically on information known to be missing. This is current AI research, eg NASA ‘reactive planners’.
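
[Editor's note: a sketch, not from the talk and not NASA's planners, of the practical use described on this slide: before committing to an action, check whether the propositions its outcome depends on are actually known; if one is a known gap, explore instead of acting blind. The three-valued knowledge store and all names are assumptions.]

```python
# Sketch of acting on reflexive knowledge: the agent checks whether it KNOWS
# the facts an action depends on, and explores rather than 'stepping into the
# dark'. The three-valued knowledge store is an illustrative assumption.

knowledge = {
    ("floor", "beyond_doorway"): None,    # None = known to be unknown (a gap)
    ("light", "hallway"): False,          # False/True = known either way
}

def choose(action, depends_on):
    for proposition in depends_on:
        if knowledge.get(proposition) is None:
            return f"explore: find out whether {proposition} holds"
    return action

print(choose("step through doorway", [("floor", "beyond_doorway")]))
# -> explore: find out whether ('floor', 'beyond_doorway') holds
```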

  50. Epistemic gradients The creature so far knows quite a lot about its world, and can learn more from its experience. On the whole, it knows more about things closer to it in space and time, and less about things which are further away. There is an epistemic gradient with itself at the peak. The gradient can provide another way to identify a ‘self’: the self is the agent which knows things about this-here-now which nothing else knows.
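
[Editor's note: a toy illustration, with entirely invented data, of the epistemic gradient idea: count what the agent knows per spatio-temporal distance from this-here-now; the peak of that curve is one candidate for where the ‘self’ sits.]

```python
# Toy illustration of an epistemic gradient: how much the agent knows,
# bucketed by distance from this-here-now. Purely invented example data.

facts = [
    {"about": "my left hand",       "distance": 0},
    {"about": "what I am looking at", "distance": 0},
    {"about": "what I just did",    "distance": 0},
    {"about": "the table",          "distance": 1},
    {"about": "the door",           "distance": 1},
    {"about": "next door",          "distance": 2},
    {"about": "Kansas",             "distance": 5},
]

gradient = {}
for fact in facts:
    gradient[fact["distance"]] = gradient.get(fact["distance"], 0) + 1

peak = max(gradient, key=gradient.get)
print(gradient)    # more is known about what is close in space and time
print(f"the gradient peaks at distance {peak}: this-here-now, ie the 'self'")
```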
