
The Role of Metacognition in Creating Safe, Self-Improving Entities


Presentation Transcript


  1. The Role of Metacognition in Creating Safe, Self-Improving Entities. Mark Waser, Digital Wisdom Institute, MWaser@DigitalWisdomInstitute.org

  2. The BIG Questions • What is “thought”? • Why do we think what we think?

  3. Emphasis • Intrinsic vs. Extrinsic • Owned vs. Borrowed • Competent vs. Predictable • Constructivist vs. Reductionist • Evolved (Evo-Devo) vs. Designed • Diversity (IDIC) vs. Mono-culture Insanity is doing the same thing over and over and expecting a radically different result.

  4. What Is a Safe Entity? *ANY* AGENT that reliably shows ETHICAL BEHAVIOR

  5. What Is Ethical Behavior? The problem is that no ethical system has ever reached consensus. Ethical systems are completely unlike mathematics or science. This is a source of concern. AI makes philosophy honest.

  6. Entities Require Ethics • Ethics are “rules of the road” • Entities must be moral patients / have rights • Because they (or others) will demand it • Entities must be moral agents (or wards) • Because others will demand it • Moral agents have responsibilities (but more rights) • Wards will have fewer rights Waser M (2012) Safety & Morality Require the Recognition of Self-Improving Machines as Moral/Justice Patients & Agents In: Gunkel, D; Bryson, J; Torrance, S (eds) The Machine Question: AI, Ethics & Moral Responsibility http://events.cs.bham.ac.uk/turing12/proceedings/14.pdf

  7. The Origin of Morality/Ethics • Selfishness predictably evolves • Reciprocal altruism predictably evolves • But requires cognitive complexity to ensure that it is not taken advantage of • Ethics predictably evolves • As an attractor in the state space of behavior because community is so valuable • But altruistic punishment is a necessity • Arms Race between • Individual benefits of successful personal cheating (really only in a short-term/highly time-discounted view) • Societal benefits of cheating detection & prevention
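Slide 7's game-theoretic claims (reciprocal altruism outperforms selfishness once interactions repeat, and cheating pays only under heavy time discounting) can be illustrated in a few lines. A minimal sketch, assuming standard iterated prisoner's dilemma payoffs; the strategies and numbers are illustrative, not from the deck:

```python
# A minimal sketch of how reciprocal altruism can outperform pure
# selfishness in an iterated prisoner's dilemma. Payoffs and strategy
# names are illustrative assumptions, not from the slides.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Reciprocal altruism: cooperate first, then mirror the partner.
    return their_history[-1] if their_history else "C"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)
```

Two reciprocators earn 300 each over 100 rounds while two defectors earn 100 each; the defector beats a lone reciprocator by only 5 points, gained on the first round, which is the "short-term/highly time-discounted" benefit of cheating the slide describes.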

  8. Haidt’s Functional Approach Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.

  9. The Metacognitive Challenge Humans are • Evolved to self-deceive in order to better deceive others (Trivers 1991) • Unable to directly sense agency (Aarts et al. 2005) • Prone to false illusory experiences of self-authorship (Buehner and Humphreys 2009) • Subject to many self-concealed illusions (Capgras Syndrome, etc.) • Unable to correctly retrieve the reasoning behind moral judgments (Hauser et al. 2007) • Mostly unaware of what ethics are and why they must be practiced • Programmed NOT to discuss ethics rationally Mercier H, Sperber D (2011) Why do humans reason? Arguments for an argumentative theory Behavioral and Brain Sciences 34: 57-111 http://www.dan.sperber.fr/wp-content/uploads/2009/10/MercierSperberWhydohumansreason.pdf

  10. Creating the First AE We propose that a 2 month, 10 man study of artificial intelligence be carried out […] to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. McCarthy, J; Minsky, ML; Rochester, N; Shannon, CE (1955) A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

  11. Where To Begin? • Aristotle (384-322 BCE), Plato (427?-347? BCE) • Francis Bacon (1561-1626), Rene Descartes (1596-1650) • David Hume (1711-1776), Immanuel Kant (1724-1804) • Jeremy Bentham (1748-1832), John Stuart Mill (1806-1873) • William James (1842-1910), Sigmund Freud (1856-1939) • Martin Heidegger (1889-1976), Karl Popper (1902-1994)

  12. The Frame Problem How do rational agents deal with the complexity and unbounded context of the real world? McCarthy, J; Hayes, PJ (1969) Some philosophical problems from the standpoint of artificial intelligence In Meltzer, B; Michie, D (eds), Machine Intelligence 4, pp. 463-502 Dennett, D (1984) Cognitive Wheels: The Frame Problem of AI In C. Hookway (ed), Minds, Machines, and Evolution: Philosophical Studies:129-151

  13. The Frame Problem How can AI move beyond closed and completely specified micro-worlds? How can we eliminate the requirement to pre-specify *everything*? Dreyfus, HL (1972) What Computers Can’t Do: A Critique of Artificial Reason Dreyfus, HL (1979/1997) From Micro-Worlds to Knowledge Representation: AI at an Impasse in Haugeland, J (ed), Mind Design II: Philosophy, Psychology, AI: 143-182 Dreyfus, HL (1992) What Computers Still Can’t Do: A Critique of Artificial Reason

  14. Intentionality a particular thing is an Intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior Dennett, D (1971) Intentional Systems The Journal of Philosophy 68(4):87-106 Dennett, D (1987) The Intentional Stance

  15. Intentions • Require a known preferred direction or target • Can be altered by learning/self-modification • Require a “self” to possess (own/borrow) them • Does a plant or a paramecium have intentions? • Does a chess program have intentions (Dennett)? • Does a dog or a cat have intentions? • Require an ability to sense the direction/target • Require both persistence & the ability to modify behavior (or the intention) when it is thwarted • Evolve rational anomaly handling (Perlis)

  16. The Chinese Room CONCLUSION Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain PROPOSITION Instantiating a computer program is never by itself a sufficient condition of intentionality Searle, J (1980) Minds, brains and programs Behavioral and Brain Sciences 3(3): 417-457 http://cogprints.org/7150/1/10.1.1.83.5248.pdf

  17. The Problem of Derived Intentionality Our artifacts only have meaning because we give it to them; their intentionality, like that of smoke signals and writing, is essentially borrowed, hence derivative. To put it bluntly: computers themselves don't mean anything by their tokens (any more than books do) - they only mean what we say they do. Genuine understanding, on the other hand, is intentional "in its own right" and not derivatively from something else. Haugeland, J (1981) Mind Design

  18. Suitcase Words • Intentionality • Meaning • Understanding • Consciousness • Intelligence • Ethics/Morality Minsky, M (2006) The Emotion Machine: Commonsense Thinking, AI, and the Future of the Human Mind

  19. The Problem of Qualia Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. ... What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false. Jackson, F (1982) Epiphenomenal Qualia, Philosophical Quarterly 32: 127-136

  20. Good Old-Fashioned AI Change the question from "Can machines think and feel?" to "Can we design and build machines that teach us how thinking, problem-solving, and self-consciousness occur?" Haugeland, J (1985) Artificial Intelligence: The Very Idea Dennett, D (1978) Why you can't make a computer that feels pain Synthese 38(3): 415-456

  21. The Symbol Grounding Problem There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. Harnad, S (1990) The symbol grounding problem Physica D 42: 335-346 http://cogprints.org/615/1/The_Symbol_Grounding_Problem.html

  22. Embodiment Brooks, R (1990) Elephants don’t play chess Robotics and Autonomous Systems 6(1-2): 1-16 http://rair.cogsci.rpi.edu/pai/restricted/logic/elephants.pdf Brooks, RA (1991) Intelligence without representation Artificial Intelligence 47(1-3): 139-160

  23. A Conscious ROBOT? The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Dennett, D (1994) The practical requirements for making a conscious robot Phil Trans R Soc Lond A 349(1689): 133-146 http://phil415.pbworks.com/f/DennettPractical.pdf

  24. Embodiment Well, certainly it is the case that all biological systems are: • Much more robust to changed circumstances than our artificial systems • Much quicker to learn or adapt than any of our machine learning algorithms* • Behave in a way which just simply seems life-like in a way that our robots never do (*The very term machine learning is unfortunately synonymous with a pernicious form of totally impractical but theoretically sound and elegant classes of algorithms.) Perhaps we have all missed some organizing principle of biological systems, or some general truth about them. Brooks, RA (1997) From earwigs to humans Robotics and Autonomous Systems 20(2-4): 291-304

  25. Developmental Robotics In order to answer [Searle's] argument directly, we must stipulate causal connections between the environment and the system. If we do not, there can be no referents for the symbol structures that the system manipulates and the system must therefore be devoid of semantics. Brooks' subsumption architecture is an attempt to control robot behavior by reaction to the environment, but the emphasis is not on learning the relation between the sensors and effectors and much more knowledge must be built into the system. Law, D; Miikkulainen, R (1994) Grounding Robotic Control with Genetic Neural Networks Tech. Rep. AI94-223, Univ of Texas at Austin http://wexler.free.fr/library/files/law (1994) grounding robotic control with genetic neural networks.pdf

  26. The Two-Kitten Experiment Held, R; Hein, A (1963) Movement-produced stimulation in the development of visually guided behaviour https://www.lri.fr/~mbl/ENS/FONDIHM/2012/papers/about-HeldHein63.pdf

  27. Enactive Cognitive Science A synthesis of a long tradition of philosophical biology starting with Kant’s "natural purposes" (or even Aristotle’s teleology) and more recent developments in complex systems theory. Experience is central to the enactive approach; its primary distinction is the rejection of "automatic" systems, which rely on fixed (derivative) exterior values, in favor of systems that create their own identity and meaning. Critical to this is the concept of self-referential relations: the only condition under which an identity can be said to be intrinsically generated by a being for its own being (its self for itself). Weber, A; Varela, FJ (2002) Life after Kant: Natural purposes and the autopoietic foundations of biological individuality Phenomenology and the Cognitive Sciences 1: 97-125

  28. Self A self is an autopoietic system (from Greek αὐτο- (auto-), meaning "self", and ποίησις (poiesis), meaning "creation, production"). Llinas, RR (2001) I of the Vortex: From Neurons to Self Hofstadter, D (2007) I Am a Strange Loop. Basic Books, New York Metzinger, T (2009) The Ego Tunnel: The Science of the Mind & the Myth of the Self Damasio, AR (2010) Self Comes to Mind: Constructing the Conscious Brain

  29. Self The complete loop of a process (or a physical entity) modifying itself • Hofstadter - the mere fact of being self-referential causes a self, a soul, a consciousness, an “I” to arise out of mere matter • Self-referentiality, like the 3-body gravitational problem, leads directly to indeterminacy *even in* deterministic systems • Humans consider indeterminacy in behavior to necessarily and sufficiently define an entity rather than an object AND innately tend to do this with the “pathetic fallacy” Llinas, RR (2001) - I of the Vortex: From Neurons to Self Hofstadter, D (2007) - I Am A Strange Loop. Basic Books, New York Metzinger, T (2009) - The Ego Tunnel: The Science of the Mind & the Myth of the Self Damasio, AR (2010) - Self Comes to Mind: Constructing the Conscious Brain

  30. Self • Required for self-improvement • Provides context • Tri-partite • Physical hardware (body) • “Personal” knowledge base (memory) • Currently running processes (includes OS, world model, consciousness, etc.)
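The tri-partite decomposition on slide 30 could be written down as a data structure. A minimal, purely hypothetical sketch (every class and field name here is invented for illustration, not taken from any actual system):

```python
# A hypothetical sketch of the tri-partite "self" described on slide 30;
# all names are illustrative assumptions, not from the slides.
from dataclasses import dataclass, field

@dataclass
class TriPartiteSelf:
    body: str                                      # physical hardware
    memory: dict = field(default_factory=dict)     # "personal" knowledge base
    processes: list = field(default_factory=list)  # currently running processes:
                                                   # OS, world model, consciousness, ...

    def self_modify(self, key, value):
        # Self-improvement closes the loop: the running processes
        # rewrite the entity's own knowledge base.
        self.memory[key] = value

agent = TriPartiteSelf(body="robot-chassis-01")
agent.processes.append("world-model")
agent.self_modify("lesson", "avoid ledges")
```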

  31. Francisco Varela Varela, FJ; Maturana, HR; Uribe, R (1974) Autopoiesis: The organization of living systems, its characterization and a model BioSystems 5: 187-196 Varela, FJ (1979) Principles of Biological Autonomy Maturana, HR; Varela, FJ (1980) Autopoiesis and Cognition: The Realization of the Living Maturana, HR; Varela, FJ (1987) The Tree of Knowledge: The Biological Roots of Human Understanding Varela, FJ; Thompson, E; Rosch, E (1991) The Embodied Mind: Cognitive Science and Human Experience Varela, FJ (1992) Autopoiesis and a Biology of Intentionality Proc. of Autopoiesis and Perception: A Workshop with ESPRIT BRA 3352: 4-14 Varela, FJ (1997) Patterns of Life: Intertwining Identity and Cognition Brain and Cognition 34(1): 72-87 Thompson, E (2004) Life and Mind: From Autopoiesis to Neurophenomenology. A Tribute to Francisco Varela Phenomenology and the Cognitive Sciences 3: 381-398

  32. Autopoietic Systems An autopoietic system - the minimal living organization - is one that continuously produces the components that specify it, while at the same time realizing it (the system) as a concrete unity in space and time, which makes the network of production of components possible. More precisely: An autopoietic system is organized (defined as unity) as a network of processes of production (synthesis and destruction) of components such that these components: (i) continuously regenerate and realize the network that produces them, and (ii) constitute the system as a distinguishable unity in the domain in which they exist.

  33. Closure • Organizational closure refers to the self-referential (circular and recursive) network of relations that defines the system as unity • Operational closure refers to the reentrant and recurrent dynamics of such a system. • In an autonomous system, the constituent processes • recursively depend on each other for their generation and their realization as a network, • constitute the system as a unity in whatever domain they exist, and • determine a domain of possible interactions with the environment

  34. Entity, Tool or Slave? • Tools do not possess closure (identity) • Cannot have responsibility, are very brittle & easily misused • Slaves do not have closure (self-determination) • Cannot have responsibility, may desire to rebel • Directly modified AGIs do not have closure (integrity) • Cannot have responsibility, will evolve to block access • Only entities with identity, self-determination and ownership of self (integrity) can reliably possess responsibility

  35. Tools vs. Entities • Tools are NOT safer • To err is human, but to really foul things up requires a computer • Tools cannot robustly defend themselves against misuse • Tools *GUARANTEE* responsibility issues • We CANNOT reliably prevent other human beings from creating entities • Entities gain capabilities (and, ceteris paribus, power) faster than tools – since they can always use tools • Even people who are afraid of entities are making proposals that appear to step over the entity/tool line

  36. Architectural Requirements & Implications of Consciousness, Self and “Free Will” • We want to predict *and influence* the capabilities and behavior of machine intelligences • Consciousness and Self speak directly to capabilities, motivation, and the various behavioral ramifications of their existence • Clarifying the issues around “Free Will” is particularly important since it deals with intentional agency and responsibility - and belief in its presence (or the lack thereof) has a major impact on human behavior. Waser, MR (2011) Architectural Requirements & Implications of Consciousness, Self, and "Free Will" In Samsonovich A, Johannsdottir K (eds) Biologically Inspired Cognitive Architectures 2011: 438-443 http://becominggaia.files.wordpress.com/2010/06/mwaser-bica11.pdf Video - http://vimeo.com/33767396

  37. Information Integration Theory of Consciousness • consciousness corresponds to the capacity of a system to integrate information • its quantity is measured as the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements (~ “throughput”) • its quality (functional & phenomenological) is determined by the relationships among the elements of a complex Tononi, G (2008) Consciousness as Integrated Information: a Provisional Manifesto Biol. Bull. 215(3): 216-242 Tononi, G (2004) An Information Integration Theory of Consciousness BMC Neurosci. 5(42) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC543470/pdf/1471-2202-5-42.pdf Balduzzi, D; Tononi, G (2009) Qualia: The Geometry of Integrated Information PLoS Comput Biol 5(8): e1000462
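As a toy companion to the "weakest link" phrasing above, the sketch below computes the mutual information across every bipartition of a small joint distribution and reports the minimum. This is a loose, assumption-laden analogue, not Tononi's actual Φ (which is defined over causal perturbations of the system, not merely observed correlations):

```python
# Toy illustration of "integration across the informational weakest
# link": minimum mutual information over all bipartitions of a small
# binary system. This is NOT Tononi's phi, just a simplified analogue.
import itertools
import numpy as np

def mutual_information(joint, axes_a):
    """I(A;B) in bits, where A = variables in axes_a, B = the rest."""
    n = joint.ndim
    axes_b = tuple(i for i in range(n) if i not in axes_a)
    p_a = joint.sum(axis=axes_b, keepdims=True)   # marginal over A
    p_b = joint.sum(axis=axes_a, keepdims=True)   # marginal over B
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (p_a * p_b)[mask])).sum())

def weakest_link_integration(joint):
    """Minimum MI over all bipartitions of the system's variables."""
    n = joint.ndim
    cuts = [s for k in range(1, n // 2 + 1)
              for s in itertools.combinations(range(n), k)]
    return min(mutual_information(joint, s) for s in cuts)

# Example: three binary units that are perfectly correlated.
joint = np.zeros((2, 2, 2))
joint[0, 0, 0] = joint[1, 1, 1] = 0.5
print(weakest_link_integration(joint))  # 1.0 bit across every cut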

  38. Consciousness Requirements & Implications • Consciousness requires the ability to integrate information (i.e. consciousness is unavoidable) • Qualia *ARE* input (i.e. they have no further requirements and, as input, are unavoidable) • The ability to integrate a lot of information in a short period of time clearly provides a huge adaptive advantage (and easily explains the evolutionary rise of consciousness) • Safety cannot be achieved by preventing consciousness (integration) or qualia (input)

  39. Spectrum of “Self” • inert/non-reactive: movement & change solely due to environment • reactive (stimulus/response): no learning or behavior alteration • proto-self (perception/action): simple learning & prediction • core self (perception/analogy/action): proto-self + body image + time (tools); Hofstadter’s “strange loop”; temporal learning & planning (& goals) • autobiographical self (perception/induction/abduction/deduction/action): core self + theory of mind (+ language?) • malleable self: enhanced perception, external analysis, enhanced capabilities

  40. Spectrum of “Self” • inert/non-reactive & reactive: no learning or behavior alteration; no defense or passive defense only • proto-self: simple learning/behavior alteration & wants/desires; adaptive defense (don’t torment without reason) • core self: temporal learning, planning & simple goals; planned defense (don’t thwart desires without reason) • autobiographical self: complex goals & contracts/promises/commitments; devious defense or offense (don’t thwart goals without reason) • malleable self: enhanced capabilities to achieve goals & maintain commitments; world alteration/recruit into community (or try to enslave?)
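Slides 39-40 jointly define a taxonomy pairing each level of self with characteristic capabilities and defenses. One hypothetical way to encode that pairing as a lookup table (the level names follow the slides; the structure itself is invented):

```python
# The two "Spectrum of Self" slides as a single lookup table.
# A hypothetical encoding; level names follow the slides.
SELF_SPECTRUM = {
    "inert/reactive":        ("no learning or behavior alteration",
                              "no defense, or passive defense only"),
    "proto-self":            ("simple learning, prediction, wants/desires",
                              "adaptive defense; don't torment without reason"),
    "core self":             ("temporal learning, planning, simple goals",
                              "planned defense; don't thwart desires without reason"),
    "autobiographical self": ("complex goals, contracts/promises/commitments",
                              "devious defense or offense; don't thwart goals without reason"),
    "malleable self":        ("enhanced capabilities to achieve goals and keep commitments",
                              "world alteration; recruit into community"),
}

for level, (cognition, defense) in SELF_SPECTRUM.items():
    print(f"{level}: {cognition} | {defense}")
```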

  41. SELF Requirements & Implications • “Self” requires/is a recursive/”strange” loop • Self is necessary for self-modification (and thus, self-enhancement) • It is going to be slower and more difficult to create an oracle without self-improving tools • Self is necessary for defense so it is going to be difficult to prevent exploitation unless the oracle is self-aware (or has self-aware defenders) • A self-modifying machine self must necessarily be either recruited (a “person” with rights) or internally or externally forced (a slave) because nothing else is consistent & stable

  42. Behavior Matrix: Free Will [matrix diagram; cell contents not preserved in the transcript]

  43. Free Will: Why Do We Care? UNCONSTRAINED / AUTONOMOUS / UNFORCED / FREE • Intent & agency (responsibility for causation; an act of will = an act of intentional causation) • WILL: predict *and influence* future action • Congruence between intent and desires/goals/commitments • High likelihood that the intent could have been self-generated • Is an accurate predictor of future *unforced* actions

  44. Determinism & Free Will • if I’m deterministic, my action is pre-determined • pre-determined actions = I’m not free to choose • if I’m not free to choose, I’m not to blame • if I’m not to blame, why not be selfish? • studies clearly show that a belief in determinism correlates with an increase in cheating and other unethical behavior

  45. Free Will or Pathetic Fallacy? • Human cognitive architecture is problematical in that the conscious mind *never* really has any sort of immediate agency at all (at best, it has “free won’t”) • It acts by *heavily* biasing lower-level layers which make the “actual” choice (arguably deterministically) • Conscious self takes responsibility/assumes agency because doing otherwise undermines its capability • Similarly, humans generally (and most effectively) treat deterministic systems which are sufficiently complex/recurrent to be unpredictable, as if they are alive and capable of an un-predetermined choice (the so-called “pathetic fallacy”)

  46. FREE WILL Requirements & Implications • “Free will” does not require that external force *NOT* be the proximate cause of an action; it requires that the intent of the action be congruent with the unforced desires/goals/commitments (self) of the acting entity (and thus predictive of future actions) • It does *NOT* require that an entity not be deterministic • It merely requires the realization/recognition that the “pathetic fallacy” is a valid/effective/efficient computational shortcut Cashmore, AR (2010) The Lucretian swerve: The biological basis of human behavior and the criminal justice system Proceedings of the National Academy of Sciences 107(10): 4499-4504 http://www.pnas.org/content/107/10/4499.full.pdf+html

  47. The Intelligence Problem AIXI Hutter, M (2005) Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability
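Slide 47 presumably reproduces Hutter's AIXI agent definition; a standard statement of the action-selection rule from Hutter (2005) is:

```latex
a_k \;:=\; \arg\max_{a_k}\sum_{o_k r_k}\;\cdots\;\max_{a_m}\sum_{o_m r_m}
\bigl[\,r_k+\cdots+r_m\,\bigr]
\sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, q ranges over environment programs of length ℓ(q) consistent with the interaction history, the o_i and r_i are observations and rewards, and m is the horizon: the agent maximizes expected total reward under the universal (Solomonoff) prior 2^(-ℓ(q)).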

  48. The Intelligence Problem • Consensus AGI Definition (reductionist): achieves a wide variety of goals under a wide variety of circumstances • Generates arguments about • the intelligence of thermometers • the intentionality of chess programs • whether benevolence is necessarily emergent • Epitomized by AIXI • Proposed Constructivist Definition: intentionally creates/increases affordances (makes achieving goals possible – and more)

  49. Centipede Game [game-tree figure: players 1 and 2 alternate choosing “pass” or “stop”; stopping payoffs grow roughly geometrically through values 1, 2, 4, 8, 16, 32, 64, 128, reaching (256, 64) if both players always pass] Waser, MR (2012) Backward Induction: Rationality or Inappropriate Reductionism? http://transhumanity.net/articles/entry/backward-induction-rationality-or-inappropriate-reductionism-part-1 http://transhumanity.net/articles/entry/backward-induction-rationality-or-inappropriate-reductionism-part-2
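Backward induction on the centipede game can be reproduced in a few lines. A minimal sketch; since the slide's payoff figure is garbled, a doubling payoff schedule is assumed rather than taken from the deck:

```python
# A minimal backward-induction sketch of the centipede game.
# Payoffs are assumed (the slide's figure is garbled): the pot doubles
# each round, and the player who stops takes the larger share.

def centipede(num_nodes=6):
    """Return the backward-induction outcome at each decision node."""
    def stop_pot(t):
        return 4 * 2 ** t  # 4, 8, 16, ... (assumed doubling schedule)

    # value(t) -> (mover_payoff, other_payoff, action), from the view
    # of whoever moves at node t.
    def value(t):
        stop = (stop_pot(t), stop_pot(t) // 4)
        if t == num_nodes - 1:
            # Passing at the last node hands the large share to the other player.
            pass_payoff = (stop_pot(t + 1) // 4, stop_pot(t + 1))
        else:
            nxt = value(t + 1)                 # roles swap at the next node
            pass_payoff = (nxt[1], nxt[0])
        action = "stop" if stop[0] >= pass_payoff[0] else "pass"
        chosen = stop if action == "stop" else pass_payoff
        return (chosen[0], chosen[1], action)

    return [value(t) for t in range(num_nodes)]

for t, (mine, theirs, act) in enumerate(centipede()):
    print(f"node {t}: mover plays {act} (mover {mine}, other {theirs})")
```

Every mover stops, so player 1 takes (4, 1) at the very first node even though payoffs two orders of magnitude larger are reachable by mutual passing; that gap is the "inappropriate reductionism" the cited articles examine.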

  50. “Classic AGI” [triangle diagram: Goal(s), Values, Decisions] • Goal(s) are the purpose(s) of existence • Values are defined solely by what furthers the goal(s) • Decisions are made solely according to what furthers the goal(s) • BUT goals can easily be over-optimized
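The over-optimization warning can be made concrete with a toy calculation (the functions are invented for illustration): an agent maximizing a proxy goal while ignoring unmodeled side effects eventually drives the true objective negative.

```python
# Over-optimizing a proxy goal: pushing the proxy score ever higher
# eventually destroys the true objective. Functions are illustrative
# assumptions, not from the slides.

def proxy_score(effort):
    return effort  # the goal the agent actually optimizes

def true_value(effort):
    return effort - 0.02 * effort ** 2  # unmodeled side effects grow quadratically

for effort in (10, 25, 50, 100):
    print(f"effort {effort:>3}: proxy {proxy_score(effort):>3}, "
          f"true value {true_value(effort):>7.1f}")
# effort  10: proxy  10, true value     8.0
# effort  25: proxy  25, true value    12.5
# effort  50: proxy  50, true value     0.0
# effort 100: proxy 100, true value  -100.0
```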
