
The title of my talk has changed…


Presentation Transcript


  1. The title of my talk has changed…
  Ron Chrisley
  Director, COGS, University of Sussex

  2. What is the title of this talk?
  • "The Turin Decalogue: Ten Statements of Belief Concerning (Machine) Consciousness"
  • "Machine Consciousness: How to get a Pentti from a Pentium"
  • "Robo-phenomenology"
  • "Artificial Phenomenology"
  • "Synthetic Phenomenology" √

  3. Overview
  • The CNM architecture
  • Connections to consciousness
    • Via Igor's axioms
    • Via imposed seriality (cf. Bernie)
  • Focus on spatial phenomenology
    • General experience of "out-thereness"
    • Particular experience of spatial locales, actual and imagined
  • Proposed application to other areas
    • Sensory-motor contingency theory, and beyond

  4. CNM architecture: Overview
  1. Functional specification: allow a mobile robot to navigate its environment
  2. Solution: learn a forward model by minimising prediction error, then use and re-use this model in (clever!) ways
  3. Components of the CNM
  4. How can the components in 3 be learned and used to achieve 1?

  5. CNM: Functional specification
  Would like a "map" that a mobile robot can:
  • Learn (and maintain) without a teacher
  • Use to re-orient when lost
  • Use to navigate to/away from desired/undesired places

  6. Solution
  • Learn a forward model by minimising error while trying to predict what sensations will result from taking a particular action in the current location
  • Monitor prediction error in order to:
    • Detect when one is lost; and, if so,
    • Determine where one is on the "map"
  • Given a desired location (or sensation) and the current location, invert the forward model to derive an action plan

  7. Components of the CNM
  • The Predictive Map
  • The Orienting System
  • The Inverse Model

  8. The Predictive Map (1)
  • Function from actions and locations to sensations
    • "Given where I am now, what would I 'see' if I were to move like this?"
  • Thus involves:
    • Imagination (in the strict, sensory, sense)
    • Simulation (of both self and world)
    • Deliberation (computing counterfactuals)

  9. The Predictive Map (2)
  The Predictive Map itself has two components:
  • The T-map (for "Topological")
    • Function from actions and locations to locations
    • "Given where I am now, where would I be if I were to move like this?"
  • The D-map (for "Descriptive")
    • Function from locations to sensations
    • "What would I 'see' if I were at a particular location?"
  How can the Predictive Map be implemented?

  10. [Network diagram: an Action input and a recurrent copy of the Previous Predicted Location feed the T-map layer, whose output (the Predicted Location) feeds the D-map layer, which outputs the Predicted Sensations. Key: full inter-connection between layers of units; recurrent (copy) connection.]

  11. Learning the Predictive Map
  • PM(action, location) = D(T(action, location))
  • Thus, PM can be learned in a self-supervised manner:
    • Given the current location, choose an action
    • Feed the action and current location into T
    • Feed the output of T (another location) into D
    • The output of D is a prediction of what the sensor readings will be if the action is taken
    • Take the action; observe the actual sensor readings (cf. Cotterill's paradigm shift?)
    • The difference between predicted and observed sensor readings provides an error signal for learning both D and T, so:
    • Change the parameters (weights) in D and T according to gradient descent (or what have you)
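The self-supervised recipe above can be sketched in toy form. The Python below is my own illustration, not the talk's implementation: it replaces the neural networks and gradient descent with lookup tables on an 8-cell ring world, where the T-map is fixed by the ring's geometry and only the D-map is learned from prediction error. All names (`WORLD`, `learn_d_map`, the learning rate) are assumptions for the sketch.

```python
import random

N = 8                                  # number of cells on the toy ring world
WORLD = [0.1 * c for c in range(N)]    # true sensation emitted at each cell

def T(action, location):
    """T-map: where an action (-1 or +1) takes the robot (known geometry here)."""
    return (location + action) % N

def learn_d_map(steps=2000, lr=0.5, seed=0):
    """Learn the D-map (location -> sensation) from prediction error alone:
    predict the sensation the next move will bring, take the move, observe,
    and nudge the prediction toward the observation."""
    rng = random.Random(seed)
    D = [0.0] * N                      # the table being learned
    loc = 0
    for _ in range(steps):
        action = rng.choice((-1, 1))           # explore with a random action
        predicted = D[T(action, loc)]          # PM(action, loc) = D(T(action, loc))
        loc = T(action, loc)                   # take the action...
        observed = WORLD[loc]                  # ...and read the actual sensor
        D[loc] += lr * (observed - predicted)  # error-driven update of D
    return D

D = learn_d_map()
```

With enough exploration every cell's entry converges to the true sensation; in the full CNM, T and D would instead be learned jointly by gradient descent on their weights.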

  12. The Orienting System
  • Function from sensations to locations
    • "Given what I 'see' (and my Predictive Map), where am I?"
  • No need to learn separately; rather, exploits information already in the Predictive Map

  13. Using the Orienting System
  1. Input a location (either random or best/last guess) into D to get a predicted sensation for that location
  2. Compare the output of D to the actual sensations
  3. Use the difference as an error signal to perform gradient descent, not in D's weight space, but in the activation space of its input, location
  4. Iterate until a fixed point (error minimum) is reached
  5. D's current input is then a good (and context-sensitive) guess as to the actual current location
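A toy sketch of these steps (mine, not the talk's): with a table-valued D-map over an 8-cell ring, the gradient step in the location *input* becomes a move to whichever neighbouring location has lower prediction error, iterated until a fixed point. `D` here is simply assumed to be an already-learned table; all names are illustrative.

```python
N = 8
D = [0.1 * c for c in range(N)]          # D-map assumed already learned

def orient(sensation, guess=0, max_iters=N):
    """Estimate location from a sensation by descending prediction error in
    D's input (location) space, not its weight space."""
    loc = guess                          # random or best/last guess
    err = (D[loc] - sensation) ** 2
    for _ in range(max_iters):           # iterate until a fixed point
        best = loc
        for nb in ((loc - 1) % N, (loc + 1) % N):
            e = (D[nb] - sensation) ** 2
            if e < err:                  # neighbour predicts better?
                err, best = e, nb
        if best == loc:                  # fixed point: local error minimum
            break
        loc = best
    return loc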

  14. The Inverse Model
  • Function from pairs of locations to actions
    • "Given two locations, what action will take me from one to the other?"
  • No need to learn separately; rather, exploits information already in the Predictive Map

  15. Using the Inverse Model: Planning
  1. While exploring, store away (as desirable/undesirable) locations that occur while in a state of pleasure/pain (or learn mapping R1 from the former to the latter)
  2. Input the current location and an action (either random or best/last guess) into T
  3. Compare the output of T to the goal location (obtained either through recall or by inverting R1)
  4. Use the difference as an error signal to perform gradient descent, not in T's weight space, but in the activation space of its input, action
  5. Iterate 2-4 until a fixed point (minimum) is reached
  • T's final action "input" represents either:
    • An action which will take one to the goal (for nearby goal locations); or
    • An action which will (it is hoped!) move one closer to the goal (for distant goal locations)
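Steps 2-5 can be sketched as follows (my illustration, not the talk's code): on a toy ring with a discrete action set {-1, +1}, "descending error in T's action input" reduces to picking the action whose predicted outcome lies closest to the goal, executing it, and repeating. `ring_dist` and `plan_and_go` are invented names for the sketch.

```python
N = 8

def T(action, location):                 # T-map (known geometry here)
    return (location + action) % N

def ring_dist(a, b):
    """Error measure: shortest distance between two ring locations."""
    d = abs(a - b) % N
    return min(d, N - d)

def plan_and_go(loc, goal, max_steps=N):
    """Invert T by search: at each step, choose the action whose predicted
    outcome T(action, loc) minimises distance to the goal, then execute it."""
    path = [loc]
    for _ in range(max_steps):
        if loc == goal:
            break
        action = min((-1, 1), key=lambda a: ring_dist(T(a, loc), goal))
        loc = T(action, loc)             # execute; moves one step closer
        path.append(loc)
    return path
```

For distant goals each chosen action only moves one closer, exactly as the slide's second case describes.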

  16. Imaginative Planning (1)
  The Orienting System and the Inverse Model can be combined to provide an "imaginative" (content-addressable) planner:
  • Function from sensations and locations to actions
  • "Given my current location and a set of desired sensations (and my Predictive Map), what action will get me from here to a place where I will have those sensations?"

  17. Imaginative Planning (2)
  1. While exploring, store away (as desirable/undesirable) sensations that occur while in a state of pleasure/pain (or learn mapping R2 from the former to the latter)
  2. Use the Orienting System as before to find a location that provides not the current sensations, but rather the desired sensations (either stored, or obtained by inverting R2)
  3. Use the Inverse Model as before to find an action that will take one (closer) to the goal location derived in 2
  4. Execute the action
  5. Iterate 2-4 until the current sensations equal the desired sensations
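Combining the two systems can be sketched in the same toy ring world (again my own simplification: the orienting search over imagined locations is exhaustive, and the inverse model is the one-step ring heuristic; `goal_from_sensation` and `imagine_and_go` are invented names).

```python
N = 8
D = [0.1 * c for c in range(N)]          # D-map assumed already learned

def goal_from_sensation(desired):
    """Orienting over *imagined* locations: the location whose predicted
    sensation minimises error against the desired sensation."""
    return min(range(N), key=lambda loc: (D[loc] - desired) ** 2)

def imagine_and_go(loc, desired, max_steps=N):
    """Content-addressable planning: imagine where the desired sensation
    lives, then step toward it with the inverse model until arrival."""
    goal = goal_from_sensation(desired)
    path = [loc]
    while loc != goal and len(path) <= max_steps:
        step = 1 if (goal - loc) % N <= N // 2 else -1   # shorter way round
        loc = (loc + step) % N           # execute the action
        path.append(loc)
    return path
```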

  18. Connections to consciousness
  • We can use Igor's axioms to see how the CNM models (some aspects of) consciousness
  • Have already covered:
    • Imagination
    • Planning
    • Emotionally-guided (or at least motivationally-guided) decision/planning/action
  • But what about attention and perception?

  19. Attention
  • The iterative nature of the orienting and planning systems implies that the CNM can focus on only one spatial location at a time
  • This imposes a seriality on the CNM's imaginative capacities which some (e.g., Bernie) take to be central to consciousness
  • But one can also apply the CNM to attention directly:

  20. Attention: Extending SMCT
  • SMCT = sensory-motor contingency theory of visual perception (O'Regan, Noë, etc.)
  • Here, CNM actions are not movements of a mobile robot within a space, but changes of gaze/saccades
  • Can provide a mechanism for SMCT's mysterious "mastery":
    • The mechanism consists in a (multi-dimensional?) "confidence" measure obtained by, e.g., sampling prediction error
    • (Knowledge of) mastery consists in a high confidence measure
    • Makes phenomenologically present the perceptual space referenced in the error sampling
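One way such a confidence measure could be obtained from sampled prediction error is a running (exponentially smoothed) error average per region, with confidence as its complement. The talk only says the error is "sampled"; the smoothing scheme, the region labels, and every name below are my assumptions.

```python
def make_confidence(decay=0.9):
    """Track a smoothed prediction error per region; confidence = 1 - error.
    High confidence = mastery = the region counts as phenomenologically
    present, on the proposed extension of SMCT."""
    avg_err = {}
    def update(region, prediction_error):
        # exponential moving average, starting pessimistically at 1.0
        avg_err[region] = decay * avg_err.get(region, 1.0) \
                          + (1 - decay) * prediction_error
        return 1.0 - avg_err[region]
    return update

confidence = make_confidence()
for _ in range(50):
    c_known = confidence("kitchen", 0.0)   # consistently well-predicted region
    c_random = confidence("static", 1.0)   # randomised region: error stays high
```

After repeated sampling, the well-predicted region's confidence approaches 1 while the unpredictable region's stays near 0, matching the prediction on the next slides that a randomised space drops out of phenomenology.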

  21. General spatial phenomenology (1)
  • This suggests a way of modeling the general experience of "out-thereness" in perception
  • Go back to the CNM model of mobile robot navigation
  • The same "mastery" mechanism will make phenomenologically present the regions of space which the robot "knows about" (i.e., for which the error-monitoring signal of the Predictive Map is low)
  • In some sense, we can also feel located in a space even when we do not know how that space is filled out
    • This would require monitoring error for the T-map (rather than for the Predictive Map as a whole); this is well-defined

  22. General spatial phenomenology (2)
  • The approach makes an "empirical" prediction:
    • If the appearance of a space is randomised, so that prediction error can no longer be minimised, the space will eventually cease to be phenomenologically present to the subject
  • This makes (spatial) phenomenology:
    • world-dependent
    • external
    • non-solipsistic
    • non-individualistic

  23. Special spatial phenomenology
  • The CNM can also model more particular forms of spatial phenomenology:
    • The sensory appearance of the space (actual or imagined) around one
    • Can be displayed graphically
  • Issues:
    • Graphical display is, in general, underdetermined by the phenomenology
    • Should there be any constraints on what counts as a coherent phenomenological world?

  24. CNM: Special spatial (hetero)phenomenology
  For each action sequence A = (a1, …, an) in the action space for which there is mastery (high confidence in prediction):
  1. current location := T(ai, current location)
  2. s := D(current location)
  3. Plot s at the position corresponding to current location
  4. Increment i and go to 1
  • Assumes a perfectly systematic T-map
    • Much more complex (but interesting!) otherwise
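The display loop above can be sketched in the same toy ring world (my illustration): roll the T-map forward through an action sequence, read off D at each imagined location, and record the predicted sensation there, using a dict in place of a graphical plot. A perfectly systematic T-map is assumed, as on the slide; `display` and the action sequence are invented for the sketch.

```python
N = 8
D = [0.1 * c for c in range(N)]          # D-map assumed already learned

def T(action, location):                 # perfectly systematic T-map
    return (location + action) % N

def display(action_sequence, start=0):
    """Hetero-phenomenological 'plot': for each action a_i, update
    current location := T(a_i, current location), then record
    s := D(current location) at that location."""
    depicted = {}
    loc = start
    for a in action_sequence:
        loc = T(a, loc)                  # step 1 of the slide's loop
        depicted[loc] = D[loc]           # steps 2-3: compute s and "plot" it
    return depicted

scene = display([1, 1, 1, -1], start=0)  # a short mastered action sequence
```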

  25. Phenomenology of thought
  • Shanahan: Perhaps thought in general consists in imagination: thinking "horse" consists in playing mini-movies involving horses
  • Taylor: But how can these mini-movies be running simultaneously? Not consciously.
  • Shanahan: Perhaps subconsciously?
  • Answer: They aren't running at all

  26. Phenomenology of thought: Extending SMCT
  • A horse being phenomenologically present in thought consists in (knowledge of) one's capability of running mini-movies of a particular kind
  • For a horse to be phenomenologically present, the representation of horse would have to be accompanied by a "confidence" measure based on the "number" of coherent movies (discounted by their error values?) that have been generated using that representation
  • Is the purpose of dreaming to collect such statistics?

  27. Interactive FAQ
  • The structure of space is invariant; why waste time learning it?
  • Doesn't re-identification of places require re-identification of objects?
  • What do location vectors represent before a fully systematic topology is learned?
  • Is there an alternative way of doing "imaginative planning" in the CNM?
  • What are the known limitations of the CNM?
  • What other proposed developments are there?
  • What prior work has been done on the CNM? What has been published?
  • Can I download the CNM and try it out?
