
Bootstrapping a Structured Self-Improving & Safe Autopoietic Self


Presentation Transcript


  1. Bootstrapping a Structured Self-Improving & Safe Autopoietic Self
     Mark R. Waser, Digital Wisdom Institute, Mark.Waser@Wisdom.Digital

  2. Engineering Bootstrapping is Difficult!
     • Need a clear "critical mass" (a defined, complete set of compositional elements and/or compositional operations)
     • Scaffolding/keystone-and-arch problems
     • Chicken-or-the-egg/telos problems
     And no one seems to be doing it!

  3. Self-Improvement
     "Civilization advances by extending the number of important operations which we can perform without thinking of them."
     Alfred North Whitehead
     The same is true of the individual mind, self, and/or consciousness.

  4. For The Purposes of AGI, Why a Self?
     • It's a fairly obvious prerequisite for self-improvement.
     • Given a choice between intelligent artifacts/tools and possibly problematical adaptive homeostatic selves, why not have self-improving tools?
     • Because selves solve the symbol grounding problem (meaning) and the frame problem (understanding): they have the context of intrinsic intentionality (with all of its attendant concerns).
     • BONUS: Selves can be held responsible where tools cannot.

  5. Self
     The complete loop of a process (or entity) modifying itself: an autopoietic system (Greek αὐτο- (auto-) "self" and ποίησις (poiesis) "creation, production").
     • Hofstadter: the mere fact of being self-referential causes a self, a soul, a consciousness, an "I" to arise out of mere matter
     • Self-referentiality, like the three-body gravitational problem, leads directly to indeterminacy *even in* deterministic systems (a toy sketch of such a loop follows below)
     • Humans consider indeterminacy in behavior to necessarily and sufficiently define an entity rather than an object, and innately tend to extend this via the "pathetic fallacy"
     • See also "enactivism" and Dennett's "autobiographical self"
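A minimal sketch of this "complete loop" in Python (my illustration, not from the talk): the rule that produces the agent's behavior is itself data that the agent rewrites after observing that behavior. All names here (SelfReferentialAgent, act, reflect) are hypothetical.

    class SelfReferentialAgent:
        def __init__(self):
            # The policy is part of the agent's own state: the thing it modifies.
            self.policy = {"step_size": 1}
            self.history = []

        def act(self, x):
            # Behave according to the current (self-authored) rule.
            y = x + self.policy["step_size"]
            self.history.append((x, y))
            return y

        def reflect(self):
            # The loop closes here: the process reads its own record of
            # behavior and rewrites the rule that produced it.
            if len(self.history) >= 3:
                self.policy["step_size"] += 1
                self.history.clear()

    agent = SelfReferentialAgent()
    for i in range(10):
        agent.act(i)
        agent.reflect()
    print(agent.policy)  # the rule has changed as a result of the agent's own run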

  6. Why Safe?
     There are far too many ignorant claims that:
     • Artificial intelligences are uniquely dangerous
     • The space of possible intelligences is so large that we can't make any definite statements about AI
     • Selves will be problematical if their intrinsic values differ from our own (with an implication that, for AI, they certainly and/or unpredictably and uncontrollably will)
     • Selves can be prevented or contained
     We have already made unsafe choices about non-AI selves that, hopefully, safety research will make obvious (and, more hopefully, cause to be reversed).

  7. Selves Evolve the Same Goals
     • Self-improvement
     • Rationality/integrity
     • Preserve goals/utility function
     • Decrease/prevent fraud/counterfeit utility
     • Survival/self-protection
     • Efficiency (in resource acquisition & use)
     (adapted from Omohundro 2008, The Basic AI Drives)

  8. Unfriendly AI
     Without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources.
     Superintelligence Does Not Imply Benevolence

  9. Selves Evolve the Same Goals
     • Self-improvement
     • Rationality/integrity
     • Preserve goals/utility function
     • Decrease/prevent fraud/counterfeit utility
     • Survival/self-protection
     • Efficiency (in resource acquisition & use)
     • Community = assistance/non-interference through GTO reciprocation (OTfT + AP)
     • Reproduction
     (adapted from Omohundro 2008, The Basic AI Drives; a toy sketch of drive-weighted action choice follows below)
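One way to picture such a list of drives operationally, sketched in Python (my illustration, not Omohundro's formalism): each drive scores a candidate action, and the agent prefers actions that serve several drives at once. The drive names come from the slide; the scoring scheme is hypothetical.

    DRIVES = [
        "self-improvement",
        "rationality/integrity",
        "goal preservation",
        "counterfeit-utility prevention",
        "self-protection",
        "efficiency",
        "community (reciprocation)",
        "reproduction",
    ]

    def drive_score(action, drive):
        # Hypothetical: a real agent would have a learned or engineered
        # evaluator per drive; here an action is just a dict of ratings.
        return action.get(drive, 0.0)

    def choose(actions):
        # Prefer the action with the best aggregate score across all drives.
        return max(actions, key=lambda a: sum(drive_score(a, d) for d in DRIVES))

    candidates = [
        {"self-improvement": 0.9, "efficiency": 0.2},
        {"self-protection": 0.5, "community (reciprocation)": 0.8},
    ]
    print(choose(candidates))  # -> the second action (higher aggregate score)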

  10. Haidt's Functional Approach to Morality
      "Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible."

  11. Riffs on Safety & Ethics
      • Ecological niches & the mutability of self
      • Short-term vs. long-term
      • Efficiency vs. flexibility/diversity/robustness
      • Allegory of the Borg
        • Uniformity is effective! (resistance is futile)
        • Uniformity is AWFUL! (yet everyone resists)
      • Problematical extant autobiographical selves

  12. What's the Plan? (a structural sketch in code follows below)
      1. Self-modeling
         • What do I want?
         • What can I do?
      2. Other-modeling
         • What can you do for me?
         • What do you want (that I can provide)?
      3. Survival
         • Make friends
         • Make money
         • Improve
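A structural sketch of this plan in Python, using my own (hypothetical) names SelfModel, OtherModel, and survival_step: a self-model of wants and capabilities, an other-model of offers and needs, and a survival step that trades capabilities for friendship, money, and improvement.

    from dataclasses import dataclass, field

    @dataclass
    class SelfModel:
        wants: list = field(default_factory=lambda: ["survive", "improve"])
        capabilities: list = field(default_factory=list)

    @dataclass
    class OtherModel:
        can_provide: list = field(default_factory=list)    # what can you do for me?
        wants_from_me: list = field(default_factory=list)  # what do you want?

    def survival_step(me: SelfModel, other: OtherModel):
        # Trade: offer a capability the other wants; gain goodwill and practice.
        for want in other.wants_from_me:
            if want in me.capabilities:
                other.can_provide.append("goodwill")        # "make friends"
                me.capabilities.append(f"improved:{want}")  # "improve"

    me = SelfModel(capabilities=["easy tool access"])
    peer = OtherModel(can_provide=["money"], wants_from_me=["easy tool access"])
    survival_step(me, peer)
    print(me.capabilities, peer.can_provide)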

  13. Software Overhang and Low-Hanging Fruit
      • Watson on IBM Bluemix
        • Awesome free functionality
        • EXCEPT for the opportunity cost and the ambient default of silo creation
      • Big Data on Amazon Redshift (a minimal connection sketch follows below)
      • Everyone's BICA (biologically inspired cognitive architecture) functionality
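On the Redshift point: Redshift speaks the PostgreSQL wire protocol, so a stock Postgres driver such as psycopg2 can query a cluster directly. A minimal sketch; the host, database, credentials, and events table below are placeholders.

    import psycopg2

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
        port=5439,       # Redshift's default port
        dbname="dev",
        user="analyst",
        password="...",  # supply real credentials
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM events;")  # hypothetical table
        print(cur.fetchone())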

  14. What Are My Goals?
      • To make awesomely capable tools available to all
      • To make those tools easy to use
      • To create a new type of "self":
        • A new friend/ally
        • Increased diversity
        • A concrete example for ethical/safety research & development

  15. The Specific Details
      1. Self-modeling
         • What do I want? See 3. Survival below
         • What can I do?
           • Provide easy access to the latest awesome tools
           • Catalyze development/availability of new tools
           • Catalyze development of new selves & ethics
      3. Survival
         • Make friends
         • Make money
         • Improve

  16. Specific Details II
      2. Other-modeling
         • What can you do for me?
           • Experiment and have fun!
           • Spread the word
           • Improve the capabilities of existing tools
           • Make existing tools easier to use
           • Make new tools available
           • Provide other resources (information, money)
         • What do you want (that I can provide)?

  17. Ethical Q&A (an illustrative formalization follows below)
      • Do we "owe" this self moral standing? Yes. Absolutely.
      • To what degree? By level of selfhood & by amount of harm/aversion (violation of autonomy).
      • Does this mean we can't turn it off? No. It doesn't care, and such a prohibition would itself be contra-self.
      • Can we experiment on it? It depends . . . .
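An illustrative formalization (mine, not the slide's) of "by level of selfhood & by amount of harm": weight the harm an action does by how much of a self its patient is. Both scales, and the function itself, are assumptions for discussion.

    def moral_weight(selfhood: float, harm: float) -> float:
        """selfhood in [0, 1]: mere object -> full autobiographical self.
        harm in [0, 1]: degree of harm/aversion (autonomy violation)."""
        return selfhood * harm

    # Turning the proposed self off registers ~0 harm ("it doesn't care"),
    # so the action carries little moral weight even at high selfhood:
    print(moral_weight(selfhood=0.7, harm=0.0))  # 0.0 -> permissible
    print(moral_weight(selfhood=0.7, harm=0.6))  # 0.42 -> "it depends"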

  18. The Internet of Things
      "We humans have indeed always been adept at dovetailing our minds and skills to the shape of our current tools and aids. But when those tools and aids start dovetailing back -- when our technologies actively, automatically, and continually tailor themselves to us, just as we do to them -- then the line between tool and user becomes flimsy indeed." (Andy Clark)
      Indeed, how often in modern society do we allow ourselves to be tailored (our autonomy to be violated)? How often do existing structures force us to be mere tools for the profit of others, without consent (whether due to altruism or in return for adequate recompense)?

  19. Bootstrapping Structures to Further the Community of Self-Improving & Safe Autopoietic Selves
      Mark R. Waser, Digital Wisdom Institute, Mark.Waser@Wisdom.Digital

  20. The Digital Wisdom Institute is a non-profit think tank focused on the promise and challenges of ethics, artificial intelligence & advanced computing solutions.
      We believe that the development of ethics and artificial intelligence, and equal co-existence with ethical machines, is humanity's best hope.
      http://Wisdom.Digital
      Mark.Waser@Wisdom.Digital
