
The Future of (Artificial) Intelligence


Presentation Transcript


  1. The Future of (Artificial) Intelligence. Stuart Russell, University of California, Berkeley and Université Pierre et Marie Curie

  2. Why are we doing AI? To create intelligent systems

  3. Why are we doing AI? • To create intelligent systems • The more intelligent, the better

  4. Why are we doing AI? • To create intelligent systems • The more intelligent, the better • We believe we can succeed • Limited only by ingenuity and physics

  5. John McCarthy and Claude Shannon Dartmouth Workshop Proposal, 1956 An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made if [we] work on it together for a summer.

  6. Why are we doing AI? • To create intelligent systems • The more intelligent, the better • To gain a better understanding of human intelligence

  7. Why are we doing AI? • To create intelligent systems • The more intelligent, the better • To gain a better understanding of human intelligence • To magnify those benefits that flow from it

  8. Anil Ananthaswamy, “I, Algorithm: A new dawn for AI,” New Scientist, Jan 29, 2011

  9. An industry arms race • Once performance exceeds a minimum level, small improvements are worth billions • Speech • Text understanding • Object recognition • Automated vehicles • Domestic robots

  10. Military arms race

  11. What if we do succeed? “The first ultraintelligent machine is the last invention that man need ever make.” (I. J. Good, 1965) Might help us avoid war and ecological catastrophes, achieve immortality, and expand throughout the universe. Success would be the biggest event in human history

  12. What if we do succeed? “The first ultraintelligent machine is the last invention that man need ever make.” (I. J. Good, 1965) Might help us avoid war and ecological catastrophes, achieve immortality, and expand throughout the universe. Success would be the biggest event in human history … and perhaps the last

  13. So, if that matters… • Along what paths will AI evolve? • What is the (plausibly reachable) best case? Worst case? • Can we affect the future of AI? • Technical or societal solutions? • What should we do now?

  14. This needs serious thought If a superior alien civilization sent us an email saying, “We’ll arrive in 30-50 years,” would we just reply, “OK, call us when you get here, we’ll leave the light on”? The AI community needs a substantial institutional commitment, reasonably soon

  15. Precedent: Nuclear Physics • Rutherford (1933): “Anyone who looks for a source of power in the transformation of the atoms is talking moonshine.” • Sept 12, 1933: “The stoplight changed to green. Szilard stepped off the curb. As he crossed the street time cracked open before him and he saw a way to the future, death into the world and all our woes, the shape of things to come.” • Szilard (1934): patent on nuclear chain reaction; kept secret

  16. Precedent: Nuclear Physics • Hahn et al. (1939): uranium fission • Szilard and Fermi (1939): uranium chain reaction. “That night, there was very little doubt in my mind that the world was headed for grief.” • Einstein/Szilard (1939): letter to Pres. Roosevelt urging development of nuclear weapons before the Nazis • Szilard et al. (1945): petition, signed by 70 scientists, to end the war by inviting the Japanese to an A-bomb test • Emergency Committee of Atomic Scientists

  17. Precedent: Chemical and Biological Weapons • 1925 Geneva Protocol banned use in warfare, but R&D and stockpiling continued • Long negotiations (until 1972 for biological, 1992 for chemical weapons) • 1975 (biological) and 1997 (chemical) treaties ban “development, production, acquisition, stockpiling, retention, transfer or use”

  18. Precedent: Genetic Engineering • 1973-74: Paul Berg stopped his own experiment to insert carcinogenic virus DNA into E. coli; prominent scientists requested a moratorium on recombinant DNA experiments • Asilomar conference (1975) set up guidelines: physical and biological containment, risk analysis, ban on disease/toxin organism manipulation • Credited with avoiding restrictive legislation • Industry compliance via FDA controls on sales

  19. Precedent: Genetic Engineering • NIH Recombinant DNA Advisory Committee will not approve any protocol modifying the human germline • Cartagena Protocol (2003) governs trade in GMOs, enshrines the precautionary principle • 2010: Presidential Commission proposes federal oversight of synthetic biology activities • 2012: 100+ NGOs call for a worldwide moratorium

  20. The process is beginning • MIRI, FHI, CSER, FLI, etc. • AAAI task force (Horvitz & Selman) • US Air Force: Test, Evaluation, Verification, and Validation for Autonomy

  21. The process is beginning • MIRI, FHI, CSER, FLI, etc. • AAAI task force (Horvitz & Selman) • US Air Force: Test, Evaluation, Verification, and Validation for Autonomy • Today’s meeting

  22. Meeting Schedule
      1.00-1.30   Introductions (Russell)
      1.30-1.50   Robots (Veloso)
      1.50-2.10   Intelligence explosion (Dewey)
      2.10-2.30   Unintended consequences (Shanahan)
      2.30-2.50   Autonomous trading (Wellman)
      2.50-3.10   Ontology, organizations (Mallah)
      3.10-3.30   Break
      3.30-5.00   Technical research agenda
      5.00-6.30   Organizational/socioeconomic responses
      6.30-7.00   Walk to reception
      7.00-8.30   AAMAS reception, Sorbonne “Cordeliers”
      8.30-10.30  Dinner, Le Procope, 13 rue de l’Ancienne Comédie

  23. Technical research agenda
      • Verification/validation
      • Designing reward/utility functions
      • Iterated design/simulation/verification
      • Inductive specification from examples
      • Ensuring compliance
      • Theory of agents
      • Subsumption, composition, distribution, cooperation, transparency, etc.
      • Theory of non-agents
      • Pure question-answering systems
      • Meta-non-agents
      • Bootstrapping methods
      • Superintelligent self-verification
      • Superintelligent boxing
      • “If you were me, what utility function would you give yourself?”
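
One agenda item above, inductive specification from examples, can be made concrete with a minimal sketch (not from the slides): assume the reward is a linear combination of a few hand-chosen behavior features, then fit its weights so that behaviors a designer approves of score higher than behaviors the designer rejects. The feature names, the data, and the crude random-search optimizer below are all illustrative assumptions.

import numpy as np

# Hypothetical feature summaries of candidate behaviors:
# [task_progress, energy_used, rule_violations]
approved = np.array([
    [0.9, 0.3, 0.0],
    [0.8, 0.4, 0.0],
])
rejected = np.array([
    [1.0, 0.2, 0.6],   # fast, but violates a rule
    [0.2, 0.1, 0.0],   # harmless, but makes little progress
])

def margin_loss(w):
    """Hinge-style loss: every approved behavior should outscore every rejected one."""
    scores_ok = approved @ w
    scores_bad = rejected @ w
    gaps = scores_bad[None, :] - scores_ok[:, None] + 1.0    # want each gap <= 0
    return np.maximum(gaps, 0.0).sum() + 0.1 * np.dot(w, w)  # small regularizer

# Crude random search over weight vectors; a serious treatment would use a proper
# optimizer and a much richer hypothesis space for the reward function.
rng = np.random.default_rng(0)
best_w, best_loss = None, float("inf")
for _ in range(5000):
    w = rng.normal(size=3)
    loss = margin_loss(w)
    if loss < best_loss:
        best_w, best_loss = w, loss

print("Inferred reward weights:", np.round(best_w, 2))

The point of the toy is only that the fitted reward ends up penalizing rule violations because the examples say so, not because anyone wrote that rule into a formula; the agenda’s verification and compliance items ask how far such inferred specifications can be trusted.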

  24. Organizational/socioeconomic responses
      • One international, professional society
      • Keep tabs on worldwide effort levels, etc.
      • Promulgate culture of responsibility, safety
      • Develop standards for risk analysis/verification
      • Keep decision makers and society well informed
      • Lobby for research funding
      • Legal standards for liability, etc.
      • Conferences, conference tracks, journals
