Wisdom DOES Imply Benevolence
Mark R. Waser

Presentation Transcript
Super-Intelligence ⇒ Ethics

(except in a very small number of low-probability edge cases)

So . . . What’s the problem?

Superintelligence does not imply benevolence

Fox, J. & Shulman, C. (2010). Superintelligence Does Not Imply Benevolence. In K. Mainzer (ed.), ECAP10: VIII European Conference on Computing and Philosophy (pp. 456-462). Munich: Verlag.


If machines become more intelligent than humans, will their intelligence lead them toward beneficial behavior toward humans even without specific efforts to design moral machines?

References
  • Evolution of reciprocal altruism (Trivers 1971)
  • Increase in scope of cooperation (Wright 2000)
  • Reduction in rates of violence (Pinker 2007)
  • Expanding circle of moral concern (Singer 1981)
  • D. Gauthier
  • J. Haidt
  • S. Omohundro

One might generalize from this trend and argue that as machines approach and exceed human cognitive capacities, moral behavior will improve in tandem.

Ceteris Paribus (other things being equal)
  • intelligence

can be far less important than

  • goal system properties & content

in determining benevolence vs. malevolence

intelligence – the ability to achieve goals in a wide range of environments.
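This phrasing matches Legg and Hutter's well-known informal definition of universal intelligence (they are not cited on the slide, so the following gloss is added purely as an illustration). Their formal measure weights an agent's performance across all computable environments by the simplicity of each environment:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward of policy $\pi$ in $\mu$. Nothing in this measure constrains what the rewards are *for*, which is exactly the slide's point that goal content is a separate axis from intelligence.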

For example,

If an intelligence has the single goal to *destroy humanity*, increased intelligence will only make it more malevolent
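A minimal toy sketch of this point (hypothetical code; the model, names, and numbers are illustrative assumptions, not anything from the presentation): "intelligence" is modeled purely as search effort over candidate plans, while the goal function being optimized stays fixed.

    import random

    # Purely hypothetical toy model: a "plan" is just a number, and
    # harm_to_humans() is a stand-in for how badly that plan damages
    # human interests.
    def harm_to_humans(plan):
        return plan

    def malevolent_goal(plan):
        # The fixed top-level goal literally rewards harming humans.
        return harm_to_humans(plan)

    def choose_plan(goal, search_budget, rng=random.Random(0)):
        # "Intelligence" here is only search effort over candidate plans;
        # the goal itself is untouched by that effort.
        candidates = [rng.random() for _ in range(search_budget)]
        return max(candidates, key=goal)

    for budget in (10, 1_000, 100_000):
        best = choose_plan(malevolent_goal, budget)
        print(budget, harm_to_humans(best))  # harm tends to rise with capability

Swapping malevolent_goal for any other fixed goal changes *what* gets optimized, not *how well*; that is the sense in which goal-system content can matter far more than raw intelligence.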

The human motivational system is opaque, messy, and conflicted, but most importantly transient!

The primary danger of AIs is entirely due to the fact that their goal system *could* be different

“Friendly AI” (Yudkowsky 2001)

An artificial intelligence with a cleanly hierarchical goal system with a single top-level (monomaniacal) goal of “Friendliness” (to humans)

Imagine a “Friendly AI” where Friendliness has been defined (hopefully accidentally) as *DESTROY HUMANITY*

Wisdom

The goal/motivation to fulfill the maximal number and diversity of goals (a toy sketch follows the bullets below).

  • Avoids “lock-in” and short-sighted over-optimization of goals/utility functions (smoking)
  • Avoids undesirable endgame strategies (prisoner’s dilemma)
  • Promotes avoiding unnecessary actions that preclude reachable goals, including wasting resources and alienating or destroying potential cooperators (waste not, want not)
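A toy sketch of this "wisdom" heuristic (hypothetical model, not the author's code): states are sets of remaining resources, each goal is the set of resources it needs, and each action consumes resources. The wise choice is the action after which the most goals remain reachable.

    # Hypothetical toy model of goal preservation ("waste not, want not").
    def reachable(goals, resources):
        # Goals still fulfillable with the resources that remain.
        return [g for g in goals if g <= resources]

    def wise_choice(actions, resources, goals):
        # actions: name -> set of resources that action uses up.
        return max(actions,
                   key=lambda a: len(reachable(goals, resources - actions[a])))

    goals = [{"water"}, {"water", "steel"}, {"land"}]
    resources = {"water", "steel", "land"}
    actions = {"pave_everything": {"land", "water"}, "build_tool": {"steel"}}
    print(wise_choice(actions, resources, goals))  # -> build_tool (2 goals stay reachable)

Over-optimizing toward a single end ("pave_everything") forecloses other reachable goals and potential cooperators, which is exactly the point of the bullets above.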
Two conceptions of morality

This picture neglects a critical distinction between

1. A system for cooperation

Advances one’s own ends

AIs will out-cooperate humans (Hall 2007)

2. A system to protect the weak/helpless

Demands revision of our ultimate ends

Will AIs revise their preferences to be more moral (Chalmers 2010)?

Paths from intelligence to moral behavior (ways in which increased intelligence might prompt behavior favorable to humans)

1. noticing direct instrumental motivations

Advances one’s own ends (transient)

2. noticing instrumental benefits to enduring benevolent dispositions/trustworthiness

Advances one’s own ends (permanent?)

3. causing an intrinsic desire for human welfare independent of instrumental concerns

Revision of ends/desires (maybe?)


If you have a verifiable history of being trustworthy when not forced, others do not have to commit resources to defending against you – and can pass some of those savings on to you

On the other hand, if you harm (or worse, destroy) interesting or useful entities, more powerful entities will likely decide that *you* need to spend resources as reparations and altruistic punishment (as well as paying the cost of enforcement)

Basic AI Drives

Instrumental Goals

Steve Omohundro, Proceedings of the First AGI Conference, 2008

1. AIs will want to self-improve

2. AIs will want to be rational

3. AIs will try to preserve their utility

4. AIs will try to prevent counterfeit utility

5. AIs will be self-protective

6. AIs will want to acquire resources and use them efficiently


Cooperation is an instrumental goal!

“Without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources.”

Any sufficiently advanced intelligence (i.e. one with even merely adequate foresight) is guaranteed to realize and take into account the fact that not asking for help and not being concerned about others will generally only work for a brief period of time before ‘the villagers start gathering pitchforks and torches.’

Everything is easier with help & without interference

Goal Systems, Morality, and David Hume’s Is-Ought Divide

In every system of morality, which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it shou'd be observ'd and explain'd; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

Ought
  • Requires a goal or desire (or, more correctly, multiples thereof)
  • IS a superset of the set of actions most likely to fulfill those goals/desires
  • For the sum of all goals converges to a universal morality
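One possible formal gloss of these bullets (my notation, offered tentatively, not the author's): for a bundle of goals $G$ and available actions $A$,

$$\mathrm{Ought}(G) \;\supseteq\; \operatorname*{arg\,max}_{a \in A} \Pr\big(G \text{ fulfilled} \mid a\big),$$

and the claim is that as $G$ grows toward the sum of all goals of all agents, $\mathrm{Ought}(G)$ converges toward a single, universal morality.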

Moral Systems Are . . .

interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.

Haidt & Kesebir, Handbook of Social Psychology, 5th Ed., 2010

Are values dependent upon intelligence?

Humean view – values are entirely independent of intelligence

Kantian view – many extremely intelligent beings would converge on (possibly benevolent) substantive normative principles upon reflection

Arguments Pro & Con
  • Against Kantian – AIXI has no room to move from reason to values
  • Against Kantian – Humean design is a stable equilibrium unless the utility function is self-referential
  • Pro Kantian – Humans change our goals under reflection and “often acquire intrinsic preferences for correlates of instrumentally useful actions”.
Quick Answer
  • Values are dependent upon goals
  • Values are dependent upon instrumental goals as long as they do not conflict with primary goals
  • Intelligence allows you to see this and take advantage of it, so . . . . YES!

EXAMPLE: Waste not, want not.

Thought Experiment

How would a super-intelligence behave if it knew that it had a goal but that it wouldn’t know that goal until sometime in the future?

Preserving some weak entity it encounters may turn out to be that goal

Or that entity might have knowledge/skills the eventual goal requires
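Put slightly more formally (my gloss, under the slide's assumptions): if the eventual goal $g$ will be drawn from some distribution $p$, then the entity should act now to maximize expected achievability,

$$a^{*} \;=\; \operatorname*{arg\,max}_{a}\; \mathbb{E}_{g \sim p}\big[\Pr(g \text{ still achievable after } a)\big],$$

which pushes toward preserving entities, knowledge, and option value rather than destroying them.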

Reprise: Three Views of Wisdom
  • Waste not, want not
  • Block as few goals as possible, particularly Omohundro drives
  • Fulfill as many goals as possible
Power
  • Many of those concerned about intelligent machines appear obsessed with power levels
  • Yet, interestingly enough, power is notable in *NOT* being on Omohundro’s list (i.e., it is not a true instrumental goal)
  • Will greater intelligence eschew power for efficiency (in diversity)?
An Alternate View of Intelligence
  • Greater cognitive resources lead to marked improvements in prediction and reductions in time discounting (a worked example follows this list)
  • Leads to moving planning horizons out and moving from short-term REQUIREMENTS to long-term optimality
  • Indeed, a truly intelligent entity should never be caught in a situation where . . . . (unless out-thought by an even greater intelligence)
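A worked illustration of the discounting point (the numbers are mine and purely illustrative): suppose defecting yields a one-time gain of 5 but costs 1 per round thereafter, with each future round discounted by a factor $\gamma$. Defection pays only while

$$5 \;>\; \sum_{t \ge 1} \gamma^{t} \cdot 1 \;=\; \frac{\gamma}{1-\gamma},$$

so a short-horizon agent with $\gamma = 0.5$ (future losses worth 1) defects, while a long-horizon agent with $\gamma = 0.99$ (future losses worth 99) cooperates. Lower time discounting therefore shifts the calculation from short-term requirements toward long-term optimality.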
“Self-Interest” vs. Ethics

“Self-Interest”:
  • Higher personal utility (in the short term only)
  • More options to choose from (in the short term only)
  • Fewer restrictions

Ethics:
  • Higher global utility
  • Less risk (if caught)
  • Lower cognitive cost (fewer options, no need to track lies, etc.)
  • Assistance & protection when needed/desired