The Ethical Status of Artificial Agents – With and Without Consciousness E-Intentionality 9/11/06. Steve Torrance Middlesex University, UK and University of Sussex, UK [email protected]
… our interaction with them;
… and our ethical relation to them.
Creating machines which perform in ways which require X when humans perform in those ways…
Artificial Consciousness (AC):
creating machines which perform in ways which require consciousness when humans perform in those ways (?)
Where is the psychological reality of consciousness in this?
‘functional’ versus ‘phenomenal’ consciousness?
Psychologically real versus just simulated artificial consciousness…
Not to deny that debates like the Chinese Room have aroused strong passions over many years…
(unlike working in AI?)
… puts special ethical responsibilities on the shoulders of researchers
we, as developers and users of technologies,
…ought to use such technologies to best meet
our existing ethical ends,
within existing ethical frameworks
(The latter may overlap with the former)
BENZ 3-WHEELER 1.7 LITRES (GERMANY, 1885)
CAR WRECK (USA, 2005)
Instrumental versus intrinsic stance
This is one illustration of the move from ‘conventional’ techno-ethics to artificial ethics
What is the relation between AE and AC?
(totality of moral agents)
?? Could non-conscious artificial agents
have genuine moral status …
(a) As moral consumers?
(having moral claims on us)
(b) As moral producers?
(having moral responsibilities towards us (and themselves))
(b) as genuine moral producers
So, on the ‘strong’ view, non-conscious AAs will have no real ethical status
(i.e. as having genuine moral responsibilities)
But it may be necessary for an AA to be considered
(b) as a genuine moral consumer
(i.e. as having genuine moral claims on the moral community)
– even if not moral ‘responsibility’ in a full-blooded sense
(*i.e. this kind of moral status may attach to such agents quite independently of their status as conscious agents)
These seem to require the presence of psychologically real consciousness
To perform ‘outwardly’ to ethical standards of conduct
This creates an urgent and very challenging programme of research for now…
developing appropriate ‘shallow’ ethical simulations…
Automated ‘moral pilot’ systems?
See Blay Whitby, “Computing Machinery and Morality”, submitted to AI and Society
become highly prized collectors' items
… does not depend on our attributing consciousness to him …
… do not depend on our attributing consciousness to him …
5. Questions can be raised about the strong view
- (automated ethical advisors; property ownership)
6. There are many important ways in which a kind of (shallow) ethics has to be developed for present day and future non-conscious agents.
7. But in an ultimate, ‘deep’ sense, perhaps AC and AE go together closely
(NB: In my paper ‘Ethics and Consciousness in Artificial Agents’
I defend the strong view much more robustly, as the ‘organic’ view.)