The ethics of interaction with neurorobotic agents: a case study with BabyX
Author: Mark Sagar, Martin Takac, Alistair Knott
Year of publication: 2021
Subject: Warrant; Philosophy of mind; Mechanical Engineering; Perspective (graphical); Energy Engineering and Power Technology; Management Science and Operations Research; Epistemology; Harm; Argument; Set (psychology); Relation (history of concept); Psychology; Simple (philosophy); social sciences; humanities and the arts; philosophy, ethics and religion; experimental psychology; applied ethics; psychology and cognitive sciences
Source: AI and Ethics 2:115–128
ISSN: 2730-5961, 2730-5953
DOI: 10.1007/s43681-021-00076-x
Abstract: As AI advances, models of simulated humans are becoming increasingly realistic. A new debate has arisen about the ethics of interacting with these realistic agents—and in particular, whether any harms arise from ‘mistreatment’ of such agents. In this paper, we advance this debate by discussing a model we have developed (‘BabyX’), which simulates a human infant. The model produces realistic behaviours, and it does so using a schematic model of certain human brain mechanisms. We first consider harms that may arise from effects on the user—in particular, effects on the user’s behaviour towards real babies. We then consider whether there is any need to consider harms from the ‘perspective’ of the simulated baby. The first topic raises practical ethical questions, many of which are empirical in nature. We argue that the potential for harm is real enough to warrant restrictions on the use of BabyX. The second topic raises a very different set of questions in the philosophy of mind. Here, we argue that BabyX’s biologically inspired model of emotions raises important moral questions, and places BabyX in a different category from avatars whose emotional behaviours are ‘faked’ by simple rules. This argument counters John Danaher’s recently proposed ‘moral behaviourism’. We conclude that the developers of simulated humans have useful contributions to make to debates about moral patiency—and also have certain new responsibilities in relation to the simulations they build.
Database: OpenAIRE
External link: