Journal of Northeastern University (Social Science) ›› 2020, Vol. 22 ›› Issue (3): 14-20. DOI: 10.15936/j.cnki.1008-3758.2020.03.003

• Scientific and Technological Philosophy •

Ethical Implications of Artificial Agents' “Intending Not to Do”

WANG Shu-qing   

  College of Public Administration, Hunan Normal University, Changsha 410081, China
  • Received: 2019-07-09  Revised: 2019-07-09  Online: 2020-05-25  Published: 2020-05-25

Abstract: With the development of AI, artificial agents are expected to possess a partial ability to distinguish between “morality” and “non-morality”, and in particular to refrain from performing immoral actions of their own accord. Based on a functionalist account of moral agency, this paper discusses in what sense artificial moral agents are possible. Just as human negative actions (omissions) carry moral weight, artificial agents should likewise be evaluated morally when they intentionally refrain from doing something or neglect to do it. Accordingly, artificial agents should possess two moral characteristics: strong sensitivity to possible harms, and weak autonomy in moral decision-making. Finally, from the perspective of cybernetics, ways to embed these two characteristics into artificial agents are envisaged, and it is argued that “intentional negligence” poses the harder problem for the ethical control of autonomous artificial agents.
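
As a purely illustrative aside (not part of the original article), the cybernetic proposal in the abstract can be read as a control loop: harm estimates gate what the agent may do (strong sensitivity), and low-confidence judgments are escalated to a human rather than decided autonomously (weak autonomy), with “intending not to do” realized as the loop returning no action at all. A minimal Python sketch under these assumptions follows; every name and threshold in it is a hypothetical placeholder.

    # Illustrative sketch only: the "strong sensitivity / weak autonomy"
    # proposal rendered as a simple decision loop. All identifiers and
    # thresholds are hypothetical, not taken from the paper.
    from dataclasses import dataclass
    from typing import Callable, Iterable, Optional

    HARM_THRESHOLD = 0.1    # strong sensitivity: veto even modest expected harm
    CONFIDENCE_FLOOR = 0.8  # weak autonomy: defer to a human below this confidence

    @dataclass
    class Action:
        name: str
        expected_harm: float  # agent's own harm estimate, in [0, 1]
        confidence: float     # confidence in that estimate, in [0, 1]

    def choose_action(candidates: Iterable[Action],
                      ask_human: Callable[[Action], bool]) -> Optional[Action]:
        """Return an action to perform, or None ("intending not to do")."""
        for action in candidates:
            # Strong sensitivity: refrain whenever estimated harm is non-negligible.
            if action.expected_harm > HARM_THRESHOLD:
                continue  # intentional omission of this candidate
            # Weak autonomy: uncertain judgments are escalated, not decided alone.
            if action.confidence < CONFIDENCE_FLOOR:
                if ask_human(action):
                    return action
                continue
            return action
        # No candidate passed the checks: the agent deliberately does nothing.
        return None

Even in this toy form, the loop exhibits the difficulty the abstract flags: observed from outside, a justified refusal (high estimated harm) and a negligent failure to act produce the same behavior, which is why “intentional negligence” is hard to audit and control.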

Key words: artificial moral agent, intending not to do, strong sensitivity, weak autonomy, ethical control
