Journal of Northeastern University (Social Science) ›› 2020, Vol. 22 ›› Issue (3): 14-20. DOI: 10.15936/j.cnki.1008-3758.2020.03.003

• Philosophy of Science and Technology •

Ethical Implications of Artificial Agents' "Intending Not to Do"

WANG Shu-qing

  1. (College of Public Administration, Hunan Normal University, Changsha 410081, China)
  • Received: 2019-07-09 Revised: 2019-07-09 Online: 2020-05-25 Published: 2020-05-25
  • Corresponding author: WANG Shu-qing
  • About the author: WANG Shu-qing (1986-), male, from Leiyang, Hunan, lecturer at Hunan Normal University, Ph.D. in philosophy; his research focuses on the ethics of artificial intelligence and the philosophy of action.
  • Supported by:
    The Major Program of the National Social Science Fund of China (17ZDA023).



Abstract: The development of AI has led people to expect artificial agents to possess a partial capacity to distinguish moral right from wrong, and in particular to refrain, of their own accord, from performing immoral actions. Based on a functionalist account of moral agency, this paper explains in what sense artificial moral agents are possible. Just as humans may intentionally not do something, artificial agents should likewise act on moral reasons when they intentionally refrain from, or omit, doing something. Accordingly, with respect to "intending not to do", artificial agents should possess two moral characteristics: strong sensitivity to possible harms and weak autonomy in moral decision-making. Finally, from the perspective of cybernetics, the paper envisages how these two characteristics might be embedded into artificial agents, and argues that "intentional negligence" poses a more difficult problem for the ethical control of autonomous artificial agents.

Key words: artificial moral agent, intending not to do, strong sensitivity, weak autonomy, ethical control
