Journal of Northeastern University (Social Science) ›› 2022, Vol. 24 ›› Issue (1): 1-9. DOI: 10.15936/j.cnki.1008-3758.2022.01.001

• Studies in Philosophy of Science and Technology •

From Algorithm Bias to Algorithm Discrimination: Research on the Responsibility of Algorithmic Discrimination

MENG Lingyu

  1. (School of Philosophy, Fudan University, Shanghai 200433, China)
  • Published: 2022-02-23
  • Contact: MENG Lingyu
  • About author: MENG Lingyu (1992-), male, a native of Beijing, is a Ph.D. candidate at Fudan University whose research focuses on the ethics of science and technology.

Abstract: Algorithmic justice is now regarded as a core ethical issue in the field of artificial intelligence and is usually framed as "algorithm bias" or "algorithm discrimination", but bias and discrimination are in fact two distinct problems. Distinguishing them shows that algorithm bias has no ethical dimension, and that algorithm discrimination is the real core ethical problem in artificial intelligence. Algorithm bias is inevitable; it arises mainly from the bias of developers, the bias of data, and the bias of the algorithm itself. Among these, explicit bias is easily detected and eliminated, but implicit bias inevitably remains within the algorithm. The subjects responsible for algorithmic discrimination are mainly human beings: it is people's blind deference to algorithmic bias that produces algorithmic discrimination. Even discrimination caused by autonomous decision-making algorithms can be traced back to people. Therefore, the developers and users of algorithms should be held responsible for algorithmic discrimination.

Key words: algorithm discrimination; algorithm bias; deep learning; artificial intelligence
