The Anchoring Effect, Algorithmic Fairness, and the Limits of Information Transparency for Emotion Artificial Intelligence

Abstract

Emotion artificial intelligence (AI), or emotion recognition AI, may systematically vary in its recognition of facial expressions and emotions across demographic groups, creating inconsistencies and disparities in its scoring. This paper explores the extent to which individuals can compensate for these disparities and inconsistencies in emotion AI, considering two opposing factors: although humans evolved to recognize emotions, particularly happiness, they are also subject to cognitive biases, such as the anchoring effect. To help understand these dynamics, this study tasks three commercially available emotion AIs and a group of human labelers with identifying emotions from faces in two image data sets. The scores generated by the emotion AIs and the human labelers are examined for inference inconsistencies (i.e., misalignment between facial expression and emotion label). The human labelers are also provided with the emotion AI’s scores and with measures of its scoring fairness (or lack thereof). We observe that even when human labelers operate in this context of information transparency, they may still rely on the emotion AI’s scores, perpetuating its inconsistencies. Several findings emerge from this study. First, the anchoring effect appears to be moderated by the type of inference inconsistency and is weaker for easier emotion recognition tasks. Second, when human labelers are provided with information transparency regarding the emotion AI’s fairness, the effect is not uniform across emotions. Third, there is no evidence that information transparency leads to the selective anchoring necessary to offset emotion AI disparities; in fact, some evidence suggests that information transparency increases human inference inconsistencies. Lastly, the different emotion AI models are highly inconsistent in their scores, raising doubts about emotion AI more generally. Collectively, these findings provide evidence of the potential limitations of addressing algorithmic bias through individual decisions, even when those individuals are supported with information transparency.
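
To make the abstract’s two central measurement ideas concrete, here is a minimal Python sketch of how one might flag an inference inconsistency (the model’s top-scoring emotion label disagrees with the expression an image is known to display) and compute a naive cross-group scoring disparity. The data shapes, function names, and the group-mean gap measure are illustrative assumptions, not the instruments actually used in the paper.

```python
# Hypothetical sketch (not the paper's actual procedure): flag an
# "inference inconsistency", i.e. a mismatch between the facial expression
# an image is known to display and the emotion an AI scores highest,
# and compute a simple cross-group scoring gap.

from statistics import mean

def top_emotion(scores: dict[str, float]) -> str:
    """Return the emotion label the model scored highest."""
    return max(scores, key=scores.get)

def is_inconsistent(expected_expression: str, scores: dict[str, float]) -> bool:
    """True when the model's top label disagrees with the known expression."""
    return top_emotion(scores) != expected_expression

def score_disparity(records: list[dict], emotion: str,
                    group_a: str, group_b: str) -> float:
    """Assumed disparity measure: difference in the model's mean score for
    `emotion` between two demographic groups, over images known to display
    that expression."""
    def group_mean(group: str) -> float:
        vals = [r["scores"][emotion] for r in records
                if r["group"] == group and r["expression"] == emotion]
        return mean(vals)
    return group_mean(group_a) - group_mean(group_b)

# Toy records in the shape such a study might use.
records = [
    {"group": "A", "expression": "happiness",
     "scores": {"happiness": 0.91, "neutral": 0.06, "anger": 0.03}},
    {"group": "B", "expression": "happiness",
     "scores": {"happiness": 0.62, "neutral": 0.30, "anger": 0.08}},
]

for r in records:
    print(r["group"], is_inconsistent(r["expression"], r["scores"]))
print("happiness score gap (A - B):",
      score_disparity(records, "happiness", "A", "B"))
```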

History

Alessandro Acquisti, Senior Editor; Monideepa Tarafdar, Associate Editor.

Supplemental Material

The online appendix is available at https://doi.org/10.1287/isre.2019.0493.
