A Qualitative Exploration of Perceptions of Algorithmic Fairness

CHI 2018, April 21-26, 2018, Montreal, QC, Canada

Allison Woodruff, Sarah E. Fox, Steven Rousso-Schindler, and Jeff Warshaw

ABSTRACT

Algorithmic systems increasingly shape information people are exposed to as well as influence decisions about employment, finances, and other opportunities. In some cases, algorithmic systems may be more or less favorable to certain groups or individuals, sparking substantial discussion of algorithmic fairness in public policy circles, academia, and the press. We broaden this discussion by exploring how members of potentially affected communities feel about algorithmic fairness. We conducted workshops and interviews with 44 participants from several populations traditionally marginalized by categories of race or class in the United States. While the concept of algorithmic fairness was largely unfamiliar, learning about algorithmic (un)fairness elicited negative feelings that connect to current national discussions about racial injustice and economic inequality. In addition to their concerns about potential harms to themselves and society, participants also indicated that algorithmic fairness (or lack thereof) could substantially affect their trust in a company or product.

Author Keywords

Algorithmic fairness; algorithmic discrimination

ACM Classification Keywords

K.4.m. Computers and Society: Miscellaneous.

INTRODUCTION

Scholars and thought leaders have observed the increasing role and influence of algorithms in society, pointing out that they mediate our perception and knowledge of the world as well as affect our chances and opportunities in life [6,8,17,38,54,55,63,76,79]. Further, academics and regulators have long refuted the presumption that algorithms are wholly objective, observing that algorithms can reflect or amplify human or structural bias, or introduce complex biases of their own [4,10,18,33-35,38,46,64].
To raise awareness and illustrate the potential for wide-ranging consequences, researchers and the press have pointed out a number of specific instances of algorithmic unfairness [19,58], for example, in predictive policing [19,43], the online housing marketplace [27,28], online ads [13,17,20,29,82], and image search results [49,64].

Such cases demonstrate that algorithmic (un)fairness is a complex, industry-wide issue. Bias can result from many causes, for example, data sets that reflect structural bias in society, human prejudice, product decisions that disadvantage certain populations, or unintended consequences of complicated interactions among multiple technical systems. Accordingly, many players in the ecosystem, including but not limited to policy makers, companies, advocates, and researchers, have a shared responsibility and opportunity to pursue fairness. Algorithmic fairness, therefore, appears to be a “wicked problem” [72], with diverse stakeholders but, as yet, no clear agreement on problem statement or solution. The human computer interaction (HCI) community and related disciplines are of course highly interested in influencing positive action on such issues [25], having for example an established tradition of conducting research to inform public policy for societal-scale challenges [50,84] as well as providing companies information about how they can best serve their users. Indeed, recent work by Plane et al. on discrimination in online advertising is positioned as informing public policy as well as company initiatives [67].

Building on this tradition, our goal in this research was to explore ethical and pragmatic aspects of public perception of algorithmic fairness. To this end, we conducted a qualitative study with several populations that have traditionally been marginalized and are likely to be affected by algorithmic (un)fairness, specifically, Black or African American, Hispanic or Latinx, and low socioeconomic status participants in the United States. Our research questions centered around participants’ interpretations and experiences of algorithmic (un)fairness, as well as their ascription of accountability and their ethical and pragmatic expectations of stakeholders. In order to draw more robust conclusions about how participants interpret these highly contextual issues, we explored a broad spectrum of different types of algorithmic unfairness, using scenarios to make the discussion concrete.

Our findings indicate that while the concept of algorithmic (un)fairness was initially mostly unfamiliar and participants often perceived algorithmic systems as having limited impact, they were still deeply concerned about algorithmic unfairness, they often expected companies to address it regardless of its source, and a company’s response to algorithmic unfairness could substantially impact user trust. These findings can inform a variety of stakeholders, from policy makers to corporations, and they bolster the widely espoused notion that algorithmic fairness is a societally important goal for stakeholders across the ecosystem—from regulator to industry practitioner—to pursue. With full recognition of the importance of ethical motivations, these findings also suggest that algorithmic fairness can be good business practice. Some readers may be in search of arguments to motivate or persuade companies to take steps to improve algorithmic fairness. There are many good reasons for companies to care about fairness, including but not limited to ethical and moral imperatives, legal requirements, regulatory risk, and public relations and brand risk. In this paper, we provide additional motivation by illustrating that user trust is an important but understudied pragmatic incentive for companies across the technology sector to pursue algorithmic fairness. Based on our findings, we outline three best practices for pursuing algorithmic fairness.

BACKGROUND

Algorithmic Fairness

In taking up algorithmic fairness, we draw on and seek to extend emerging strands of thought within the fields of science and technology studies (STS), HCI, mathematics, and related disciplines. Research on algorithmic fairness encompasses a wide range of issues, for example, in some cases considering discrete decisions and their impact on individuals (e.g. fair division algorithms explored in [51,52]), and in other cases exploring broader patterns related to groups that have traditionally been marginalized in society. Our focus tends towards the latter, and of particular relevance to our investigation is the perspective taken in critical algorithm studies, which articulates the increasing influence of algorithms in society and largely focuses on understanding algorithms as an object of social concern [6,17,38,54,55,63,76,79]. Countering popular claims that algorithmic authority or data-driven decisions may lead to increased objectivity, many scholars have observed that algorithms can reflect, amplify or introduce bias [4,10,18,33-35,38,46,64].

Articles in academic venues as well as the popular press have chronicled specific instances of unjust or prejudicial treatment of people, based on categories like race, sexual orientation, or gender, through algorithmic systems or algorithmically aided decision-making. For example, Perez reported that Microsoft’s Tay (an artificial intelligence chatbot) suffered a coordinated attack that led it to exhibit racist behavior [65]. Researchers have also reported that image search or predictive search results may reinforce or exaggerate societal bias or negative stereotypes related to race, gender, or sexual orientation [4,49,62,64]. Others raised concerns about potential use of Facebook activity to compute non-regulated credit scores, especially as this may disproportionately disadvantage less privileged populations [17,82]. Edelman et al. ran experiments on Airbnb and reported that applications from guests with distinctively African American names were 16% less likely to be accepted relative to identical guests with distinctively White names [28]. Edelman and Luca also found non-Black hosts were able to charge approximately 12% more than Black hosts, holding location, rental characteristics, and quality constant [27]. Colley et al. found Pokemon GO advantaged urban, white, non-Hispanic populations, for example, potentially attracting more tourist commerce to their neighborhoods [15], and Johnson et al. found that geolocation inference algorithms exhibited substantially worse performance for underrepresented populations, i.e., rural users [47].
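To make concrete the kind of analysis behind findings like Edelman and Luca's price gap, the sketch below estimates a conditional group difference in listing prices while holding observable listing characteristics constant. It is an illustrative reconstruction only, not the original study's code or data: the synthetic data set, the column names (host_black, bedrooms, review_score, neighborhood), and the simulated effect size are assumptions introduced here for the example.

    # Sketch of a regression-style audit in the spirit of the Airbnb price
    # analysis described above: estimate a race gap in listing prices while
    # controlling for listing characteristics. All data, column names, and
    # effect sizes are synthetic/hypothetical; this is not the authors' code.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "host_black": rng.integers(0, 2, n),      # 1 = Black host (synthetic label)
        "bedrooms": rng.integers(1, 4, n),
        "review_score": rng.normal(4.5, 0.3, n),
        "neighborhood": rng.choice(["A", "B", "C"], n),
    })
    # Simulate roughly a 12% lower price for Black hosts plus quality effects.
    df["log_price"] = (
        4.0 + 0.3 * df["bedrooms"] + 0.2 * df["review_score"]
        - 0.12 * df["host_black"] + rng.normal(0, 0.2, n)
    )

    # Regress log price on the host-race indicator with controls for location
    # and listing characteristics; the coefficient approximates the gap
    # conditional on those controls.
    model = smf.ols(
        "log_price ~ host_black + bedrooms + review_score + C(neighborhood)",
        data=df,
    ).fit()
    gap = model.params["host_black"]
    print(f"estimated conditional price gap: {100 * (np.exp(gap) - 1):.1f}%")

The coefficient on the group indicator approximates the percentage price difference holding the included controls constant; unobserved listing characteristics can, of course, still confound such estimates.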

This public awareness has been accompanied by increased legal and regulatory attention. For example, the upcoming European Union General Data Protection Regulation contains an article on ‘automated individual decision-making’ [39]. Yet, algorithmic fairness poses many legal complexities and challenges [5], and law and regulation are still in nascent stages in this rapidly changing field (e.g. [9]). To investigate systems’ adherence to emerging legal, regulatory, and ethical standards of algorithmic fairness, both testing and transparency have been called for [1,14,77]. A wide range of techniques have been proposed to scrutinize algorithms, such as model interpretability, audits, expert analysis, and reverse engineering [22,42,76,77]. Investigation is complicated, however, by the myriad potential causes of unfairness (prejudice, structural bias, choice of training data, complex interactions of human behavior with machine learning models, unforeseen supply and demand effects of online bidding processes, etc.) and the sometimes impenetrable and opaque nature of machine learning systems [12,38]. In fact, existing offline discrimination problems may in some cases be exacerbated and harder to investigate once they manifest in online systems [77], and new bigotries based not just on immutable characteristics but on more subtle features may arise, which are more difficult to detect than traditional discriminatory processes [9].
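As one concrete, if narrow, illustration of what an automated audit of an opaque system could look like, the sketch below compares a black-box scoring function's positive-decision rates across groups and reports a disparate-impact style ratio. The function and variable names, the group labels, and the 0.5 decision threshold are hypothetical assumptions for this example; none of this comes from the paper or from any particular auditing toolkit.

    # Minimal sketch of one narrow kind of algorithmic audit: comparing a
    # black-box model's positive-decision rates across groups (a demographic
    # parity / disparate-impact style check). Group labels, the scoring
    # function, and the threshold are illustrative, not from the paper.
    from collections import defaultdict

    def audit_decision_rates(records, score_fn, group_key, threshold=0.5):
        """Return per-group positive-decision rates for an opaque scoring function."""
        totals, positives = defaultdict(int), defaultdict(int)
        for record in records:
            group = record[group_key]
            totals[group] += 1
            if score_fn(record) >= threshold:
                positives[group] += 1
        return {g: positives[g] / totals[g] for g in totals}

    # Example usage with a toy scoring function standing in for the audited system.
    applicants = [
        {"group": "A", "income": 60}, {"group": "A", "income": 55},
        {"group": "B", "income": 55}, {"group": "B", "income": 20},
    ]
    rates = audit_decision_rates(applicants, lambda r: r["income"] / 100, "group")
    ratio = min(rates.values()) / max(rates.values())
    print(rates, "disparate impact ratio:", round(ratio, 2))

Output-only checks of this kind capture only one notion of fairness and say nothing about the underlying causes discussed above; they are a starting point for, not a substitute for, deeper expert analysis.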

Not only do opacity and complexity complicate expert analysis, but they may also make it difficult for stakeholders to understand the consequences of algorithmic systems. Many of the proposed mechanisms for scrutinizing algorithms make certain assumptions about the public, regulators, and other stakeholders. However, research has found that perception of algorithmic systems can vary substantially by individual factors as well as platform [21], and that end users often have fundamental questions or misconceptions about technical details of their operation [11,31,69,85,86], an effect that may be exacerbated for less privileged populations [86]. For example, studies have found that some participants are not aware of algorithmic curation in the Facebook News Feed [31,69] or the gathering of online behavioral data and its use for inferencing [86], or underestimate the prevalence and scale
