
Article

Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability

new media & society, 2018, Vol. 20(3), 973–989. © The Author(s) 2016. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav. DOI: 10.1177/1461444816676645. journals.sagepub.com/home/nms



Mike Ananny

University of Southern California, USA

Kate Crawford

Microsoft Research New York City, USA; New York University, USA; MIT, USA

Abstract

Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able to know how it works and govern it—a pattern that recurs in recent work about transparency and computational systems. But can “black boxes” ever be opened, and if so, would that ever be sufficient? In this article, we critically interrogate the ideal of transparency, trace some of its roots in scientific and sociotechnical epistemological cultures, and present 10 limitations to its application. We specifically focus on the inadequacy of transparency for understanding and governing algorithmic systems and sketch an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.

Keywords

Accountability, algorithms, critical infrastructure studies, platform governance, transparency

The observer must be included within the focus of observation, and what can be studied is always a relationship or an infinite regress of relationships. Never a “thing.”

Bateson (2000: 246)

Corresponding author:

Mike Ananny, University of Southern California, Los Angeles, CA 90089, USA.

Email: ananny@usc.edu

Introduction

Algorithmic decision-making is being embedded in more public systems—from transport to healthcare to policing—and with that have come greater demands for algorithmic transparency (Diakopoulos, 2016; Pasquale, 2015). But what kind of transparency is being demanded? Given the recent attention to transparency as a type of “accountability” in algorithmic systems, it is an important moment to consider what calls for transparency invoke: how has transparency as an ideal worked historically and technically within broader debates about information and accountability? How can approaches from Science and Technology Studies (STS) contribute to the transparency debate and help to avoid the historical pitfalls? And are we demanding too little when we ask to “look inside the black box”? In some contexts, transparency arguments come at the cost of a deeper engagement with the material and ideological realities of contemporary computation. Rather than focusing narrowly on technical issues associated with algorithmic transparency, we begin by reviewing the long history of the transparency ideal, where it has been found wanting, and how we might address those limitations.

Transparency, as an ideal, can be traced through many histories of practice. From philosophers concerned with the epistemological production of truth, through activists striving for government accountability, transparency has offered a way to see inside the truth of a system. The implicit assumption behind calls for transparency is that seeing a phenomenon creates opportunities and obligations to make it accountable and thus to change it. We suggest here that rather than privileging a type of accountability that needs to look inside systems, we instead hold systems accountable by looking across them—seeing them as sociotechnical systems that do not contain complexity but enact complexity by connecting to and intertwining with assemblages of humans and non-humans.

To understand how transparency works as an ideal, and where it fails, we trace its history and present 10 significant limitations. We then discuss why transparency is an inadequate way to understand—much less govern—algorithms. We finish by suggesting an alternative typology of algorithmic governance: one grounded in recognizing and ameliorating the limits of transparency, with an eye toward developing an ethics of algorithmic accountability.

Transparency as an ideal

Transparency concerns are commonly driven by a certain chain of logic: observation produces insights which create the knowledge required to govern and hold systems accountable. This logic rests on an epistemological assumption that “truth is correspondence to, or with, a fact” (David, 2015: n.p.). The more facts revealed, the more truth that can be known through a logic of accumulation. Observation is understood as a diagnostic for ethical action, as observers with more access to the facts describing a system will be better able to judge whether a system is working as intended and what changes are required. The more that is known about a system’s inner workings, the more defensibly it can be governed and held accountable.

This chain of logic entails “a rejection of established representations” in order to realize “a dream of moving outside representation understood as bias and distortion” toward “representations [that] are more intrinsically true than others.” It lets observers “uncover the true essence” of a system (Christensen and Cheney, 2015: 77). The hope to “uncover” a singular truth was a hallmark of the Enlightenment, part of what Daston (1992: 607) calls the attempt to escape the idiosyncrasies of perspective: a “transcendence of individual viewpoints in deliberation and action [that] seemed a precondition for a just and harmonious society.”

Several historians point to early Enlightenment practices around scientific evidence and social engineering as sites where transparency first emerged in a modern form (Crary, 1990; Daston, 1992; Daston and Galison, 2007; Hood, 2006). Hood (2006) sees transparency as an empirical ideal first appearing in the epistemological foundations of “many eighteenth-century ideas about social science, that the social world should be made knowable by methods analogous to those used in the natural sciences” (p. 8). These early methods of transparency took different forms. In Sweden, press freedom first appeared with the country’s 1766 Tryckfrihetsförordningen (“Freedom of the Press Act”), which gave publishers “rights of statutory access to government records.” In France, Nicolas de La Mare’s Traité de la Police volumes mapped Parisian street crime patterns to demonstrate a new kind of “police science”—a way to engineer social transparency so that “street lighting, open spaces with maximum exposure to public view, surveillance, records, and publication of information” would become “key tools of crime prevention” (Hood, 2006: 8). In the 1790s, Bentham saw “inspective architectures” as manifestations of the era’s new science of politics that would marry epistemology and ethics to show the “indisputable truth” that “the more strictly we are watched, the better we behave” (cited in Hood, 2006: 9-10). Such architectures of viewing and control were unevenly applied as transparency became a rationale for racial discrimination. A significant early example can be found in New York City’s 18th-century “Lantern Laws,” which required “black, mixed-race, and indigenous slaves to carry small lamps, if in the streets after dark and unescorted by a white person.” Technologies of seeing and surveillance were inseparable from material architectures of domination, as everything “from a candle flame to the white gaze” was used to identify who was in their rightful place and who required censure (Browne, 2015: 25).

Transparency is thus not simply “a precise end state in which everything is clear and apparent,” but a system of observing and knowing that promises a form of control. It includes an affective dimension, tied up with a fear of secrets, the feeling that seeing something may lead to control over it, and liberal democracy’s promise that openness ultimately creates security (Phillips, 2011). This autonomy-through-openness assumes that “information is easily discernible and legible; that audiences are competent, involved, and able to comprehend” the information made visible (Christensen and Cheney, 2015: 74)—and that they act to create the potential futures openness suggests are possible. In this way, transparency becomes performative: it does work, casts systems as knowable, and, by articulating inner workings, produces understanding (Mackenzie, 2008).

Indeed, the intertwined promises of openness, accountability, and autonomy drove the creation of 20th-century “domains of institutionalized disclosure” (Schudson, 2015: 7). These institutionalizations and cultural interventions appeared in the US Administrative Procedures Act of 1946, the 1966 Freedom of Information Act, the Sunshine Act, truth in packaging and lending legislation, nutritional labels, environmental impact reports and chemical disclosure rules, published hospital mortality rates, fiscal reporting regulations, and the Belmont Report on research ethics. And professions like medicine, law, advertising, and journalism all made self-regulatory moves to shore up their ethical codes, creating policies on transparency and accountability before the government did it for them (Birchall, 2011; Fung et al., 2008; Schudson, 2015).

The institutionalization of openness led to an explosion of academic research, policy interventions, and cultural commentary on what kind of transparency this produces—largely seeing openness as a verification tool to “regulate behavior and improve organizational and societal affairs” or as a performance of communication and interpretation that is far less certain about whether “more information generates better conduct” (Albu and Flyverbom, 2016: 14).

Policy and management scholars have identified three broad metaphors underlying organizational transparency: as a “public value embraced by society to counter corruption,” as a synonym for “open decision-making by governments and nonprofits,” and as a “complex tool of good governance”—all designed to create systems for “accountability, efficiency, and effectiveness” (Ball, 2009: 293). Typologies of transparency have emerged:

• “Fuzzy” transparency (offering “information that does not reveal how institutions actually behave in practice ... that is divulged only nominally, or which is revealed but turns out to be unreliable”) versus “clear” transparency (“programmes that reveal reliable information about institutional performance, specifying officials’ responsibilities as well as where public funds go”) (Fox, 2007: 667);

• Transparency that creates “soft” accountability (in which organizations must answer for their actions) versus “hard” accountability (in which transparency brings the power to sanction and demand compensation for harms) (Fox, 2007);

• Transparency “upwards” (“the hierarchical superior/principal can observe the conduct, behavior, and/or ‘results’ of the hierarchical subordinate/agent”) versus transparency “downwards” (in which the ruled can observe the conduct and results of their rulers) (Heald, 2006).
