Governing Algorithms: Myth, Mess, and Methods

Special Issue Introduction

Science, Technology, & Human Values 2016, Vol. 41(1) 3-16. © The Author(s) 2015. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/0162243915608948. sthv.sagepub.com




Malte Ziewitz1

Abstract

Algorithms have developed into somewhat of a modern myth. On the one hand, they have been depicted as powerful entities that rule, sort, govern, shape, or otherwise control our lives. On the other hand, their alleged obscurity and inscrutability make it difficult to understand what exactly is at stake. What sustains their image as powerful yet inscrutable entities? And how to think about the politics and governance of something that is so difficult to grasp? This editorial essay provides a critical backdrop for the special issue, treating algorithms not only as computational artifacts but also as sensitizing devices that can help us rethink some entrenched assumptions about agency, transparency, and normativity.

Keywords

algorithms, governance, myth, methods, politics

1Department of Science & Technology Studies, Cornell University, Ithaca, NY, USA

Corresponding Author:

Malte Ziewitz, Department of Science & Technology Studies, Cornell University, 330 Rockefeller Hall, Ithaca, NY 14853, USA.

Email: mcz35@cornell.edu

After another long day of talks, discussions, and coffee breaks, the international conference on “Governing Algorithms” at New York University is coming to an end.1 For two days, scholars from a variety of disciplines and backgrounds have pondered the role of algorithms in public life. Sociologists, computer scientists, political scientists, legal scholars, anthropologists, designers, philosophers, and many more have shared ideas, concerns, and empirical materials and puzzled over new forms of computation, automation, and control. As the last participants line up behind the microphone, a young man steps up and asks the question: “So we have been talking about algorithms for two days now, and it has all been extremely interesting. Still, I’m wondering: what actually is an algorithm?”

The question is a good one. If recent writing is to be believed, algorithms “have the capacity to shape social and cultural formations and impact directly on individual lives” (Beer 2009, 994), figure as “pathways through which capitalist power works” (Lash 2007, 71), denote “rules of rationality [that] replaced the self-critical judgments of reasons” (Daston 2012), serve as “an interpretative key of modernity” (Totaro and Ninno 2014, 30), or “acquire the sensibility of truth” (Slavin 2011). In the “age of the algorithm” (Kelty 2003), users, activists, and policy makers are increasingly concerned “that individual autonomy is lost in an impenetrable set of algorithms” (Executive Office of the President 2014, 10). “Algorithmic regulation” (O’Reilly 2013), as it has been called, has triggered a variety of concerns about accountability, fairness, bias, autonomy, and due process—exacerbated by the widely bemoaned opacity and inscrutability of computational systems. Against this backdrop, claims about governing algorithms deserve some scrutiny and skepticism: how to account for the recent rise of algorithms as both a topic and a resource for understanding a wide range of activities? What challenges do algorithms pose for scholars in science and technology studies (STS) and in the sociology, history, and anthropology of science and technology? And, yes, what actually is an algorithm?

In this editorial essay, I shall open up these questions and provide a critical backdrop against which the contributions to this special issue can unfold. After a brief review of the “algorithmic drama” that characterizes much recent engagement with the politics of algorithms, I highlight an increasing body of work that has begun to investigate and trouble the coherence of the algorithm as an analytic category. Algorithms, it is suggested, do not just work as straightforward tools or mythologies to be debunked but as a sensitizing concept that can help us rethink entrenched assumptions about the politics of computation, automation, and control.

An Algorithmic Drama

Much recent talk about the role of algorithms in public life tends to follow the script of a seductive drama. Consider the following paragraph from a recent study on algorithms and accountability:

We are now living in a world where algorithms, and the data that feed them, adjudicate a large array of decisions in our lives: not just search engines and personalized online news systems, but educational evaluations, the operation of markets and political campaigns, the design of urban public spaces, and even how social services like welfare and public safety are managed. But algorithms can arguably make mistakes and operate with biases. The opacity of technically complex algorithms operating at scale makes them difficult to scrutinize, leading to a lack of clarity for the public in terms of how they exercise their power and influence. (Diakopoulos 2015, 398)

The paragraph, which could well have opened this special issue, provides a useful illustration of the structure of concerns that have been shaping recent thinking about the politics of algorithms. Specifically, the algorithmic drama comes in two distinctive acts.

The first act introduces algorithms as powerful and consequential actors in a wide variety of domains. The example above refers to search engines, online news, education, markets, political campaigns, urban planning, welfare, and public safety. Others add stock prices, espionage, movie ratings, medical diagnosis, music recommendations, and gambling (Steiner 2012, 7). Such heterogeneous lists have become a common feature of accounts of algorithms, suggesting an almost universal “relevance of algorithms” (Gillespie 2013). These algorithms are further imbued with agency and impact. Occupying the position of subject at the beginning of sentences, algorithms may be said to “adjudicate,” “make mistakes,” “operate with biases,” or “exercise their power and influence.”2 This focus on algorithms as agential is in line with earlier work on the politics of computer code, neatly expressed in the popular slogan “code is law” (Lessig 1999; see also Reidenberg 1997). Yet, while these studies were concerned with more or less static constraints on individual freedom, algorithms add a layer of more subtle and seemingly autonomous activity, reminiscent of earlier work on artificial intelligence (cf. Collins 1992). It is therefore not surprising to see a wide range of concerns specifically about algorithms, including bias (Introna and Nissenbaum 2000; Bozdag 2013), discrimination (Gillespie 2013; Kraemer, Overveld, and Peterson 2010), fairness (Dwork et al. 2011), the distribution of visibility (Bucher 2012), surveillance (Introna and Wood 2002), and accountability (Felten 2012; Schuppli 2014).

The second act then picks up on the difficulties involved in explicating “how they exercise their power and influence” (Diakopoulos 2015, 398). An algorithm, it turns out, is rather difficult to understand—“a complex and mathematically grounded sociomaterial black box that seems to do far more than simply aggregate preferences” (C. W. Anderson 2011, 540). In a “black box society” (Pasquale 2015), knowing algorithms and their implications becomes an important methodological and political concern. Specifically, some have called for “a concerted multidisciplinary effort to try and open up the ‘black boxes’ that trap software-sorting” (Graham 2005, 575). The range of remedies proposed to achieve such understanding and transparency includes disclosure (Introna and Nissenbaum 2000), reverse engineering (Diakopoulos 2013), value-centered design (Knobel and Bowker 2011), educational initiatives (Davidson 2011), or audit (Sandvig et al. 2014). Interestingly, there is a certain recursiveness to this drama: opacity of operation tends to be read as another sign of influence and power.

At this point, we can only speculate about what makes this drama so compelling. As Adrian Mackenzie noted, algorithms have “a cognitive-affective stickiness that makes them both readily imitable and yet forbiddingly complex” (2006, 43). There also is a striking similarity between algorithms and the language of politics, which tends to privilege the figure of the lone decision maker at the expense of more complex realities (cf. Star 1991). In fact, it could be argued that what makes this drama both intuitive and puzzling is exactly the induction of a new entity into a more or less ready-made system of conventional politics with its established cast of actors and long-standing concerns about agency, transparency, and normativity. Making the figure of the algorithm tractable through the template of this system is convenient for articulating a range of problems and concerns. However, rather than answering the initial question of what an algorithm actually is, it creates the very urge to ask it in the first place. At this point at least, the answer would be a rather disappointing “we don’t know, but it surely is very powerful.”

From Myth to Mess

The recent surge of interest in the role of algorithms in public life has led some commentators to suggest that we do not live in an “algorithmic culture so much as a computational theocracy” (Bogost 2015). In fact, the drama of the powerful yet inscrutable algorithm bears some resemblance to long-standing mythologies, such as Adam Smith’s “invisible hand” or Charles Darwin’s “natural selection” (cf. Kennedy 2009). Algorithms, it seems, fit in seamlessly with this line of stubbornly seductive stories about the origins of order. Pioneered by work in software studies (Fuller 2008; Galloway 2006; Kitchin and Dodge 2014; A. Mackenzie 2006; Manovich 2002), it is therefore not surprising to see an increasing number of sociologists, historians, and anthropologists of science and technology take on the figure of the algorithm and process it according to their own devices. To illustrate this, let us have a brief look at how a small selection of work addresses three analytic themes that figure prominently in the algorithmic drama: agency, inscrutability, and normativity.

A first theme concerns the varieties of agency attributed to algorithms. While some suggest that we dive deep into the specificities of software and use the language of new media for understanding social life (Manovich 2002), others warn against treating algorithms as “the formal static essence of software” (A. Mackenzie 2006, 43). Wendy Chun, in particular, has criticized an exclusive focus on computer code as “sourcery,” suggesting that “we need to interrogate, rather than venerate or even accept, the grounding or logic of software” (2008, 300). Perhaps unsurprisingly, such invitations have fallen on fruitful grounds in STS, given its long history in challenging conventional notions of agency (cf. Passoth, Peuker, and Schillmeier 2014). Historical analyses of algorithms, for example, have shown how the minimax algorithm underlying much work in computer chess and artificial intelligence has been “embedded in larger sociotechnical systems” (Ensmenger 2012, 25), and how the credit scoring algorithms of Fair, Isaac can best be understood in terms of “material history” (Poon 2007, 300). Sociological studies of high frequency trading have suggested that algorithms can usefully be conceptualized as participants in an “interaction order” (D. A. MacKenzie 2015, 3) and that the operation of bots on Wikipedia cannot be divorced from “the infrastructural conditions that make this kind of regulation possible” (Geiger 2014, 348). Similarly, attention has been directed to the data that feed algorithms (Gitelman 2013), the organization of databases (Stalder and Mayer 2009), and the “codework” of software engineers (Webmoor 2014, 21).

A second theme concerns the widespread assumption of inscrutability. One way this has been reflected in the literature is a concern with “knowing algorithms” (Seaver 2013). In a recent overview, Rob Kitchin lists six ways of researching algorithms: examining pseudocode/source code, reflexively producing code, reverse engineering, interviewing designers and conducting an ethnography of a coding team, unpacking the full sociotechnical assemblages of algorithms, and examining how algorithms do work in the world (2014, 17-22). However, as research into complex systems like search engines has shown, the question of “knowability” might itself be an interesting part of the puzzle (Ziewitz 2012, 258). Further, as Fabian Muniesa’s study of the Arizona Stock Exchange suggests, algorithms—somewhat counterintuitively—might even be conducive to transparency by requiring the articulation of social theories and models in so-called “trials of explicitness” (2011).

A third theme concerns normativity, or the ways in which algorithms can be considered to participate in the political, ethical, or accountable. A key challenge here has been how to articulate the normative when straightforward attributions of consequences to “the algorithm” do not stand the test of empirical inquiry. Some scholars have taken issue with this explicitly. In a critique of Kraemer, Overveld, and Peterson’s claim that algorithms comprise “essential value judgments” (2010, 251), R. J. Anderson and Sharrock note that the ethical features of algorithms are usually beyond debate and thus responsible for a number of analytic “muddles” (2013, 4). Rather, the “objectivity of the judgements of truthfulness or factuality ... is guaranteed by the observable, reviewable, accountable use of the practices” (2013, 8). Others have explored the more subtle or “sub-political” (Beck 1997) ways in which algorithms interact with cultural values. A good example is Stefan Helmreich’s study of how dominant meanings of nature are being stabilized and refigured in scientists’ work on genetic algorithms (1998).

As already shown by this short and selective review, the clear-cut picture of the algorithmic drama can easily be messed up. At the hands of ethnographers, historians, and sociologists, the formerly stable figure of the algorithm disappears into a range of other concepts and relations, including “sociotechnical systems,” “technology,” “material history,” “order,” or “culture.” This does not exactly make the initial question any easier. What actually is an algorithm? Many diverse things, and so there may not be a “they” at all.

Five Takes on Governing Algorithms

It would be easy to read this story as a succession of debunking operations. Careful empirical analyses of algorithms in action expose the narrative of powerful actors as a modern myth. In fact, the algorithmic drama can itself be seen as an ironicizing move that pitches software code as the “real” driver of technologies against earlier studies that focused on human-computer interactions without problematizing “inner workings.” This poses a dilemma. While chiming in on the algorithmic drama would mean taking on a number of assumptions that are hard to maintain in light of careful empirical investigation, dismissing the figure altogether risks losing sight of an important feature of contemporary debate. So what then is the point of attending to algorithms at all?

Each of the five contributions to this special issue struggles with this question in its own ways, extending the discussion into new field sites, theories, and methodological sensibilities. Lucas Introna starts from an understanding of algorithms as performative (2016). Asking how the antiplagiarism software Turnitin governs academic writing practice, Introna draws together Neo-Foucauldian governmentality and feminist science studies to account for the “mutually constitutive nature of problems, domains of knowledge, and subjectivities enacted through governing practices” (2016). Such an “ontology of becoming,” Introna suggests, allows us to appreciate how algorithms—as part of a heterogeneous assemblage—condition the ongoing production of practices and subjects. Daniel Neyland also confronts the algorithmic drama head-on, albeit in a slightly different way. Setting out to “develop an approach to algorithmic accountability that can also take on deflationary sensibilities,” Neyland draws on ideas from ethnomethodology to provide a detailed account of his involvement in the design of an “ethical algorithm” for an airport surveillance system (2016). In doing so, he shows how witnessing the algorithmic system as an emerging order causes different registers of accountability to be mobilized, connecting the everyday accountable with institutional expectations of accountability.

Similarly, Kate Crawford is careful not to fetishize algorithms (2016). Yet, in contrast to Introna and Neyland, her interest is not primarily in governmentality or accountability but in the emerging logics of algorithmically produced publics. Drawing on political theory, Crawford mobilizes Chantal Mouffe’s notion of agonistic pluralism to expand the view from singularly acting algorithms to the spaces of contestation, in which algorithms are designed, developed, used, and resisted. In his own attempt to develop an “ethics of algorithms,” Mike Ananny (2016) suggests the notion of “networked information algorithms” to describe an “assemblage of institutionally situated computational code, human practices, and normative logics.” Using examples from journalism, commerce, security, and social media, Ananny shows how such assemblages allow us to understand algorithmic ethics as relationally achieved by convening new publics, judging similarity, and organizing time. Yet another approach is offered by Tal Zarsky (2016), who takes a closer look at the complex relationship between algorithms and decision making in the context of credit rankings. Skeptical of widespread claims about credit ranking algorithms as flawed and discriminatory, Zarsky highlights the tensions in these arguments and suggests an alternative set of values around efficiency and fairness. Specifically, he focuses on personal autonomy and its more specific attention to information flows, due process, and the inability of individuals to overcome powerful manipulations.

Overall, the contributions employ a range of strategies to deal with the ambiguity of algorithms as analytic objects. In doing so, they not only explore new empirical settings, including airport security, credit scoring, academic writing, and social media but also find creative ways to make the figure of the algorithm productive for analysis. Whether they attend to the ongoing production of objects and subjects of governance, the accountable witnessing of ethicality, the spaces of contestation opening up in algorithmic systems, or the very values we base our judgments on—the contributions show that mythologies like the algorithmic drama do not have to be reductive but can be rich and complex “stories that help people deal with contradictions in social life that can never fully be resolved” (Mosco 2005, 28; see also Levi-Strauss 1955). This suggests that one key to understanding the politics of algorithms might be not so much to look for essences with consequences but to attend to how the figure of the algorithm is employed and comes to matter in specific situations. Algorithms, then, can usefully be understood as devices that help enact the problems they account for—as “sensitizing concepts” (Blumer 1969, 148) that attune us to concerns and contradictions without explaining them away.

Conclusion: Recursion without Circularity

As Vincent Mosco wrote about mythologies, “their basic message is not that contradictions are resolvable, rather, that they are scalable” (2005, 28). The recent rise of algorithms in the social sciences has brought about an impressive range of work. Once converted to an object of critical inquiry, however, the figure tends to disappear into a range of other concepts and relations. In light of this multilayered contingency, the goal cannot be to come up with an ultimate recipe, a set of more or less detailed instructions, for studying algorithms—what Clifford Geertz might have called an “ethnographic algorithm” (1973, 11). Rather, the goal should be to keep our inquiries generative enough to invite us to revisit some of our own assumptions and beliefs of what an algorithm actually is. Understood as a sensitizing concept, the figure not only affords us an opportunity to explore—conceptually, empirically, and methodologically—a new and exciting field of technoscientific practice but also allows us to consider what kind of work this new device can do for producing both new answers and new questions. In this sense, this special issue does not merely provide a collection of inquiries into problems and settings that can broadly be categorized as “algorithmic,” but it also offers a demonstration of what it takes to use, deal, and struggle with the figure of the algorithm.

This special issue then takes up Bill Maurer’s call to “shake the black box rather than get overly captivated by its form or announce its arrival with a flourish” (2013, 73). Myth does not always have to be debunked but can be generative in the same way mess can be. The question is rather what we start from at the end of our inquiries. Will the ongoing talk about algorithms further stabilize the already strong “metaphorical armature” (Helmreich 1998, 50) of the algorithmic drama, or will increasing attention to the material, semiotic, and socially contingent character of the figure keep us moving toward new ideas, which may or may not be conceptualized in terms of algorithms? It’s hard to tell. In the meantime, the conclusion is most likely to begin again and ask in true algorithmic spirit: what actually is an algorithm?

Acknowledgments

Many thanks to Solon Barocas, Nick Seaver, Steve Woolgar, and the contributors to this special issue for helpful comments.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work has been partially supported by the Intel Science & Technology Center (ISTC) for Social Computing and the Information Law Institute at New York University.

Notes

1. The contributions to this special issue were first presented at the “Governing Algorithms” conference, which took place May 16-17, 2013, at New York University. For further information, see http://governingalgorithms.org.

2. It is interesting to observe how the term “algorithm” has taken the place of concepts like “technology,” “system,” or “digital media.” This is evident, among other things, in a flurry of newspaper headlines that invoke “the algorithm” as both a problem and a solution.

References

Ananny, Mike. 2016. “Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness.” Science, Technology, & Human Values 41 (1).

Anderson, Chris W. 2011. “Deliberative, Agonistic, and Algorithmic Audiences: Journalism’s Vision of Its Public in an Age of Audience Transparency.” International Journal of Communication 5:529-47.

Anderson, Robert J., and Wesley W. Sharrock. 2013. “Ethical Algorithms: A Brief Comment on an Extensive Muddle.” Accessed August 20, 2015. http://www.sharrockandanderson.co.uk/wp-content/uploads/2013/03/Ethical-Algorithms.pdf.

Beck, Ulrich. 1997. “Subpolitics: Ecology and the Disintegration of Institutional Power.” Organization & Environment 10 (1): 52-65.

Beer, David. 2009. “Power through the Algorithm? Participatory Web Cultures and the Technological Unconscious.” New Media & Society 11 (6): 985-1002.

Blumer, Herbert. 1969. Symbolic Interactionism: Perspective and Method. Englewood Cliffs, NJ: Prentice Hall.

Bogost, Ian. 2015. “The Cathedral of Computation.” The Atlantic, January 15. Accessed August 20, 2015. http://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300/.

Bozdag, Engin. 2013. “Bias in Algorithmic Filtering and Personalization.” Ethics and Information Technology 15 (3): 209-27.

Bucher, Tania. 2012. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society 14 (7): 1164-80.

Chun, Wendy Hui Kyong. 2008. “On ‘Sourcery,’ or Code as Fetish.” Configurations 16 (3): 299-324, 427.

Collins, Harry M. 1992. Artificial Experts: Social Knowledge and Intelligent Machines. Cambridge, MA: MIT Press.

Crawford, Kate. 2016. “Can an Algorithm Be Agonistic? Ten Scenes from Life in Calculated Publics.” Science, Technology, & Human Values 41 (1).

Daston, Lorraine. 2012. “How Reason Became Rationality.” Accessed August 20, 2015. http://www.mpiwg-berlin.mpg.de/en/research/projects/DeptII_Daston_Reason.

Davidson, Cathy. 2011. “What Are the 4 R’s Essential to 21st Century Learning?” HASTAC, October 31. Accessed August 20, 2015. https://www.hastac.org/blogs/cathy-davidson/2011/10/31/what-are-4-rs-essential-21st-century-learning.
