Machine Learning Explainability for External Stakeholders


Umang Bhatt 1,2, McKane Andrus 1, Adrian Weller 2,3, Alice Xiang 1

arXiv:2007.05408v1 [cs.CY] 10 Jul 2020

Abstract

As machine learning is increasingly deployed in high-stakes contexts affecting people’s livelihoods, there have been growing calls to “open the black box” and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop with academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies of deploying explainable machine learning at scale. In this paper, we provide a short summary of these case studies, draw lessons from them, and discuss open challenges.

1. Overview

In its current form, explainable machine learning (ML) is not being used in service of transparency for external stakeholders. Much of the ML research claiming to explain how ML models work has yet to be deployed in systems to provide explanations to end users, regulators, or other external stakeholders (Bhatt et al., 2020b). Instead, current techniques for explainability (hereafter used interchangeably with explainable ML) are used by internal stakeholders (i.e., model developers) to debug models (Ribeiro et al., 2016; Lundberg & Lee, 2017). To ensure explainability reaches beyond internal stakeholders in practice, the ML community should account for how and when external stakeholders want explanations. As such, the authors of this paper worked with the Partnership on AI (a multi-stakeholder research organization with partners spanning major technology companies, civil society organizations, and academic institutions) to bring together academic researchers, policymakers, and industry experts at a day-long workshop to discuss challenges and potential solutions for deploying explainable ML at scale for external stakeholders.

1 Partnership on AI, San Francisco, United States; 2 Department of Engineering, University of Cambridge, Cambridge, United Kingdom; 3 The Alan Turing Institute, London, United Kingdom. Correspondence to: Umang Bhatt <usb20@cam.ac.uk>.

Copyright 2020 by the author(s).
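To make concrete what this internal, developer-facing use of explainability typically looks like, the sketch below computes a simple perturbation-based feature attribution for a single prediction, the kind of signal a model developer might scan while debugging. It is a minimal illustration under assumed conditions (a generic scikit-learn classifier and a hand-rolled occlusion score), not the method of any of the cited tools.

```python
# Illustrative only: a minimal perturbation-based attribution, in the spirit of
# the developer-facing debugging workflows cited above (e.g., LIME- or SHAP-style
# tools). The model, data, and attribution scheme here are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def occlusion_attributions(model, x, baseline):
    """Score each feature by how much replacing it with a baseline value
    changes the predicted probability of the positive class."""
    p_full = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_masked = x.copy()
        x_masked[j] = baseline[j]        # "remove" feature j
        p_masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        scores[j] = p_full - p_masked    # positive => feature pushed the score up
    return scores

baseline = X.mean(axis=0)                # a simple reference point
for j, s in enumerate(occlusion_attributions(model, X[0], baseline)):
    print(f"feature {j}: {s:+.3f}")
```

A developer might inspect such attributions to spot features that dominate a prediction unexpectedly; the focus of the workshop, by contrast, was how far such tooling currently is from serving external stakeholders.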

1.1. Demographics and Methods

Thirty-three participants from five countries attended this workshop, along with seven trained facilitators who moderated the discussion. Of the 33 participants, 15 had ML development roles, 3 were designers, 6 were legal experts, and 9 were policymakers; 15 came from for-profit corporations, 12 from non-profits, and 6 from academia. First, participants were clustered into groups of five or six, with a mix of expertise in each group, in which they discussed their respective disciplines’ notions of explainability and attempted to align on common definitions. Second, participants were separated into domain-specific groups, each combining domain experts and generalists, to discuss (i) use cases for, (ii) stakeholders of, (iii) challenges with, and (iv) solutions regarding explainable ML. The domains discussed were finance (e.g., employee monitoring for fraud prevention, mortgage lending), healthcare (e.g., diagnostics, mortality prediction), media (e.g., misinformation detection, targeted advertising), and social services (e.g., housing approval, government resource allocation).

1.2. Definitions

“Explainability” is ill-defined (Lipton, 2018); as such, in the first part of the workshop, the interdisciplinary groups were asked to come to a consensus definition of explainability. Below are some definitions provided by participants.

• Explainability gives stakeholders a summarized sense of how a model works to verify if the model satisfies its intended purpose.

• Explainability is for a particular stakeholder in a specific context with a chosen goal, and aims to bring the stakeholder’s mental model closer to the model’s behavior while fulfilling the stakeholder’s explanatory needs.

• Explainability lets humans interact with ML models to make better decisions than either could alone.

All definitions of explainability included notions of context (the scenario in which the model is deployed), stakeholders (those affected by the model and those with a vested interest in the model’s explanatory nature), interaction (the goal the model and its explanation serve), and summary (the notion that an explanation should compress the model into digestible chunks). Therefore, explainability loosely refers to tools that empower a stakeholder to understand and, when necessary, contest the reasoning of model outcomes.
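As a purely illustrative aside, one hypothetical way to make these four elements concrete is to record them alongside every request for an explanation. The data structure below is our own sketch, with assumed names and example values drawn from the domains above; it is not a construct proposed at the workshop.

```python
# Hypothetical sketch: recording the four elements participants identified
# (context, stakeholder, interaction/goal, summary) for an explanation request.
# Field names and example values are illustrative assumptions, not from the paper.
from dataclasses import dataclass

@dataclass
class ExplanationRequest:
    context: str        # scenario in which the model is deployed, e.g. "mortgage lending"
    stakeholder: str    # who is asking, e.g. "loan applicant" or "regulator"
    goal: str           # interaction the explanation serves, e.g. "contest a denial"
    summary_level: str  # how compressed the explanation should be, e.g. "top-3 factors"

request = ExplanationRequest(
    context="mortgage lending",
    stakeholder="loan applicant",
    goal="contest a denial",
    summary_level="top-3 factors",
)
```

Making the context, stakeholder, goal, and desired level of summary explicit in this way foregrounds the question raised next: what is being explained, to whom, and for what reason.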

One policymaker noted that “the technical community’s definition of explainable ML [is] unsettling,” since explainable ML focuses solely on exposing model innards to stakeholders without a clear objective and does not consider the broader context in which the model is deployed. For a given context, the ML community’s treatment of explainability fails to capture what is being explained, to whom, and for what reason. One academic suggested that “intelligibility could capture more than explainability”: encapsulating explainability, interpretability, and understandability, intelligibility covers all that people can know or infer about ML models (Zhou & Danks, 2020).

In the subsequent two sections, we discuss emergent themes of the domain-specific portion of the workshop. In Section 2, we discuss the need for broader community engagement in explainable ML development. In Section 3, we outline elements of deploying explainable ML at scale.

2. Designing Explainability

The first salient theme noted by participants was the lack of community engagement in the explainable ML development process. Community engagement entails understanding the context of explainable ML deployment, evaluating explainable ML techniques, involving affected groups in development, and educating various stakeholders about the use and misuse of explainability.

2.1. Context of Explanations

Given the context of the deployed model, an explanation helps stakeholders interpret model outcomes by providing additional information (e.g., understanding how the model behaves, validating the predictability of the model’s output, or confirming whether the model’s “reasoning aligns with the stakeholder’s mental model”) (Ruben, 2015).

Each stakeholder may require a different type of transparency into the model. Expanding the ML community’s understanding of the needs of specific stakeholder types will allow for model explanations to be personalized. The notion of a good explanation varies by stakeholder and their relevant needs (Arya et al., 2019; Miller, 2019).
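As a hypothetical illustration of such personalization, an explanation service might key the style of transparency it offers on the stakeholder type. The mapping below is an assumed example of ours, not a taxonomy endorsed by the participants.

```python
# Hypothetical sketch of stakeholder-dependent explanation selection.
# The stakeholder categories and explanation styles are illustrative
# assumptions, not recommendations from the workshop.
DEFAULT_STYLE = "short natural-language summary of the main factors"

EXPLANATION_STYLE = {
    "model developer": "full feature attributions plus access to model internals",
    "regulator": "documentation of training data, development process, and audits",
    "domain expert": "case-based comparisons against similar past cases",
    "end user": DEFAULT_STYLE,
}

def explanation_style(stakeholder: str) -> str:
    """Return a transparency style for a given stakeholder type."""
    return EXPLANATION_STYLE.get(stakeholder, DEFAULT_STYLE)

print(explanation_style("regulator"))
```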

To further probe these contexts and understand what stakeholders actually need from explanations, many participants pointed to the need for explainable ML to incorporate expertise from other disciplines. Introducing researchers from human-computer interaction and user experience research as well as bringing in community experts were seen as ways to enable participatory development and to ensure the applicability of explainable ML methods.

Another dimension of context that participants noted is that ML systems represent a chain of models, data, and human decisions (Lawrence, 2019), or, in other words, a distinctly sociotechnical system (see Selbst et al. (2019) for a summary of common issues faced with sociotechnical systems). An organization with many models in production will require different levels and styles of transparency for each stakeholder in order to operate cohesively. At times, these transparency requirements can be just a matter of disclosing the process. Though, making that infor
