
Feature article

1Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands

2Julius Center, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands

Correspondence to

Dr Juan Manuel Duran, Delft University of Technology, Delft 2600, The Netherlands;

j.m.duran@tudelft.nl

Received 19 August 2020 Revised 11 January 2021 Accepted 8 February 2021 Published Online First 18 March 2021

Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI

Juan Manuel Duran,1 Karin Rolanda Jongsma2

ABSTRACT

The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that computational processes are indeed methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency and supports the reliability of algorithms, justifies the belief that results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms, even when the results are trustworthy. Having justified knowledge from reliable indicators is, therefore, necessary but not sufficient for normatively justifying physicians to act.

This means that deliberation about the results of reliable algorithms is required to find out what is a desirable action. Thus understood, we argue that such challenges should not dismiss the use of black box algorithms altogether but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informatics and data scientists, black box algorithms can contribute to improving medical care.


© Author(s) (or their employer(s)) 2021. No commercial re-use. See rights and permissions. Published by BMJ.

To cite: Duran JM, Jongsma KR. J Med Ethics 2021;47:329-335.

BACKGROUND

The use of advanced data analytics, algorithms and artificial intelligence (AI) enables the analysis of complex and large data sets, which can be applied in many fields of society. In medicine, the development of AI has spawned optimism regarding the enablement of personalised care, better prevention, faster detection, more accurate diagnosis and treatment of disease.1 2 Aside from the excitement about new possibilities, this emerging technology is also paired with serious ethical and epistemic challenges.

Algorithms are being developed in several forms, ranging from very simple and transparent structures to sophisticated self-learning forms that continuously test and adapt their own analysis procedures.3 4 It is these latter types of algorithms that are often referred to as black box algorithms.2 5 At their core, black box algorithms are algorithms that humans cannot survey, that is, they are epistemically opaque systems that no human or group of humans can closely examine in order to determine their inner states.6 Typically, black box algorithms do not follow well-understood rules (as, for instance, a Boolean Decision Rules algorithm does), but can be ‘trained’ with labelled data to recognise patterns or correlations in data, and as such can classify new data. In medicine, such self-learning algorithms can fulfil several roles and purposes: they are used to detect illnesses in image materials such as X-rays,7 they can prioritise information or patient files8 and can provide recommendations for medical decision-making.9 10 The training of such systems is typically done with thousands of data points. Their accuracy, in turn, is tested against a different set of data points of which the labelling is known (ie, done by humans). Interestingly, even if we claim understanding of the underlying labelling and mathematical principles governing the algorithm, it is still complicated and often even impossible to claim insight into the internal workings of such systems. Take, for example, an algorithm that can accurately detect skin cancer in medical images as well as support the diagnostic accuracy of physicians. Physicians may be able to interpret—and even verify in many cases—the outcome of such algorithms.11-13 But unfortunately, a black box algorithm is opaque, meaning that the physician cannot offer an account of how the algorithm came to its recommendation or diagnosis.
This is a challenge for medical practice as it raises thorny epistemological and ethical questions that this article intends to address: Do we have sufficient reasons to trust the diagnosis of opaque algorithms when we cannot entrench how it was obtained? Can physicians be deemed responsible for medical diagnosis based on AI systems that they cannot fathom? How should physicians act on inscrutable diagnoses?
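The training-and-validation workflow described above (fit on human-labelled data, then measure accuracy on a held-out labelled set) can be sketched with a toy stand-in for an opaque classifier. The model, feature names and data below are invented for illustration and do not come from the article:

```python
# Hypothetical stand-in for an opaque classifier: a 1-nearest-neighbour
# model over two invented features (eg, lesion size and colour intensity).
def train(examples):
    # "Training" here simply memorises the labelled data points.
    return list(examples)

def predict(model, x):
    # Classify a new point by the label of its closest training example.
    nearest = min(model, key=lambda ex: (ex[0][0] - x[0])**2 + (ex[0][1] - x[1])**2)
    return nearest[1]

# Labelled data: features -> label, with labelling done by humans.
labelled = [((0.10, 0.20), "benign"), ((0.20, 0.10), "benign"),
            ((0.80, 0.90), "malignant"), ((0.90, 0.80), "malignant"),
            ((0.15, 0.25), "benign"), ((0.85, 0.75), "malignant")]

# The held-out test set is never seen during training.
train_set, test_set = labelled[:4], labelled[4:]
model = train(train_set)

# Accuracy is measured only against the held-out, human-labelled points.
accuracy = sum(predict(model, x) == y for x, y in test_set) / len(test_set)
print(accuracy)  # 1.0 on this toy data
```

The point of the sketch is that accuracy can be established against human labels even when the model's internal decision process is not inspected.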

The epistemological opacity that characterises black box algorithms seems to be in conflict with much of the discursive practice of giving and asking for reasons to believe in the results of an algorithm, which is at the basis of ascriptions of moral responsibility. Concerns relate to problems of accountability and transparency in the use of black box algorithms,14-17 to (hidden) discrimination and bias emerging from opaque algorithms,18-21 and to uncertain outcomes that potentially undermine the epistemic authority of experts using black box algorithms.11 22 23 Especially in the field of medicine, scholars have lately argued that black box algorithms should be neither accepted nor trusted as standard practice, principally because they lack features that are essential to

J Med Ethics: first published as 10.1136/medethics-2020-106820 on 18 March 2021.


good medical practice. Rudin has gone even further, suggesting that black box algorithms must be excluded from highly sensitive practices, such as medicine.24

While these moral concerns are genuine, they neglect the epistemological bases that are their conditions of possibility. We propose, instead, that the epistemology of algorithms is prior to, and at the basis of, studies on the ethics of algorithms. In other words, we make visible the epistemic basis that—to a certain extent—governs normative claims. However, this epistemic trust does not come with normative justification, and therefore, justified actions cannot be based on this knowledge alone. Our strategy consists in showing, first, that computational reliabilism (CR) offers the right epistemic conditions for the reliability of black box algorithms and the trustworthiness of results in medical AI; second, we show that prominent ethical questions with regard to decision-making emerge in the context of using black box algorithms in medicine. These questions concern the role of responsibility, professional expertise, patient autonomy and trust.

This article is structured as follows: in section 2, we will clarify underlying notions in the debate about black box algorithms, including notions such as transparency, and methodological and epistemological opacity. Subsequently, we describe CR as a suitable framework for justifying the reliability of an algorithm as well as providing reasonable levels of confidence about the results of opaque algorithms. It is important to clarify that our claim is not that crediting reliability to an algorithm justifies its use in all cases and for all purposes. A reliable algorithm might still negatively influence experts' decisions in different ways,1 4 or forge a less resilient healthcare system by, for instance, outsourcing decision-making to algorithms at the cost of proper training for healthcare personnel. Although these are important implications of reliable algorithms, they fall outside the scope of this paper. In section 3, we describe the ethical concerns that remain in the context of reliable black box algorithms. In the conclusion, we will show how our analysis contributes to a more nuanced and constructive understanding of the limitations of opaque algorithms and their implications for clinical practice.

TRANSPARENCY AND OPACITY

If we are unable to entrench reliable knowledge from medical AI, what reasons do physicians have to follow its diagnoses and treatment suggestions? This is a question about claims of epistemic trust over the AI system and its output.25-27 Answers typically revolve around two core concepts, namely, transparency and opacity. The former refers to algorithmic procedures that make the inner workings of a black box algorithm interpretable to humans. To this end, an interpretable predictor is set out in the form of an exogenous algorithm capable of making visible the variables and relations acting within the black box algorithm and which are responsible for its outcome.28 Opacity, on the other hand, concerns the inherent impossibility for humans to survey an algorithm, understood both as a script and as a computer process.6 29

The fundamental difference between transparency and opacity lies in that opacity is about claims on the non-surveyability of algorithms, whereas transparency contends that some relevant degree of surveyability is, indeed, possible. This contrast can be illustrated with a simple example. Consider any given algorithm A. To make A transparent is to have an interpretable predictor with procedures P = {p1, p2, …, pn}, where any given pi (1 ≤ i ≤ n) describes a sequence of specific relations among variables and functions in A, and where pi entails the results of A. Thus understood, if A is an algorithm for classifying different types of skin cancer, P would realistically include procedures that relate the size, the shape and the colour of the mole with outputs such as ‘melanomas; squamous cell carcinomas; basal cell carcinomas; nevi; seborrheic keratoses’. Transparency, so understood, is an epistemic manoeuvre intended to offer reasons to believe that certain algorithmic procedures render a reliable output. Furthermore, according to the partisan of transparency, such a belief also entails that the output of the algorithm is interpretable by humans.
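An interpretable predictor of the kind just described can be written out as explicit procedures p1, …, pn relating mole features to output classes. The feature names, thresholds and labels in this toy sketch are invented for illustration and are not taken from any real diagnostic system:

```python
# A toy interpretable predictor P = {p1, p2, p3}: each procedure is an
# explicit, human-readable rule over invented mole features.
def interpretable_predictor(size_mm, asymmetric, colour_variegation):
    # p1: large, asymmetric, multi-coloured lesions -> melanoma
    if size_mm > 6 and asymmetric and colour_variegation:
        return "melanoma"
    # p2: large but otherwise unremarkable lesions -> seborrheic keratosis
    if size_mm > 6:
        return "seborrheic keratosis"
    # p3: everything else -> nevus (benign mole)
    return "nevus"

print(interpretable_predictor(8, True, True))    # melanoma
print(interpretable_predictor(8, False, False))  # seborrheic keratosis
print(interpretable_predictor(3, False, False))  # nevus
```

Because each pi is an explicit rule, a human can survey exactly which relations among the variables entail a given output, which is precisely what the black box case lacks.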

Opacity is a different animal altogether. At its core, it is the claim that no human agent (or group of agents) is able to follow the computational processes that bring about a given algorithmic output.
