Explainable Artificial Intelligence: Unveiling What Machines Are Learning

Explain or perish!

Every day, each person generates an enormous quantity and diversity of data, which, together with the increase in available computational power and the democratisation of access to the data itself, has created the ideal conditions for the generalised use of machine learning models in several contexts and areas (e.g., law, finance, healthcare, commerce, politics, biometrics and multimedia). Moreover, while the first generation of machine learning algorithms required some prior knowledge of data processing and feature engineering and was used to solve easy tasks, the current trend focuses more on solving harder tasks, using multiple data sources and extraordinarily complex models such as deep learning (Figure 1) or model ensembles. Interestingly, this newer generation of machine learning is the one achieving human-level performance in various areas (e.g., cancer detection in biomedical imaging [1]). Although this may seem promising, it is now important to address the limitations of these high-performing models, to have a better sense of the big picture.


Perhaps the most pertinent limitation of these complex models is the fact that they work as black boxes (i.e., their internal logic is hidden from the user) that receive data and output results without properly explaining their decisions in a human-comprehensible way. Therefore, for certain highly regulated areas (e.g., healthcare, law or finance), the outcome is that we end up training high-performing models that will never be used in a real-world context, since this lack of transparency leads to a deficit of trust which may jeopardise the acceptance of such technologies by the area-experts (Figure 2). Besides, there is an inherent tension between machine learning performance (predictive accuracy) and explainability: usually, the best-performing methods (e.g., deep learning) are the least transparent, and the ones providing a clear explanation (e.g., decision trees) are less accurate.


Regulation is also pointing the debate in this direction. For instance, if we look at the EU-GDPR, we end up concluding that transparency, for the sake of ethics and fairness, is one of the priorities. An interesting example is the EU-GDPR’s Article 22, which states that individuals “have the right not to be subject to a decision based solely on automated processing”. This statement has interesting outcomes: on one hand, it states that the users of such technologies have the right to know the mechanisms behind how these technologies work and, on the other hand, they have the right not to be subject to the decisions made by these algorithms [2].


[Image: icons by Prosymbols, Smashicons, Freepik and Becris]

At first sight, this may seem a complicated list of problems to be addressed by the research and technology community. However, we should see it as an opportunity to develop more transparent algorithms, capable of being understood by the area-experts and, as the final goal, of being deployed in a real-world context.


Unboxing the black box

There are several ways to study interpretability in machine learning: working with models that are interpretable by design (inherently interpretable) or using post hoc (and model-agnostic) interpretation methods. Depending on the purpose of the study or the field of application, both pathways are reasonable. Besides, it is also important to understand whether we are dealing with structured (e.g., tables) or unstructured (e.g., images) data in order to choose the best interpretability approach [3].


For instance, let’s assume that we are dealing with structured data (e.g., a set of features and their corresponding labels). Probably, the easiest way to achieve interpretability is to use algorithms that generate interpretable models such as linear regression, logistic regression, decision trees or decision rules. This type of algorithm usually respects important properties such as linearity (the association between features and labels is modelled linearly), monotonicity (the relation between a feature and the label always goes in the same direction over the entire range of the feature) or feature interactions (e.g., multiplying two features to generate a new one). On the other hand, by preference or by necessity, you can fit your data to complex models and then use an external (agnostic) interpretation method to understand what is being learned by the model. The great advantage of this approach is flexibility, since you can detach yourself from the type of model and focus on the performance itself [3]. Post hoc methods worth reading about include the partial dependence plot (PDP) [4], the individual conditional expectation (ICE) plot [5], permutation feature importance [6], local interpretable model-agnostic explanations (LIME) [7] and Shapley values [8]. Figure 3 presents an example of the LIME algorithm.

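To make these two routes concrete, here is a minimal sketch, assuming a synthetic tabular dataset built with scikit-learn (the feature names and all parameters are purely illustrative): a shallow decision tree whose rules can be read directly, and a random forest inspected post hoc with permutation feature importance.

```python
# A minimal sketch of the two routes described above, using scikit-learn and
# a synthetic binary classification problem (all names here are illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Route 1: an inherently interpretable model, a shallow decision tree whose
# rules can be printed and read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=[f"x{i}" for i in range(X.shape[1])]))

# Route 2: a black-box model plus a post hoc, model-agnostic method.
# Permutation feature importance measures how much the test score drops
# when each feature is randomly shuffled.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"x{i}: {importance:.3f}")
```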

Figure 3: LIME algorithm for tabular data. A) Random forest predictions given features x1 and x2. Predicted classes: 1 (dark) or 0 (light). B) Instance of interest (big dot) and data sampled from a normal distribution (small dots). C) Assign higher weight to points near the instance of interest. D) Signs of the grid show the classifications of the locally learned model from the weighted samples. The white line marks the decision boundary (P(class=1) = 0.5). Image and description from Christoph Molnar [3].
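As a rough illustration of how a local surrogate like the one in Figure 3 is obtained in practice, the sketch below uses the lime package on the same kind of synthetic tabular setup as above; the model, data and parameters are placeholders, not the configuration used to produce the figure.

```python
# A sketch of a LIME-style local explanation with the `lime` package,
# on an illustrative synthetic dataset and random forest.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

feature_names = [f"x{i}" for i in range(X_train.shape[1])]
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["0", "1"], mode="classification"
)

# LIME perturbs the instance, weights the perturbed samples by proximity and
# fits a small linear surrogate locally (as sketched in Figure 3).
explanation = explainer.explain_instance(X_test[0], forest.predict_proba, num_features=3)
print(explanation.as_list())  # (feature condition, local weight) pairs
```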

Now assume that we are dealing with image data and a deep convolutional neural network (CNN) to perform image classification. In this case, we aim to understand not only the type of features that are being learned by the CNN but also the pixels that contribute most to the final decision. Once again, generally, for this type of computer vision application, we use post hoc methods to deconstruct the black box. Current state-of-the-art methods use diversified approaches to obtain explanations: SmoothGrad generates gradient-based sensitivity maps by adding noise to the input [9]; Class Activation Mapping (CAM) [10] and Gradient-weighted Class Activation Mapping (Grad-CAM) [11] map the predicted class score back to the previous convolutional layer to produce class activation maps; the deconvolutional network (DeConvNet) [12], guided backpropagation (Guided BackProp) [13] (Figure 4) and PatternNet [14] try to map the activations of the network back into the pixel input space to discover which patterns originally caused a given activation in the feature maps; and Layer-Wise Relevance Propagation (LRP) [15], Deep Taylor Decomposition (DTD) [16] (Figure 5), Deep Learning Important Features (DeepLIFT) [17], Integrated Gradients [18] and PatternAttribution [14] aim to evaluate how signal dimensions contribute to the output through the layers.

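As a hedged illustration of the simplest member of this family, the sketch below computes a plain gradient-based sensitivity map in PyTorch; this raw map is the quantity that SmoothGrad, for example, averages over several noise-perturbed copies of the input. The pretrained ResNet-18 and the random tensor standing in for an image are assumptions for illustration only.

```python
# A bare-bones gradient-based saliency map for an image classifier in PyTorch.
# The pretrained ResNet-18 and the random "image" are placeholders.
import torch
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()  # any image classifier would do
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real, normalised image

logits = model(image)
target_class = logits.argmax(dim=1)

# Backpropagate the score of the predicted class down to the input pixels.
logits[0, target_class.item()].backward()

# Saliency: absolute gradient, max over colour channels, one value per pixel.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```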

Figure 4: Guided BackProp algorithm, from Springenberg et al. [13].

Following the current trend in this topic, there are already off-the-shelf packages and frameworks that can be used by less experienced practitioners. Some interesting ones are lime, Shap, iNNvestigate, keras-vis and Captum.

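For instance, here is a minimal sketch of how one of these packages, Captum, might be used: Integrated Gradients applied to a pretrained classifier. The ResNet-18 model, the random input tensor and the chosen target class are placeholders, not a recommended setup.

```python
# A minimal sketch of one of the off-the-shelf options listed above: Captum's
# Integrated Gradients applied to a pretrained classifier (placeholders only).
import torch
from captum.attr import IntegratedGradients
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)  # stand-in for a real, normalised image

ig = IntegratedGradients(model)
# Attribute the score of class 0 to the input pixels, integrating gradients
# along a straight path from an all-zero baseline to the actual input.
attributions = ig.attribute(image, baselines=torch.zeros_like(image), target=0, n_steps=50)
print(attributions.shape)  # same shape as the input: (1, 3, 224, 224)
```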

Figure 5: DTD algorithm, from Bach et al. [16].

Meta-Explainability

From the last section, one can infer that there is no shortage of interpretability methods available in the literature. Nonetheless, there are some problems associated with most of them, as the explanations they provide are often difficult for a human to understand, jeopardising their very purpose. This exact gap between current interpretability methods and the explanation sciences (which describe the way humans explain and understand) is described in a recent paper by Mittelstadt et al. [19]. As pointed out by the authors of the paper, the available interpretability methods are more focused on scientific modelling than on giving explanations. Therefore, we need a different approach to interpretability, or meta-explainability, i.e., explaining the explanations provided by interpretability or explainable artificial intelligence (xAI) methods.


A possible way we consider to overcome the need for meta-explainability is to provide different types of explanations for the same case, maximising the probability of having at least one explanation that is understandable to the explanation consumer. For example, consider a given chest X-ray and a certain machine learning model. The machine learning model analyses the X-ray and classifies it according to the disease detected. For a radiologist, it would probably make sense to explain the decision of the model by providing an interpretability saliency map, highlighting the regions of the image that were relevant for the decision. On the other hand, for a general clinician, it would probably make more sense to explain it by providing a textual description of what is happening in the image (since they are not as familiar with the images as radiologists are). If our system provided both explanations, both explanation consumers would be satisfied, and our problem would be solved.



Another possible approach is to completely rethink the way interpretability methods provide explanations, bringing it closer to the way humans think and behave. An idea explored in the paper by Mittelstadt et al. [19] is building the explanations in a contrastive way. Therefore, instead of explaining the features that led to the decision, it may make sense to explain the decision based on what made the case different from “normal”. Moreover, the explanations should be adapted to the subjective interests of the explanation consumer and should be interactive, allowing their iterative improvement.


In summary, a lot of work has been done in the last years to develop interpretability methods able to explain the decisions of machine learning models. Nonetheless, most of the work was developed from a pure computer science perspective, not taking into account all the insights from the fields of medicine, psychology, philosophy or sociology, and therefore, falling short in terms of understandability for humans.


INESC TEC work on xAI

Since 2017, our group at INESC TEC has been actively working on xAI. We have been addressing several relevant topics related to the interpretability domain, namely:


· The generation of explanations to satisfy all types of consumers;


· Analysing what type of features state-of-the-art anti-spoofing models focus on when classifying images as bona fide (real) or attack;


· The use of interpretability as an intermediate step to improve the existing machine learning models.


The generation of explanations to satisfy all types of consumers has received a lot of our attention. As previously pointed out, different people have different preferences and domain knowledge, and that also translates to the kind of explanations that they would like to receive.


In the past, we have proposed two machine learning models (one deep neural network and one ensemble of models) [20, 21] that embed in their architectures properties which increase their inherent interpretability (e.g., monotonicity, sparsity), and from which we were able to extract different types of explanations, or, as we call them, complementary explanations. These complementary explanations consist of rule-based textual explanations and case-based visual explanations. Both types of explanations are provided for the same sample, which means that our approach increases the chance of satisfying all kinds of explanation consumers.


The second topic on which we have been focusing our efforts is the understanding of anti-spoofing machine learning models. Part of our team has worked in forensics and biometrics for years, trying to improve the machine learning models used in these fields. However, an interpretability study to understand what the machine learning models really learn remained largely undone. In a very recent work, we explored interpretability techniques to further assess the quality of the machine learning models used for anti-spoofing. With that goal in mind, we defined several properties we consider ideal for an anti-spoofing model, and we verified their fulfilment through thorough experimental work. For instance, one of these ideal properties consists of having similar explanations for a sample regardless of it being present in the training set or only in the test set (Figure 8). By obeying this rule, the model demonstrates that it is robust and coherent.

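As an illustration only (this is not the authors' evaluation protocol), one simple way to quantify such a property is to compare the explanation produced for a sample by a model that saw it during training with the explanation produced by a model that did not, for example via the correlation of the two saliency maps.

```python
# Illustrative check of explanation stability across train/test membership.
# The two saliency maps are random placeholders for real explanations.
import numpy as np

saliency_seen = np.random.rand(224, 224)    # explanation from a model trained with the sample
saliency_unseen = np.random.rand(224, 224)  # explanation from a model that never saw it

# Pearson correlation between the flattened maps: values close to 1 suggest
# the explanation is stable regardless of the sample being in train or test.
correlation = np.corrcoef(saliency_seen.ravel(), saliency_unseen.ravel())[0, 1]
print(f"explanation similarity (Pearson r): {correlation:.3f}")
```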

Figure 7: Example of a test image and the related complementary explanations. For each test image, there is an automatic decision associated, and two types of explanations, one rule-based and one case-based. Figure from Silva et al. [20].
Figure 8: Example of explanations for the same sample when it is in train or in test. Figure from Sequeira et al. [22].

Finally, we are also investigating ways of leveraging interpretability to improve machine learning models. One of our most recent works tackles precisely that. We took advantage of the fact that the interpretability saliency maps of a well-trained model help us localise the regions of interest of an image, and fine-tuned an existing classification model with these same saliency maps to improve its performance as a medical image retrieval system. By doing that, we were able to improve the retrieval process considerably, producing results very much in line with the ones obtained by an experienced radiologist [23].


Most of the interpretability work developed at INESC TEC and mentioned in this article was performed under the umbrella of CLARE (Computer-Aided Cervical Cancer Screening), an FCT-financed project in which we are collaborating with Fraunhofer Portugal. Moreover, since April we have also been involved in TAMI (Transparent Artificial Medical Intelligence), where we are collaborating with the company First Solutions, Fraunhofer Portugal, ARS-Norte and Carnegie Mellon University to improve the transparency of algorithms in medical decision support systems.


If you are interested in this topic, we invite you to read the survey on machine learning interpretability developed by our research group [24] and, for the coming months and years, stay tuned! We will keep devoting our efforts to the xAI field and will do our best to tackle the limitations of today’s interpretability and machine learning models.


Short bio of authors

This article was written by INESC TEC researchers Wilson Silva and Tiago Gonçalves in August 2020.


Wilson Silva is a PhD Candidate in Electrical and Computer Engineering at Faculdade de Engenharia da Universidade do Porto (FEUP) and a research assistant at INESC TEC, where he is associated with the Visual Computing & Machine Intelligence (VCMI) and Breast research groups. He holds an integrated master (BSc + MSc) degree in Electrical and Computer Engineering obtained from FEUP in 2016. His main research interests include Machine Learning and Computer Vision, with a particular focus on Explainable Artificial Intelligence and Medical Image Analysis.


Tiago Gonçalves received his MSc in Bioengineering (Biomedical Engineering) from Faculdade de Engenharia da Universidade do Porto (FEUP) in 2019. Currently, he is a PhD Candidate in Electrical and Computer Engineering at FEUP and a research assistant at the Centre for Telecommunications and Multimedia of INESC TEC with the Visual Computing & Machine Intelligence (VCMI) Research Group. His research interests include machine learning, explainable artificial intelligence (in-model approaches), computer vision, medical decision support systems, and machine learning deployment.


[1] S. M. McKinney, M. Sieniek, V. Godbole, J. Godwin, N. Antropova, H. Ashrafian, T. Back, M. Chesus, G. C. Corrado, A. Darzi, et al., “International evaluation of an AI system for breast cancer screening,” Nature, vol. 577, no. 7788, pp. 89–94, 2020.

[2] M. E. Kaminski, “The right to explanation, explained,” Berkeley Tech. LJ, vol. 34, p. 189, 2019.

[3] C. Molnar, Interpretable Machine Learning. 2019. https://christophm.github.io/interpretable-ml-book/.

[4] J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” Annals of Statistics, pp. 1189–1232, 2001.

[5] A. Goldstein, A. Kapelner, J. Bleich, and E. Pitkin, “Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation,” Journal of Computational and Graphical Statistics, vol. 24, no. 1, pp. 44–65, 2015.

[6] L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.

[7] M. T. Ribeiro, S. Singh, and C. Guestrin, ““Why should I trust you?” Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, 2016.

[8] L. S. Shapley, “A value for n-person games,” Contributions to the Theory of Games, vol. 2, no. 28, pp. 307–317, 1953.

[9] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, “SmoothGrad: removing noise by adding noise,” arXiv preprint arXiv:1706.03825, 2017.

[10] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.

[11] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626, 2017.

[12] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision, pp. 818–833, Springer, 2014.

[13] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.

[14] P.-J. Kindermans, K. T. Schütt, M. Alber, K.-R. Müller, D. Erhan, B. Kim, and S. Dähne, “Learning how to explain neural networks: PatternNet and PatternAttribution,” arXiv preprint arXiv:1705.05598, 2017.

[15] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K.-R. Müller, “Explaining nonlinear classification decisions with deep Taylor decomposition,” Pattern Recognition, vol. 65, pp. 211–222, 2017.

[16] S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PLoS ONE, vol. 10, no. 7, p. e0130140, 2015.

[17] A. Shrikumar, P. Greenside, and A. Kundaje, “Learning important features through propagating activation differences,” arXiv preprint arXiv:1704.02685, 2017.

[18] M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” arXiv preprint arXiv:1703.01365, 2017.

[19] B. Mittelstadt, C. Russell, and S. Wachter, “Explaining explanations in AI,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 279–288, 2019.

[20] W. Silva, K. Fernandes, M. J. Cardoso, and J. S. Cardoso, “Towards complementary explanations using deep neural networks,” in Understanding and Interpreting Machine Learning in Medical Image Computing Applications, pp. 133–140, Springer, 2018.

[21] W. Silva, K. Fernandes, and J. S. Cardoso, “How to produce complementary explanations using an ensemble model,” in 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8, IEEE, 2019.

[22] A. F. Sequeira, W. Silva, J. R. Pinto, T. Gonçalves, and J. S. Cardoso, “Interpretable biometrics: Should we rethink how presentation attack detection is evaluated?,” in 2020 8th International Workshop on Biometrics and Forensics (IWBF), pp. 1–6, IEEE, 2020.

[23] W. Silva, A. Poellinger, J. S. Cardoso, and M. Reyes, “Interpretability-guided content-based medical image retrieval,” in MICCAI, 2020, to be published.

[24] D. V. de Carvalho, E. M. Pereira, and J. S. Cardoso, “Machine learning interpretability: A survey on methods and metrics,” Electronics (section: Artificial Intelligence), 2019.

Source: https://medium.com/@inesctec/explainable-artificial-intelligence-unveiling-what-machines-are-learning-91b96a63a07a
