Knowing Known Unknowns with Deep Neural Networks

Deep neural networks (DNNs) are easy-to-implement, versatile machine learning models that can achieve state-of-the-art performance in many domains (for example, computer vision, natural language processing, speech recognition, recommendation systems). DNNs, however, are not perfect. You can read any number of articles, blog posts, and books discussing the various problems with supervised deep learning. In this article we’ll focus on a (relatively) narrow but major issue: the inability of a standard DNN to reliably indicate when it is uncertain about a prediction. For a Rumsfeldian take on it: the inability of DNNs to know “known unknowns.”

As a simple example of this failure mode in DNNs, consider training a DNN for a binary classification task. You might reasonably presume that the softmax (or sigmoid) output of a DNN could be used to measure how certain or uncertain the DNN is in its prediction; you would expect that a softmax output close to 0 or 1 indicates certainty, and an output close to 0.5 indicates uncertainty. In reality, the softmax outputs are rarely close to 0.5 and are, more often than not, close to 0 or 1 regardless of whether the DNN is making a correct prediction. Unfortunately, this fact makes naive uncertainty estimates (for instance, the entropy of the softmax outputs) unreliable.

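As a minimal illustration of that naive estimate, the snippet below computes the entropy of the softmax distribution over a network’s logits; the logit values are made up to show how an overconfident network yields near-zero entropy whether or not it is right.

```python
import torch
import torch.nn.functional as F

def softmax_entropy(logits):
    """Naive uncertainty estimate: entropy of the softmax distribution.

    Near 0 for peaked (confident) outputs; maximal (ln 2 for two classes)
    for a uniform output.
    """
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum(dim=-1)

print(softmax_entropy(torch.tensor([[8.0, -8.0]])))  # ~0: "certain," right or wrong
print(softmax_entropy(torch.tensor([[0.1, -0.1]])))  # ~0.69: near-maximal uncertainty
```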

To be fair, uncertainty estimates are not needed for every application of a DNN. If a social media company uses a DNN to detect faces in images so that its users can more easily tag their friends, and the DNN fails, then the failure of the method is nearly inconsequential. A user might be slightly inconvenienced, but in low-stakes environments like social media or advertising, uncertainty estimates aren’t vital to creating value from a DNN.

In high-stakes environments, however, like self-driving cars, health care, or military applications, a measure of how uncertain the DNN is in its prediction could be vital. Uncertainty measurements can reduce the risk of deploying a model because they can alert a user to the fact that a scenario is either inherently difficult to make predictions in, or that the scenario has not been seen by the model before.

In a self-driving car, it seems plausible that a DNN should be more uncertain about predictions at night (at least in the measurements coming from optical cameras) because of the lower signal-to-noise ratio. In health care, a DNN that diagnoses skin cancer should be more uncertain if it were shown a particularly blurry image, especially if the model had not seen such blurry examples in the training set. In a model to segment satellite imagery, a DNN should be more uncertain if an adversary changed how they disguise certain military installations. If the uncertainty inherent in these situations were relayed to the user, the information could be used to change the behavior of the system in a safer way.

In this article, we explore how to estimate two types of statistical uncertainty alongside a prediction in a DNN. We first discuss the definition of both types of uncertainty, and then we highlight one popular and easy-to-implement technique to estimate these types of uncertainty. Finally, we show and implement some examples for both classification and regression that make use of these uncertainty estimates.

For those who are most interested in looking at code examples, here are two Jupyter Notebooks: one with a toy regression example and the other with a toy classification example. There are also PyTorch-based code snippets in the “Examples and Applications” section below.

What do we mean by ‘uncertainty’?

Uncertainty is defined by the Cambridge Dictionary as: “a situation in which something is not known.” There are several reasons why something may not be known, and — taking a statistical perspective — we will discuss two types of uncertainty called aleatory (sometimes referred to as aleatoric) and epistemic uncertainty.

Aleatory uncertainty relates to an objective or physical concept of uncertainty — it is a type of uncertainty that is intrinsic to the data-generating process. Since aleatory uncertainty has to do with an intrinsic quality of the data, we presume it cannot be decreased by collecting more data; that is, it is irreducible.

Aleatory uncertainty can be explained best with a simple example: Suppose we have a coin which has some positive probability of being heads or tails. Then, even if the coin is biased, we cannot predict — with certainty — what the next toss will be, regardless of how many observations we make. (For instance, if the coin is biased such that heads turn up with probability 0.9, we might reasonably guess that heads will show up in the next toss, but we cannot be certain that it will happen.)

Epistemic uncertainty relates to a subjective or personal concept of uncertainty — it is a type of uncertainty due to knowledge or ignorance of the true data-generating process. Since this type of uncertainty has to do with knowledge, we presume that it can be decreased (for example, when more data has been collected and used for training); that is, it is reducible.

Epistemic uncertainty can be explained with a regression example. Suppose we are fitting a linear regression model and we have independent variables x between -1 and 1, and corresponding dependent variables y for all x. Suppose we chose a linear model because we believe that when x is between -1 and 1, the model is linear. We don’t, however, know what happens when a test sample x* is far outside this range; say at x* = 100. So, in this scenario, there is uncertainty about the model specification (for example, the true function may be quadratic) and there is uncertainty because the model hasn’t seen data in the range of the test sample. These uncertainties can be bundled into uncertainty regarding the knowledge of true data-generating distribution, which is epistemic uncertainty.

The terms aleatory and epistemic, with regards to probability and uncertainty, seem to have been brought into the modern lexicon by Ian Hacking in his book “The Emergence of Probability,” which discusses the history of probability from 1600–1750. The terms are not clear for the uninitiated reader, but their definitions are related to the deepest question in the foundations of probability and statistics: What does probability mean? If you are familiar with the terms frequentist and Bayesian, then you will see the respective relationship between aleatory (objective) and epistemic (subjective) uncertainty. I’m not about to solve this philosophical issue in this blog post, but know that the definitions of aleatory and epistemic uncertainty are nuanced, and what falls into which category is debatable. For a more comprehensive (but still applied) review of these terms, take a look at the article: “Aleatory or Epistemic? Does it matter?”

Why is it important to distinguish between aleatory and epistemic uncertainty? Suppose we are developing a self-driving car, and we take a prototype that was trained on normal roads and have it drive through the Monza racing track, which has extremely banked turns.

Since the car hasn’t seen the situation before, we would expect the image segmentation DNN in the self-driving car, for example, to be uncertain because it has never seen the sky nearly to the left of the ground. In this case, the uncertainty would be classified as epistemic because the DNN doesn’t have knowledge of roads like this.

Suppose instead that we take the same self-driving car and take it for a drive on a rainy day; assume that the DNN has been trained on lots of rainy-day conditions. In this situation, there is more uncertainty about objects on the road simply due to lower visibility. In this case, the uncertainty would be classified as aleatory because there is inherently more randomness in the data.

These two situations should be dealt with differently. On the race track, the uncertainty could tell the developers that they need to gather a particular type of training data to make the model more robust, or it could tell the car to try to safely maneuver to a location where it can hand off control to the driver. In the rainy-day situation, the uncertainty could alert the system to simply slow down or enable certain safety features.

Estimating uncertainty in DNNs

There has been a cornucopia of proposed methods to estimate uncertainty in DNNs in recent years. Generally, uncertainty estimation is formulated in the context of Bayesian statistics. In a standard DNN for classification, we are implicitly training a discriminative model where we obtain maximum-likelihood estimates of the neural network weights (depending on the loss function chosen to train the network). This point-estimate of the network weights is not amenable to understanding what the model knows and does not know. If we instead find a distribution over the weights, as opposed to the point-estimate, we can sample network weights with which we can compute corresponding outputs.

Intuitively, this sampling of network weights is like creating an ensemble of networks to do the task: We sample a set of “experts” to make a prediction. If the experts are inconsistent, there is high epistemic uncertainty. If the experts think it is too difficult to make an accurate prediction, there is high aleatory uncertainty.

In this article, we’ll take a look at a popular and easy-to-implement method to estimate uncertainty in DNNs by Yarin Gal and Zoubin Ghahramani. They showed that dropout can be used to learn an approximate distribution over the weights of a DNN (as previously discussed). Then, during prediction, dropout is used to sample weights from this fitted approximate distribution — akin to creating the ensemble of experts.

Epistemic uncertainty is estimated by taking the sample variance of the predictions from the sampled weights. The intuition behind relating sample variance to epistemic uncertainty is that the sample variance will be low when the model predicts nearly identical outputs, and it will be high when the model makes inconsistent predictions; this is akin to when the set of experts consistently makes a prediction and when they do not, respectively.

Simultaneously, aleatory uncertainty is estimated by modifying a DNN to have a second output, as well as by using a modified loss function. Aleatory uncertainty will correspond to the estimated variance of the output. This predicted variance has to do with an intrinsic property of the data, which is why it is related to aleatory uncertainty; this is akin to when the set of experts judges the situation too difficult to make a prediction.

Altogether the final network structure is something like what is shown in Fig. 2. An input x is fed to a DNN with dropout after every layer (dropout after every layer is what was originally specified, but in practice this often makes training too difficult). The output of this DNN is an estimated target ŷ and an estimated variance or scale parameter σ̂.

Fig. 2: Example of a DNN architecture with the capability to estimate aleatory and epistemic uncertainty.
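
A minimal PyTorch sketch of such an architecture follows; the layer sizes, dropout rate, and the class name `DropoutRegressor` are illustrative rather than taken from the notebooks.

```python
import torch.nn as nn

class DropoutRegressor(nn.Module):
    """Sketch of Fig. 2: fully-connected layers with dropout after each one,
    outputting an estimated target and an estimated log-variance."""
    def __init__(self, d_in=1, d_hidden=64, p=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p),
        )
        self.yhat = nn.Linear(d_hidden, 1)     # estimated target
        self.log_var = nn.Linear(d_hidden, 1)  # log of estimated variance

    def forward(self, x):
        h = self.body(x)
        return self.yhat(h), self.log_var(h)
```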

This DNN is trained with a loss function like:

$$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N} \frac{\lVert y_i - \hat{y}_i \rVert^2}{2\hat{\sigma}_i^2} + \frac{1}{2}\log \hat{\sigma}_i^2$$

or

$$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N} \frac{\lvert y_i - \hat{y}_i \rvert}{\hat{\sigma}_i} + \log \hat{\sigma}_i$$

if the network is being trained for a regression task. The first loss function shown above is an MSE variant with uncertainty, whereas the second is an L1 variant. These are derived by assuming a Gaussian or a Laplace distribution for the likelihood, respectively, where each component is independent and the variance (or scale parameter) is estimated and fitted by the network.

As mentioned above, these loss functions have mathematical derivations, but we can intuit why this variance parameter captures a type of uncertainty: The variance parameter provides a trade-off between the variance and the MSE or L1 loss term. If the DNN can easily estimate the true value of the target (that is, get ŷ close to the true y), then the DNN should estimate a low variance term on that so as to minimize the loss. If, however, the DNN cannot estimate the true value of the target (for example, there is low signal-to-noise ratio), then the network can minimize the loss by estimating a high variance. This will reduce the MSE or L1 loss term because that term will be divided by the variance; however, the network should not always do this because of the log variance term which penalizes high variance estimates.

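As a quick sanity check of this intuition, minimizing the per-sample MSE-variant loss over the predicted variance (setting the derivative with respect to σ̂² to zero) gives

$$\frac{\partial}{\partial \hat{\sigma}^2}\left(\frac{\lVert y - \hat{y}\rVert^2}{2\hat{\sigma}^2} + \frac{1}{2}\log\hat{\sigma}^2\right) = 0 \quad\Rightarrow\quad \hat{\sigma}^2 = \lVert y - \hat{y}\rVert^2,$$

so the loss is minimized when the predicted variance matches the squared residual: large errors are absorbed by large predicted variance, while easy examples are pushed toward small predicted variance.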

If the network is being trained for a classification (or segmentation) task, the loss would look something like this two-part loss function:

$$\hat{x}_{i,t} = f_i + \sigma_i \,\epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, I)$$

$$\mathcal{L} = -\sum_i \log \frac{1}{T}\sum_{t=1}^{T} \exp\Big(\hat{x}_{i,t,c_i} - \log \sum_{c'} \exp \hat{x}_{i,t,c'}\Big)$$

where f_i and σ_i are the predicted logits and standard deviations for component i, and c_i is its true class.

The intuition with this loss function is: When the DNN can easily estimate the right class of a component, the value ŷ will be high for that class, and the DNN should estimate a low variance so as to minimize the added noise (so that all samples will be concentrated around the correct class). If, however, the DNN cannot easily estimate the class of the component, the ŷ value will be low, and adding noise can, by chance, push a sample toward the correct class, which lowers the loss overall. (See pg. 41 of Alex Kendall’s thesis for more discussion of this loss function.)

Finally, in testing, the network is sampled T times to create T estimated targets and T estimated variance outputs. These T outputs are then combined in various ways to make the final estimated target and uncertainty estimates as shown in Fig. 3.

Fig. 3: DNN outputs combined into the final estimated target and uncertainty estimates.

Mathematically, the epistemic and aleatory uncertainty are (for the MSE regression variant):

$$\widehat{\mathrm{Var}}_{\text{epistemic}} = \frac{1}{T}\sum_{t=1}^{T} \hat{y}_t^2 - \left(\frac{1}{T}\sum_{t=1}^{T} \hat{y}_t\right)^{2}$$

$$\widehat{\mathrm{Var}}_{\text{aleatory}} = \frac{1}{T}\sum_{t=1}^{T} \hat{\sigma}_t^2$$

There are various interpretations of epistemic uncertainty for the classification case: entropy, sample variance, and mutual information. Each has been shown to be useful in its own right, and the choice among them will be application dependent.

Examples and applications

To make the theory more concrete, we’ll go through two toy examples for estimating uncertainty with DNNs in a regression and a classification task with PyTorch. The code below consists of excerpts from the full implementations, which are available in the Jupyter notebooks (mentioned at the beginning of the next two subsections). Finally, we’ll discuss calculating uncertainty in a real-world data example with medical images.

Regression example

In the regression notebook, we fit a very simple neural network — consisting of two fully-connected layers with dropout on the hidden layer — to one-dimensional input and output data with the MSE variant of the uncertainty loss (implemented below).

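A minimal sketch of the MSE-variant uncertainty loss follows; the class name is illustrative, and the exact implementation is in the notebook.

```python
import torch
import torch.nn as nn

class GaussianUncertaintyLoss(nn.Module):
    """MSE-variant uncertainty loss (a Gaussian negative log-likelihood)."""
    def forward(self, yhat, log_var, y):
        # ||y - yhat||^2 / (2 * sigma^2) + (1/2) * log(sigma^2),
        # with the network predicting log_var = log(sigma^2)
        precision = torch.exp(-log_var)
        return torch.mean(0.5 * precision * (y - yhat) ** 2 + 0.5 * log_var)
```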

Note that instead of fitting the variance term directly, we fit the log of the variance term for numerical stability.

In the regression scenario, we could also use the L1 variant of the uncertainty loss which is in the notebook and implemented below.

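A corresponding sketch of the L1 variant (a Laplace negative log-likelihood), again with an illustrative class name:

```python
import torch
import torch.nn as nn

class LaplaceUncertaintyLoss(nn.Module):
    """L1-variant uncertainty loss (a Laplace negative log-likelihood)."""
    def forward(self, yhat, log_scale, y):
        # |y - yhat| / b + log(b), with the network predicting log_scale = log(b)
        inv_scale = torch.exp(-log_scale)
        return torch.mean(inv_scale * torch.abs(y - yhat) + log_scale)
```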

Sometimes using L1 loss instead of MSE loss results in better performance for regression tasks, although this is application dependent.

The aleatory and epistemic uncertainty estimates in this scenario are then computed as in the implementation below (see the notebook for more context).

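A sketch of that computation: keep dropout active at test time, draw T stochastic forward passes, and combine them as in Fig. 3. It assumes the model returns a `(yhat, log_var)` pair, as in the sketches above.

```python
import torch

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    """Monte Carlo dropout: sample T forward passes and combine the outputs."""
    model.train()  # keep dropout layers active at test time
    yhats, log_vars = [], []
    for _ in range(n_samples):
        yhat, log_var = model(x)
        yhats.append(yhat)
        log_vars.append(log_var)
    yhats = torch.stack(yhats)                                # (T, ...)
    mean = yhats.mean(dim=0)                                  # final estimated target
    epistemic = yhats.var(dim=0)                              # sample variance of predictions
    aleatory = torch.exp(torch.stack(log_vars)).mean(dim=0)   # mean predicted variance
    return mean, epistemic, aleatory
```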

In Fig. 4, we visualize the fit function and the uncertainty results. In the plot to the far right, we show the thresholded epistemic uncertainty which demonstrates the capabilities of uncertainty estimates to detect out-of-distribution data (at least in this toy scenario).

Fig. 4: Various types of uncertainty for a regression example. Original training data in orange. The two plots on the left show the function fit by the neural network in blue, with aleatory and epistemic uncertainty in the first and second plots, respectively. The plot on the far right shows a thresholded epistemic uncertainty. See the Jupyter Notebook for the full implementation.

Classification example

In the classification notebook, we, again, fit a neural network composed of two fully-connected layers with dropout on the hidden layer. In this case, we are trying to do binary classification. Consequently, the loss function is as implemented below.

分类笔记本中 ,我们再次拟合一个由两个完全连接的层组成的神经网络,在隐藏层上具有辍学功能。 在这种情况下,我们尝试进行二进制分类。 因此,损失函数如下所述。

There are numerous uncertainty estimates we could compute in this scenario. In the implementation below, we calculate epistemic, entropy, and aleatory uncertainty. Entropy could reasonably be argued to belong to either the aleatory or the epistemic category, but below it is separated out so that aleatory and epistemic uncertainty are calculated as previously described.

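A sketch of those three measures under MC dropout follows; it again assumes the model returns a `(logits, log_var)` pair, and it sums the per-class values to give one scalar per example (an illustrative choice).

```python
import torch

@torch.no_grad()
def classification_uncertainty(model, x, n_samples=50):
    """Epistemic, entropy, and aleatory uncertainty estimates for a classifier."""
    model.train()  # keep dropout layers active at test time
    probs, variances = [], []
    for _ in range(n_samples):
        logits, log_var = model(x)
        probs.append(torch.softmax(logits, dim=1))
        variances.append(torch.exp(log_var))
    probs = torch.stack(probs)                                 # (T, B, C)
    p_mean = probs.mean(dim=0)                                 # (B, C)
    epistemic = probs.var(dim=0).sum(dim=1)                    # sample variance across passes
    entropy = -(p_mean * torch.log(p_mean + 1e-8)).sum(dim=1)  # entropy of mean prediction
    aleatory = torch.stack(variances).mean(dim=0).sum(dim=1)   # mean predicted variance
    return epistemic, entropy, aleatory
```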

In Fig. 5, we visualize the resulting epistemic and aleatory uncertainty, as well as entropy, over the training data. As we can see, the training data classes overlap near zero, and the uncertainty measures peak there. In this toy example, all three measures of uncertainty are highly correlated. Discussion as to why is provided in the notebook for the interested reader.

Fig. 5: Various measures of uncertainty for a binary classification example. See the Jupyter Notebook for the full implementation.

Medical image example

In this last example, I’ll show some results and applications of uncertainty in a real-world example published as a conference paper (pre-print here). The task explored is an image-to-image translation task, akin to the notable pix2pix example, but with medical images. In this case, we wanted to make a computed tomography (CT) image of the brain look like the corresponding magnetic resonance (MR) image of the brain. This is a regression task, and we used the MSE variant of the uncertainty loss to train a U-Net modified to have spatial dropout after every layer (see here for a discussion as to why spatial dropout) and to output two images instead of only one: the estimated MR image and the pixel-wise variance.

Example inputs and outputs are shown in Fig. 6. The CT image on the far left has an anomaly in the left hemisphere of the occipital lobe (lower-left of the brain in the image; it is more easily visualized in the corresponding MR image to the right). The DNN was only trained on healthy images, so the DNN should be ignorant of such anomalous data, and it should reflect this — according to the theory of epistemic uncertainty as previously discussed — by having high sample variance (that is, high epistemic uncertainty) in that region.

Fig. 6: From left to right: the CT image input to the network; the MR image that is the target in training (shown here because the anomaly is visible in the occipital lobe of the left hemisphere); pixel-wise epistemic uncertainty; pixel-wise aleatory uncertainty; and the pixel-wise ratio of epistemic over aleatory uncertainty, shown under the title “Scibilic,” which clearly highlights the anomalous region.

When this image was input to the network, we calculated the epistemic and aleatory uncertainty. The anomaly is clearly highlighted in the epistemic uncertainty, but there are many other regions which are also predicted to have high epistemic uncertainty. If we take the pixel-wise ratio of epistemic and aleatory uncertainty, we get the image shown on the far-right, labeled “Scibilic” (which is discussed more in the pre-print). This image is easily thresholded to predict the anomaly (the out-of-distribution region of the image).

This method of anomaly detection is by no means foolproof. It is quite fickle actually, but it shows a way to apply this type of uncertainty estimation to real-world data.

Takeaways

Uncertainty estimates in machine learning have the potential to reduce the risk of deploying models in high-stakes scenarios. Aleatory and epistemic uncertainty estimates can show the user or developer different information about the performance of a DNN and can be used to modify the system for better safety. We discussed and implemented one approach to uncertainty estimation with dropout. The approach is not perfect, but dropout-based uncertainty provides a way to get some (often reasonable) measure of uncertainty. Whether the measure is trustworthy enough to be used in deployment is another matter. The question practitioners should ask themselves when implementing this method is whether the resulting model with uncertainty estimates is more useful (for example, safer) than a model without uncertainty estimates.

Translated from: https://towardsdatascience.com/knowing-known-unknowns-with-deep-neural-networks-caac1c4c1f5d
