In a previous post, we covered how important it is to deal with uncertainty in financial deep-learning forecasts. In this post, we attempt a first introduction to how we deal with explainability.
Neural networks have been applied to various tasks, including stock price prediction. Although highly successful, these models are frequently treated as black boxes: in most cases we know that the performance on the test data is satisfactory, but we do not know why the model came up with a specific output.
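To make the black-box problem concrete, here is a minimal sketch of one of the simplest "explanation" techniques: measuring how sensitive a model's output is to each input feature. The two-layer network and its random weights are purely hypothetical stand-ins for a trained forecasting model, not anything from this post; the finite-difference gradient plays the role a saliency method would play for a real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": a tiny two-layer network with random weights,
# standing in for a trained forecasting model (hypothetical, for illustration).
W1 = rng.normal(size=(8, 5))   # 5 input features -> 8 hidden units
W2 = rng.normal(size=(1, 8))   # 8 hidden units -> 1 scalar prediction

def model(x):
    h = np.tanh(W1 @ x)        # hidden layer activation
    return float(W2 @ h)       # scalar prediction

def sensitivity(x, eps=1e-5):
    """Finite-difference gradient of the output w.r.t. each input feature.

    Features with large absolute values are the ones the prediction is
    most sensitive to locally -- a crude first 'explanation' of the output.
    """
    grad = np.zeros_like(x)
    for i in range(len(x)):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[i] += eps
        x_lo[i] -= eps
        grad[i] = (model(x_hi) - model(x_lo)) / (2 * eps)
    return grad

x = rng.normal(size=5)         # one input sample
print(sensitivity(x))          # per-feature sensitivity scores
```

For real networks one would compute this gradient with automatic differentiation rather than finite differences, but the interpretation is the same: the gradient ranks which inputs drove this particular prediction.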
There are cases where an “explanation” of the model’s conclusion is desirable, if not necessary. Examples where explainability is a practical necessity include the diagnosis of medical conditions and the control of self-driving cars. In the first situation, an explanation of the model’s output can help the expert make the final decision and decide whether or not to trust the prediction. In the latter, such an “explanation” can help identify faulty decision-making processes that can have disastrous outcomes.