Explainable AI: What Does It Mean for Data Scientists?

What is Explainable AI (XAI)?

“My dog accidentally knocked down the trash and found old cheesy pasta in it, and is now convinced that trash cans provide an endless supply of cheesy pasta, knocking it over every chance she gets.”

Sometimes, you may have seen your Machine Learning (ML) model do the same.

A notorious example is a neural network that was trained to differentiate between dogs and wolves. It never truly learned to tell the two animals apart; instead, it learned that all of the wolf pictures had snow in the background (the wolves' natural habitat), while the dog pictures had grass. The model then distinguished the two animals simply by checking whether the background was snow or grass.

What if a dog were photographed on snow and a wolf on grass? The model would make the wrong prediction.

Why is Explainable AI important?

Why should anyone care about a model misclassifying dogs and wolves?

A simple model wrongly identifying a dog as a wolf is not scary, right? But imagine a self-driving car wrongly identifying a person as an object and running them over.

When AI is used in applications like self-driving cars and robot assistants, the machine should not only learn the way a human brain does but should also be able to explain and reason about its decisions the way humans do.

Achieving this would be a big leap in the evolution of AI systems and would allow humans to place greater trust in them. XAI is a vast subject and one of the hottest topics in AI/ML research, in both academia and industry.

Explainability of Convolutional Neural Networks

Curious about how ML models learn, we set out to understand how a deep neural network functions.

Neural networks have long been called “black-box models” because the large number of interacting, non-linear parts makes it hard to understand how they work.

  • Starting with the MNIST digit-recognition data set, we built a simple CNN model and visualized how the CNN filters are trained and which feature each filter detects (see the sketch after this list).
  • We visualized the activation values in each fully connected layer and identified the set of neurons that get activated for each digit.
  • We then used Activation Maximization and Saliency Maps to explain which parts of the input image are most critical for the model to classify the input correctly.
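
The original notebook is not reproduced here, but a minimal sketch of the first two steps, training a small CNN on MNIST, plotting its first-layer convolution filters, and probing a fully connected layer's activations, could look like the following. The architecture, the layer names conv1 and fc1, and the plotting choices are illustrative assumptions, not the article's actual code.

    # Minimal sketch (not the authors' notebook): train a small CNN on MNIST,
    # look at the learned first-layer filters, and probe a dense layer's activations.
    import numpy as np
    import matplotlib.pyplot as plt
    from tensorflow import keras
    from tensorflow.keras import layers

    # Load and normalize MNIST (28x28 grayscale digits).
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train[..., np.newaxis].astype("float32") / 255.0
    x_test = x_test[..., np.newaxis].astype("float32") / 255.0

    # A deliberately small CNN; the architecture used in the article is not specified.
    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(8, kernel_size=3, activation="relu", name="conv1"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu", name="fc1"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)

    # Visualize the trained first-layer filters; the kernel tensor has shape (3, 3, 1, 8).
    filters, _ = model.get_layer("conv1").get_weights()
    fig, axes = plt.subplots(1, filters.shape[-1], figsize=(12, 2))
    for i, ax in enumerate(axes):
        ax.imshow(filters[:, :, 0, i], cmap="gray")
        ax.set_title(f"filter {i}")
        ax.axis("off")
    plt.show()

    # Which fc1 neurons fire for a given digit? Build a sub-model that outputs fc1.
    fc1_model = keras.Model(model.input, model.get_layer("fc1").output)
    fc1_activations = fc1_model(x_test[:1]).numpy()[0]          # shape (64,)
    print("most active fc1 neurons:", np.argsort(fc1_activations)[::-1][:5])

Repeating the fc1 probe over many examples of each digit is one simple way to see which neurons consistently activate for a given class.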

The Python deep learning package Keras offers a number of built-in methods to help you visualize models. These are existing techniques, and the code can be found in the Python notebook here.
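
As a concrete illustration of one such technique, below is a hedged sketch of a vanilla gradient saliency map computed with tf.GradientTape: the gradient of the winning class score with respect to the input pixels highlights which pixels the prediction is most sensitive to. It reuses the model and x_test from the sketch above; the linked notebook may implement saliency differently (for example, via a dedicated visualization library).

    # Sketch of a vanilla-gradient saliency map for the CNN trained above.
    # Not the notebook's exact code; `model` and `x_test` come from the previous sketch.
    import tensorflow as tf
    import matplotlib.pyplot as plt

    image = tf.convert_to_tensor(x_test[:1])        # one test digit, shape (1, 28, 28, 1)

    with tf.GradientTape() as tape:
        tape.watch(image)                           # track gradients w.r.t. the input pixels
        predictions = model(image, training=False)
        top_class = tf.argmax(predictions[0])
        class_score = tf.gather(predictions[0], top_class)

    # |d(score)/d(pixel)|: large values mark the pixels the class score depends on most.
    gradients = tape.gradient(class_score, image)
    saliency = tf.reduce_max(tf.abs(gradients), axis=-1)[0]

    plt.subplot(1, 2, 1)
    plt.imshow(x_test[0, ..., 0], cmap="gray")
    plt.title("input digit")
    plt.subplot(1, 2, 2)
    plt.imshow(saliency, cmap="hot")
    plt.title("saliency")
    plt.show()

Activation Maximization goes the other way: instead of explaining one input, it optimizes a synthetic input to maximally excite a chosen neuron or class score.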

Note: We continued our research into understanding CNNs using proprietary methods and code that cannot be shared here.

Takeaway for a Data Scientist

  • Although a data scientist usually fine-tunes existing ML algorithms to solve a business problem, not treating the ML model as a black box and trying to understand how it works will take you a long way in your career.
  • This project helped me demystify what is inside the CNN black box and how it works. I intend to explore concepts like Occlusion Maps and Attention to understand ML models better.

Translated from: https://medium.com/analytics-vidhya/explainable-ai-what-is-it-why-is-it-important-41f062207235
