Interpretability Analysis of ResNet with InterpretDL

Interpreting a ResNet Model with LRP (Layer-wise Relevance Propagation)

Perform interpretability analysis on ResNet and VGG networks using Layer-wise Relevance Propagation.

Star and Fork!

All code and tutorials in this project come from PaddlePaddle's InterpretDL repository. Stars and Forks are welcome!

InterpretDL: Interpretation of Deep Learning Models based on PaddlePaddle

InterpretDL, short for interpretations of deep learning models, is a model interpretation toolkit for PaddlePaddle models. This toolkit contains implementations of many interpretation algorithms, including LIME, Grad-CAM, Integrated Gradients and more. Some SOTA and new interpretation algorithms are also implemented.

InterpretDL is under active construction and all contributions are welcome!

Clone the InterpretDL Repository

# Cloning from GitHub is slow, so a zip archive of the repository is provided instead
# !git clone https://github.com/PaddlePaddle/InterpretDL.git

%cd ~
!unzip -oq data/data98915/InterpretDL.zip
%cd InterpretDL/tutorials

/home/aistudio
/home/aistudio/InterpretDL/tutorials

# Install dependencies
!pip install scikit-image
!python setup.py install

Layer-wise Relevance Propagation (LRP) is an explanation technique applicable to models structured as neural networks, where inputs can be e.g. images, videos, or text.

LRP operates by propagating the prediction $f(x)$ backwards through the neural network, by means of purposely designed local propagation rules.

LRP rules can easily be expressed in terms of matrix-vector operations. In practice, however, state-of-the-art neural networks such as ResNet make use of more complex layers such as convolutions and pooling.

In this case, LRP rules are more conveniently implemented by casting the operations of the four-step procedure above as forward and gradient evaluations on these layers. These operations are readily available in neural network frameworks such as PaddlePaddle and PyTorch, and can therefore be reused for the purpose of implementing LRP.

Here, we take a pretrained ResNet network for image classification.
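Below is a minimal sketch of one such propagation step expressed as a forward pass plus a gradient evaluation, assuming PaddlePaddle 2.x. The choice of layer (`conv1`), the tensor shapes, and the placeholder activation and relevance tensors are illustrative assumptions only; this is not InterpretDL's actual implementation, which applies rules of this kind layer by layer across the whole network.

```python
# Minimal sketch of one LRP step via forward + gradient evaluation (PaddlePaddle 2.x assumed).
import paddle
from paddle.vision.models import resnet50

model = resnet50(pretrained=True)        # pretrained ResNet for image classification
model.eval()
layer = model.conv1                      # any conv/pooling layer; conv1 chosen only for illustration

a = paddle.rand([1, 3, 224, 224])        # placeholder input activation of this layer
a.stop_gradient = False
R = paddle.rand([1, 64, 112, 112])       # placeholder relevance arriving from the layer above

z = layer(a) + 1e-9                      # step 1: forward pass (small term avoids division by zero)
s = (R / z).detach()                     # step 2: element-wise division of relevance by activations
c = paddle.grad((z * s).sum(), a)[0]     # step 3: gradient of <z, s> with respect to the input
R_lower = a * c                          # step 4: relevance redistributed to the lower layer
```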

LRP Rules for Deep Rectifier Networks

Basic Rule (LRP-0)

This rule redistributes relevance in proportion to the contributions of each input to the neuron activation, as expressed below:

$$R_j = \sum_k \frac{a_j w_{jk}}{\sum_{0,j} a_j w_{jk}} R_k$$
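As a sanity check of the formula, here is a small NumPy sketch for a single dense layer with made-up numbers (all values are hypothetical); with no bias term, the total relevance is conserved during redistribution.

```python
import numpy as np

a   = np.array([1.0, 2.0, 0.5])            # lower-layer activations a_j (toy values)
W   = np.array([[ 0.5, -0.2],
                [ 0.1,  0.4],
                [-0.3,  0.6]])             # weights w_jk: 3 inputs -> 2 upper-layer neurons
R_k = np.array([1.0, 2.0])                 # relevance of the upper-layer neurons

z   = a @ W                                # denominators: sum_j a_j w_jk for each neuron k
R_j = (a[:, None] * W) @ (R_k / z)         # LRP-0 redistribution from the equation above

print(R_j)                                 # per-input relevance
print(R_j.sum(), R_k.sum())                # equal: relevance is conserved (no bias term here)
```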

Epsilon Rule (LRP-$\epsilon$)

A first enhancement of the basic LRP-0 rule consists of adding a small positive term in the denominator:
$$R_j = \sum_k \frac{a_j w_{jk}}{\epsilon + \sum_{0,j} a_j w_{jk}} R_k$$
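Continuing the toy example above, the epsilon rule only changes the denominator. The stabilizer absorbs a small amount of relevance, so the sums no longer match exactly; the value of `eps` below is arbitrary.

```python
eps     = 0.25                             # small positive stabilizer (arbitrary toy value)
z_eps   = eps + a @ W                      # stabilized denominator from the equation above
R_j_eps = (a[:, None] * W) @ (R_k / z_eps) # LRP-epsilon redistribution

print(R_j_eps.sum(), R_k.sum())            # slightly different: eps absorbs some relevance
```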
