Interpretability
Towards Interpretable and Trustworthy Deep Learning
exploreandconquer
Statistics student
[Towards Interpretable Deep Learning] Concept-based Models
[Interpretable Deep Learning] Reading notes on concept-based model papers · 2023-12-18
Mechanistic Explanation of Neural Networks [NeurIPS 2023 oral]
NeurIPS 2023 oral paper on the mechanistic explanation of neural networks · 2024-04-25
Less is More: Fewer Interpretable Region via Submodular Subset Selection (ICLR 2024, oral)
[ICLR 2024 oral] Submodular subset selection for attribution methods · 2024-02-19
Faithful Vision-Language Interpretation via Concept Bottleneck Models (FVLC)
[Interpretable Deep Learning] Concept-based models: FVLC · 2024-02-18
Towards Robust Interpretability with Self-Explaining Neural Networks (SENN)
[Interpretable Deep Learning] Concept-based models: SENN · 2023-12-26
Concept Bottleneck Models (CBM)
[Interpretable Deep Learning] Concept-based models: CBM · 2023-12-26
Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off (CEM)
[Interpretable Deep Learning] Concept-based models: CEM · 2023-12-26
Post-hoc Concept Bottleneck Models (PCBM)
[Interpretable Deep Learning] Concept-based models: PCBM · 2023-12-26
Probabilistic Concept Bottleneck Models (ProbCBM)
[Interpretable Deep Learning] Concept-based models: ProbCBM · 2023-12-26
Label-Free Concept Bottleneck Models (Label-free CBM)
[Interpretable Deep Learning] Concept-based models: Label-free CBM · 2023-12-26
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors
[Interpretable Deep Learning] Concept-based models: TCAV · 2023-12-26
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
[Interpretable Deep Learning] Post-hoc interpretability methods: Saliency Map · 2023-12-23
Visualizing and Understanding Convolutional Networks
[Interpretable Deep Learning] Post-hoc interpretability methods: DeconvNet · 2023-12-23
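Several of the posts above cover Concept Bottleneck Models (CBM and its variants). As a rough illustration of the shared idea, here is a minimal sketch in NumPy: predictions must pass through an intermediate layer of human-interpretable concept scores. All names, shapes, and weights below are hypothetical, invented for illustration, and are not taken from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 input features, 3 human-interpretable
# concepts, 2 output classes.
n_features, n_concepts, n_classes = 8, 3, 2

# Stage 1: input -> concept scores. The CBM forces every prediction
# through this interpretable bottleneck.
W_concept = rng.normal(size=(n_features, n_concepts))

# Stage 2: concept scores -> class logits. The label is computed
# from the concepts only, never directly from the raw input.
W_label = rng.normal(size=(n_concepts, n_classes))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbm_forward(x):
    """Predict concepts first, then the label from the concepts alone."""
    concepts = sigmoid(x @ W_concept)  # interpretable intermediate, in (0, 1)
    logits = concepts @ W_label
    return concepts, logits

x = rng.normal(size=(n_features,))
concepts, logits = cbm_forward(x)
print(concepts.shape, logits.shape)  # (3,) (2,)
```

Because the label depends only on `concepts`, a user can inspect or even manually edit the concept scores and see how the prediction changes, which is the interpretability mechanism these papers build on.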