[Original] A Survey of Uncertainty in Neural Networks (Part V): Uncertainty Measures and Quality

(Survey Part V) Uncertainty measures and quality

2024-05-22 13:24:13 · 835 views

[Original] A Survey of Uncertainty in Neural Networks (Part IV): Uncertainty Estimation - Ensemble Methods & Test-Time Augmentation

(Survey Part IV) Uncertainty estimation - Ensemble & Test-time augmentation

2024-05-22 13:02:57 · 719 views

[Original] A Survey of Uncertainty in Neural Networks (Part III): Uncertainty Estimation - Bayesian Neural Networks

(Survey Part III) Uncertainty estimation - BNN

2024-05-22 12:54:01 · 960 views

[Original] A Survey of Uncertainty in Neural Networks (Part II): Uncertainty Estimation - Single Deterministic Methods

(Survey Part II) Uncertainty estimation - Deterministic methods

2024-05-22 12:33:05 · 1330 views

[Original] A Survey of Uncertainty in Neural Networks (Part I): A survey of uncertainty in deep neural networks

As neural networks are deployed ever more widely in the real world, the confidence of their predictions becomes increasingly important, especially in high-stakes domains such as medical image analysis and autonomous driving. A plain neural network, however, has no built-in confidence estimation and is typically prone to over-confidence or under-confidence. To address this, researchers have turned to quantifying the uncertainty in neural network predictions, defining different types and sources of uncertainty as well as techniques for quantifying it (a small illustrative sketch follows this entry).

2024-05-22 11:08:50 · 1547 views
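
To make the idea concrete, here is a minimal, self-contained sketch (not code from the survey or from the post): it uses Monte Carlo dropout, one of the estimation techniques covered later in the series, to average several stochastic forward passes of a toy classifier and reports predictive entropy as the uncertainty score. The architecture, layer sizes, and sample count are arbitrary assumptions chosen only for illustration.

```python
# Minimal sketch: Monte Carlo dropout + predictive entropy as an uncertainty measure.
# The toy architecture and all sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 3),                      # 3-class toy classifier
)

def mc_dropout_predict(model, x, n_samples=30):
    """Average softmax outputs over stochastic forward passes with dropout kept on."""
    model.train()                          # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0)               # shape: (batch, n_classes)

def predictive_entropy(p, eps=1e-12):
    """Higher entropy means a more uncertain prediction."""
    return -(p * (p + eps).log()).sum(dim=-1)

x = torch.randn(4, 16)                     # four random toy inputs
p = mc_dropout_predict(model, x)
print(predictive_entropy(p))               # one uncertainty score per input
```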

[Original] Mechanism Explanation of Neural Networks [NeurIPS 2023 oral]

A NeurIPS 2023 oral paper on the mechanism explanation of neural networks.

2024-04-25 21:51:38 · 877 views

[Original] Less is More: Fewer Interpretable Region via Submodular Subset Selection (ICLR 2024, oral)

[ICLR 2024 oral] Submodular subset selection, attribution methods.

2024-02-19 00:07:59 · 1475 views · 2 comments

[Original] Faithful Vision-Language Interpretation via Concept Bottleneck Models (FVLC)

[Interpretable Deep Learning] Faithful Vision-Language Interpretation via Concept Bottleneck Models (FVLC)

2024-02-18 00:59:26 · 1083 views

[Original] Evidential Deep Learning to Quantify Classification Uncertainty

A classic paper on Evidential Deep Learning.

2023-12-29 15:09:08 · 1174 views

[Original] Probabilistic Concept Bottleneck Models (ProbCBM)

[Interpretable Deep Learning] Concept-based models: ProbCBM

2023-12-26 17:37:35 · 824 views

[Original] Label-Free Concept Bottleneck Models (Label-free CBM)

[Interpretable Deep Learning] Concept-based models: Label-free CBM

2023-12-26 17:33:28 · 1007 views · 2 comments

[Original] Post-hoc Concept Bottleneck Models (PCBM)

[Interpretable Deep Learning] Concept-based models: PCBM

2023-12-26 17:29:22 · 988 views

[Original] Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off (CEM)

[Interpretable Deep Learning] Concept-based models: CEM

2023-12-26 17:26:28 · 808 views

[Original] Concept Bottleneck Models (CBM)

[Interpretable Deep Learning] Concept-based models: CBM

2023-12-26 17:20:24 · 1338 views

[Original] Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors

[Interpretable Deep Learning] Concept-based models: TCAV

2023-12-26 17:14:50 · 1028 views

[Original] Towards Robust Interpretability with Self-Explaining Neural Networks (SENN)

[Interpretable Deep Learning] Concept-based models: SENN

2023-12-26 17:10:26 · 935 views

[Original] Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation

Semi-supervised volumetric medical image segmentation built on adversarial learning, multi-view learning, and uncertainty estimation; published in Nature Machine Intelligence 2023.

2023-12-26 16:59:21 · 1655 views · 1 comment

[Original] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

[Interpretable Deep Learning] An introduction to post-hoc explanation methods: Saliency Maps

2023-12-23 15:09:29 · 919 views

[Original] Visualizing and Understanding Convolutional Networks

[Interpretable Deep Learning] An introduction to post-hoc explanation methods: DeconvNet

2023-12-23 04:29:08 · 995 views

[Original] [Towards Interpretable Deep Learning] Concept-based Models

[Interpretable Deep Learning] Reading notes on concept-based models.

2023-12-18 20:49:48 · 953 views

[Original] MIT 6.S191: Evidential Deep Learning - Study Notes

MIT open course: Evidential Deep Learning.

2023-12-15 01:47:50 · 1922 views · 5 comments

[Original] nnUNet_v2 (Linux)

Setting up and using nnUNet.

2023-12-09 16:54:47 · 2761 views · 2 comments
