2024-04
NANCYGOODENOUGH
2024-04-22-2023-CIBM-Multi-Stage Hybrid Attention Network for MRI reconstruction and SR
The proposed MHAN has some limitations. First, in practice, this multi-task network may need to deal with masks and scaling factors different from those used in pre-training. Due to the limited memory of MR scanners, it is not cost-effective to train a model for… (Original post, 2024-04-22)
2024-04-22-Processing the IXI medical-image dataset
Excerpted from: 2023-CIBM-Multi-Stage Hybrid Attention Network for MRI reconstruction and SR. (Original post, 2024-04-22)
2024-04-22-Residual Dense Network和Residual Feature Aggregation Network
The motivation behind both RDB and RFA is that hierarchical feature representations should be fully used to learn local patterns. An RFA module contains several residual modules and mainly aggregates features from the residual branches. In contrast, the RDB colle… (Original post, 2024-04-22)
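The structural contrast between the two blocks can be sketched with plain arrays. This is a toy sketch only: `residual_block`, the tanh nonlinearity, and the weight shapes are illustrative assumptions, not the papers' layers — the real RDB and RFA use convolutions and 1x1 fusion convolutions.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w):
    """Toy residual module: returns (identity + branch, branch).
    `w` stands in for the module's learned weights."""
    branch = np.tanh(x @ w)
    return x + branch, branch

def rfa(x, weights, fuse_w):
    # RFA: residual modules run in sequence, but every residual-branch
    # output is carried to the end, concatenated, and fused by a
    # 1x1-conv-like linear map before the final skip connection.
    branches = []
    h = x
    for w in weights:
        h, b = residual_block(h, w)
        branches.append(b)
    fused = np.concatenate(branches, axis=-1) @ fuse_w
    return x + fused

def rdb(x, weights, fuse_w):
    # RDB: dense connectivity — each module sees the concatenation of the
    # block input and all preceding modules' outputs; a final fusion layer
    # collects every hierarchical feature before the local skip connection.
    feats = [x]
    for w in weights:  # each w maps the growing concat back to the base width
        inp = np.concatenate(feats, axis=-1)
        feats.append(np.tanh(inp @ w))
    fused = np.concatenate(feats, axis=-1) @ fuse_w
    return x + fused

# Shapes: with base width c and L modules, RFA fuses L*c channels, while
# RDB fuses (L+1)*c channels because the block input itself is kept.
c, L = 8, 3
x = rng.standard_normal((4, c))
rfa_ws = [rng.standard_normal((c, c)) * 0.1 for _ in range(L)]
rfa_fuse = rng.standard_normal((L * c, c)) * 0.1
rdb_ws = [rng.standard_normal(((i + 1) * c, c)) * 0.1 for i in range(L)]
rdb_fuse = rng.standard_normal(((L + 1) * c, c)) * 0.1
y_rfa = rfa(x, rfa_ws, rfa_fuse)
y_rdb = rdb(x, rdb_ws, rdb_fuse)
```

Both blocks preserve the feature width; the difference is purely which features are aggregated — residual-branch outputs in RFA versus the full dense history in RDB.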
2024-4-21-What distinguishes the loss functions used for super-resolution tasks?
MSE (mean-squared-error) loss penalizes large errors more heavily: when the difference between a prediction and the ground truth is large, its square grows significantly and therefore takes a disproportionately large share of the overall loss. This sensitivity to large errors means that outliers in the dataset (values markedly different from the other data points) exert an outsized influence on the loss value. On datasets containing outliers, MSE may therefore not be the best choice, since the model can over-focus on those outliers instead of learning the more general patterns that are useful across the whole dataset. (Original post, 2024-04-21)
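The outlier sensitivity described above is easy to demonstrate numerically. A minimal sketch, comparing MSE against MAE (L1) loss; the toy target and prediction values are made up for illustration:

```python
def mse(pred, target):
    # Mean squared error: each residual contributes its square.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def mae(pred, target):
    # Mean absolute error: each residual contributes linearly.
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

target  = [1.0, 1.0, 1.0, 1.0]
clean   = [1.1, 0.9, 1.1, 0.9]  # small errors everywhere
outlier = [1.1, 0.9, 1.1, 5.0]  # one large error

# The single outlier inflates MSE ~400x, but MAE only ~10x.
print(mse(clean, target), mse(outlier, target))  # 0.01 vs ~4.01
print(mae(clean, target), mae(outlier, target))  # 0.1  vs ~1.08
```

This is why L1-based losses (and robust variants such as the Charbonnier loss commonly used in super-resolution) are often preferred when large reconstruction errors or outliers are expected.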