Reading notes: DCDicL: Deep Collaborative Dictionary Learning for Image Denoising

I have been reading denoising papers recently and noticed that some of them have no write-ups available, so I am leaving my notes here to help others read the paper quickly.

 

Y is the feature representation of N samples (i.e., the latent representation after feature extraction; see the ChatGPT explanation below). The coefficients X express each sample in Y as a combination of the atoms of the dictionary D.
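A quick sketch of this model (the shapes and variable names are my own assumptions, not taken verbatim from the paper): columns of Y are samples, columns of D are dictionary atoms, and each column of X holds the coefficients that mix atoms into a sample, so Y ≈ D X. In the noiseless, over-determined case the coefficients can be recovered by least squares:

```python
import numpy as np

# Toy dictionary model Y ~= D @ X: columns of Y are sample features,
# columns of D are dictionary atoms, columns of X are coefficients.
rng = np.random.default_rng(0)

d, k, n = 8, 5, 10                    # feature dim, atoms, samples (toy sizes)
D = rng.standard_normal((d, k))       # dictionary
X_true = rng.standard_normal((k, n))  # ground-truth coefficients
Y = D @ X_true                        # noiseless samples

# Least-squares coding: recover X from Y and D (D has full column rank)
X_hat, *_ = np.linalg.lstsq(D, Y, rcond=None)
```

With noise added to Y this exact recovery becomes a regularized estimate, which is where the priors discussed later come in.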

Method

For each input image, training solves for both X and D; the hyper-parameters used during these updates are predicted by HeadNet.
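A minimal alternating-minimization sketch of that X/D loop. In DCDicL the per-step hyper-parameters come from HeadNet; here they are replaced by fixed constants `lam` and `mu` chosen purely for illustration, with simple ridge penalties standing in for the paper's learned priors:

```python
import numpy as np

# Alternating minimization: fix D and solve for X, then fix X and solve
# for D. With quadratic (ridge) penalties both subproblems are closed form.
rng = np.random.default_rng(1)
d, k, n = 16, 8, 32
Y = rng.standard_normal((d, n))   # stand-in for the input's features
D = rng.standard_normal((d, k))   # random initial dictionary
lam, mu = 0.1, 0.1                # assumed regularization weights

objs = []
for _ in range(20):
    # X-step: ridge-regularized least squares in X, closed form
    X = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ Y)
    # D-step: ridge-regularized least squares in D, closed form
    D = (Y @ X.T) @ np.linalg.inv(X @ X.T + mu * np.eye(k))
    # each exact minimization can only decrease this joint objective
    objs.append(np.linalg.norm(Y - D @ X) ** 2
                + lam * np.linalg.norm(X) ** 2
                + mu * np.linalg.norm(D) ** 2)
```

Because each step exactly minimizes the objective over one variable, the recorded objective is non-increasing, which is the basic guarantee such alternating schemes rely on.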

Section 3.2

(Important sentences in the paper are highlighted in yellow in my copy.)

X has a closed-form solution that can be computed via FFT.
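A toy illustration of why the FFT gives a closed form (one filter, 1-D, circular convolution; the paper's actual subproblem is more general): convolution diagonalizes in the Fourier domain, so a ridge problem of the form min_x ||d * x - y||^2 + lam ||x||^2 splits into independent per-frequency scalar problems:

```python
import numpy as np

# In the Fourier domain the ridge solution is element-wise:
#   X(w) = conj(D(w)) * Y(w) / (|D(w)|^2 + lam)
# Toy sizes and lam are my own choices for illustration.
rng = np.random.default_rng(2)
n, lam = 64, 1e-6
d = rng.standard_normal(n)        # filter
x_true = rng.standard_normal(n)   # coefficients to recover
y = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x_true)))  # y = d * x (circular)

Df, Yf = np.fft.fft(d), np.fft.fft(y)
x_hat = np.real(np.fft.ifft(np.conj(Df) * Yf / (np.abs(Df) ** 2 + lam)))
```

The whole solve costs a few FFTs plus an element-wise division, instead of a dense matrix inversion.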

'''

Q: Introduce the paper:

ChatGPT:

DCDicL is an image denoising method based on deep learning and collaborative dictionary learning. The paper proposes a deep neural network architecture for image denoising, where the network is trained in a collaborative manner to learn a set of dictionaries from different layers of the network. The dictionaries are used to represent the noise-free and noisy images, and the denoising process is achieved by finding the sparse representations of the noisy images using these dictionaries. The performance of DCDicL is evaluated on several benchmark datasets, and the results show that it outperforms state-of-the-art image denoising methods in terms of both quantitative measures and visual quality.

Q: What are stretched image patch vectors?

ChatGPT:

In the paper "DCDicL: Deep Collaborative Dictionary Learning for Image Denoising," stretched image patch vectors refer to the feature representation of an image patch that is obtained by concatenating the vectorized form of the patch with its flipping and rotation variations. This is done to increase the number of training samples and to better capture the geometric transformations present in the image. The stretched image patch vectors are then used as inputs to the deep collaborative dictionary learning algorithm, which learns a shared dictionary to represent the features of noisy and clean image patches. The learned dictionary is then used to denoise the input images.

Q: What is prior in "prior on coefficient X"?

A: In the context of the paper "DCDicL: Deep Collaborative Dictionary Learning for Image Denoising", "prior on coefficient X" refers to a constraint or assumption made about the coefficients X that are used to represent the image patch vector. The coefficients X are obtained by applying a dictionary learning algorithm on the stretched image patch vectors. The prior on coefficient X could be a statistical prior, such as a Gaussian distribution, or it could be a regularization term in the optimization problem that is used to reconstruct the original image. The purpose of the prior is to encourage certain properties in the coefficients X that are beneficial for denoising the image, such as sparsity or smoothness.

Q: What does "prior" mean?

A: In statistics and machine learning, a prior is a probability distribution over the possible values of a parameter or variables. It represents prior knowledge or beliefs about the value of a parameter before taking into account new data. In Bayesian inference, the prior is combined with the likelihood to produce the posterior, which represents the updated belief after taking into account the new data. In the context of deep learning, priors may be used to regularize models, for example, by adding constraints to the parameters during training to encourage certain properties or behaviors.

'''
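The "prior on coefficient X" answer above mentions sparsity as a typical property the prior encourages. A standard way a sparsity (l1) prior enters such optimization problems is through its proximal operator, element-wise soft-thresholding; this is a generic illustration, not DCDicL's specific prior (the paper learns its priors with networks):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t,
    zeroing out anything with magnitude below t (this creates sparsity)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.5, 1.5])
x = soft_threshold(v, 0.5)   # small entries -0.3 and 0.5 are zeroed out
```

Iterating a gradient step on the data term followed by this shrinkage step is the classic ISTA recipe for sparse coding.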

Related reading:

CVPR 2021 paper roundup: image denoising (CVPR 2021 论文大盘点-图像去噪篇), oliverH's tech blog on 51CTO
