I have been reading denoising papers recently and noticed that some of them have no write-ups explaining them, so I am leaving my own notes here to help others get through them quickly.
Y is the feature representation of N samples (i.e. the latent representation of the extracted features; see the ChatGPT explanation below). The coefficients X express each sample in Y in terms of the dictionary atoms in D.
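In matrix form this is the generic (non-convolutional) dictionary view; the paper itself works with convolutional dictionaries, so treat the shapes below only as an illustration of the idea:

```latex
\mathbf{Y} \approx \mathbf{D}\mathbf{X},\qquad
\mathbf{Y}\in\mathbb{R}^{n\times N},\;
\mathbf{D}\in\mathbb{R}^{n\times K},\;
\mathbf{X}\in\mathbb{R}^{K\times N}
```

Each column of Y is one sample's feature vector, the columns of D are the dictionary atoms, and the corresponding column of X holds that sample's coefficients.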
Method
For each input image, training jointly solves for X and D; the hyperparameters used during training are predicted by a HeadNet.
Section 3.2
Important sentences are highlighted in yellow.
X has a closed-form solution that can be computed via the FFT.
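A minimal sketch of what "closed-form via FFT" means here, for the simplest single-channel, single-filter case: circular convolution diagonalizes in the Fourier domain, so the data term plus a quadratic proximal term can be minimized frequency by frequency. This is only an illustration under my own simplifying assumptions, not the paper's implementation (which handles multiple filters and channels, and gets the weight alpha from HeadNet):

```python
import numpy as np

def solve_x_fft(y, d, x_prior, alpha):
    """Per-frequency closed-form minimizer of
        ||y - d (*) x||^2 + alpha * ||x - x_prior||^2,
    where (*) denotes circular 2-D convolution.

    Single-channel, single-filter illustration; variable names are placeholders.
    """
    H, W = y.shape
    D = np.fft.fft2(d, s=(H, W))   # zero-pad the small filter to image size
    Y = np.fft.fft2(y)
    Xp = np.fft.fft2(x_prior)
    # The normal equations diagonalize in the Fourier domain, so each
    # frequency is solved independently by a scalar division.
    X = (np.conj(D) * Y + alpha * Xp) / (np.abs(D) ** 2 + alpha)
    return np.real(np.fft.ifft2(X))

# Toy usage with random data.
rng = np.random.default_rng(0)
y = rng.standard_normal((64, 64))        # noisy observation
d = rng.standard_normal((5, 5)) * 0.1    # one convolutional dictionary atom
x = solve_x_fft(y, d, x_prior=np.zeros_like(y), alpha=0.1)
```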
'''
Q: Introduce the paper:
ChatGPT:
DCDicL is an image denoising method based on deep learning and collaborative dictionary learning. The paper proposes a deep neural network architecture for image denoising, where the network is trained in a collaborative manner to learn a set of dictionaries from different layers of the network. The dictionaries are used to represent the noise-free and noisy images, and the denoising process is achieved by finding the sparse representations of the noisy images using these dictionaries. The performance of DCDicL is evaluated on several benchmark datasets, and the results show that it outperforms state-of-the-art image denoising methods in terms of both quantitative measures and visual quality.
Q: What are stretched image patch vectors?
ChatGPT:
In the paper "DCDicL: Deep Collaborative Dictionary Learning for Image Denoising," stretched image patch vectors refer to the feature representation of an image patch that is obtained by concatenating the vectorized form of the patch with its flipping and rotation variations. This is done to increase the number of training samples and to better capture the geometric transformations present in the image. The stretched image patch vectors are then used as inputs to the deep collaborative dictionary learning algorithm, which learns a shared dictionary to represent the features of noisy and clean image patches. The learned dictionary is then used to denoise the input images.
Q: What is the prior in "prior on coefficient X"?
A: In the context of the paper "DCDicL: Deep Collaborative Dictionary Learning for Image Denoising", "prior on coefficient X" refers to a constraint or assumption made about the coefficients X that are used to represent the image patch vector. The coefficients X are obtained by applying a dictionary learning algorithm on the stretched image patch vectors. The prior on coefficient X could be a statistical prior, such as a Gaussian distribution, or it could be a regularization term in the optimization problem that is used to reconstruct the original image. The purpose of the prior is to encourage certain properties in the coefficients X that are beneficial for denoising the image, such as sparsity or smoothness.
Q: What does "prior" mean?
A: In statistics and machine learning, a prior is a probability distribution over the possible values of a parameter or variables. It represents prior knowledge or beliefs about the value of a parameter before taking into account new data. In Bayesian inference, the prior is combined with the likelihood to produce the posterior, which represents the updated belief after taking into account the new data. In the context of deep learning, priors may be used to regularize models, for example, by adding constraints to the parameters during training to encourage certain properties or behaviors.
'''
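To connect ChatGPT's "prior" explanation back to the objective: with Gaussian noise Y = DX + E and a Gaussian prior on the entries of X, MAP estimation turns the prior into a quadratic penalty (a Laplacian prior would give an L1 sparsity term instead). In generic notation, not the paper's:

```latex
\hat{\mathbf{X}}
  = \arg\max_{\mathbf{X}}\; p(\mathbf{Y}\mid\mathbf{X},\mathbf{D})\,p(\mathbf{X})
  = \arg\min_{\mathbf{X}}\; \|\mathbf{Y}-\mathbf{D}\mathbf{X}\|_F^2 + \lambda\,\|\mathbf{X}\|_F^2,
\qquad \lambda = \sigma^2/\sigma_x^2
```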
Further reading: