Paper reading (Feb 25): an optimization problem used in image processing

Two-Sided Sparse Learning with Augmented Lagrangian Method

This paper addresses the sparse learning problem that arises in image-related tasks such as classification, image denoising, and image inpainting, or in any problem that uses a dictionary. It introduces a two-sided sparse learning method that captures more information during sparse feature selection, and it solves the resulting optimization problem with ADMM and the augmented Lagrangian method.
https://www.researchgate.net/publication/331192514_Two-Sided_Sparse_Learning_with_Augmented_Lagrangian_Method

1. Sparse learning model

In this paper, the model is applied to a classification problem.

We use $\Phi \in R^{m\times n}$ ($m \ll n$) to denote the training data matrix consisting of $n$ input samples whose classes are known in advance.
Given an arbitrary sample $y \in R^m$, the sparse learning model aims to find the sparse representation $x$ of $y$ under $\Phi$:
$$y = \Phi x$$
where $x \in R^n$ and the number of nonzero elements in $x$ should be no more than a specified threshold $k$.

For computational convenience, the optimization problem can be written as:
$$\min_x ||x||_1, \quad \text{s.t. } y = \Phi x$$
where the sparsity constraint on $x$ is enforced through the L1 norm, a convex surrogate for directly counting nonzero entries.
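As a concrete illustration (not from the paper), this L1-relaxed problem is often solved in its unconstrained Lagrangian form $\min_x \frac{1}{2}||y - \Phi x||_2^2 + \lambda||x||_1$. The ISTA sketch below does exactly that with numpy; the random dictionary, `lam`, and the iteration count are all illustrative choices:

```python
import numpy as np

def ista_sparse_code(Phi, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1 with ISTA,
    a common L1 relaxation of `min ||x||_1 s.t. y = Phi x`."""
    m, n = Phi.shape
    x = np.zeros(n)
    # Step size 1/L, where L = sigma_max(Phi)^2 bounds the gradient's Lipschitz constant.
    L = np.linalg.norm(Phi, 2) ** 2
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)   # gradient of the smooth data-fit term
        z = x - grad / L               # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Usage: random overcomplete dictionary (m << n) and a k-sparse signal.
rng = np.random.default_rng(0)
m, n, k = 30, 100, 5
Phi = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true
x_hat = ista_sparse_code(Phi, y, lam=0.05)
print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```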

2. Two-Sided Sparse Learning

Instead of only considering the column-wise sparsity, this two-sided sparse learning model also takes the row-wise sparsity of features into account.

Two-sided sparsity can therefore reduce the reconstruction error and identify the most representative features.

The proposed model is as follows:

$$y = D\Phi x$$
Notation:
$y \in R^m$: an arbitrary sample vector with $m$ features.
$\Phi \in R^{m\times n}$ ($m \ll n$): the dictionary consisting of $n$ training samples.
$D \in R^{m\times m}$: ensures the sparsity of the features.
$x \in R^n$: the sparse representation of $y$.

Both $x$ and $y$ are vectors.

In this work, we want to reconstruct $x$ from $y$.

Then our model can be transformed to:
$$\min \frac{1}{2}||Y - D\Phi X||_F^2 + \lambda_1||X||_1 + \lambda_2\Omega(D)$$
where $\lambda_1, \lambda_2 > 0$ and $\Omega(D)$ is a penalty term.
In this model, $X$ and $Y$ are matrices whose columns are the vectors $x$ and $y$.

For the penalty term $\Omega(D)$, both the L1 norm and the Frobenius norm are considered.
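One natural way to combine the two (an assumption here; this note does not spell out the exact form) is an elastic-net-style penalty:
$$\Omega(D) = ||D||_1 + \frac{\beta}{2}||D||_F^2, \quad \beta > 0,$$
where the L1 term promotes sparsity in $D$ and the Frobenius term keeps the solution stable.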

We use ADMM to solve this optimization problem, alternately solving for one variable while fixing the other at each iteration.


(1) First, fixing $D$, $X$ is updated by solving the L1-regularized subproblem:
$$\min_X \frac{1}{2}||Y - D\Phi X||_F^2 + \lambda_1||X||_1$$

(2) Then, fixing $X$, an auxiliary variable $Z$ is introduced and, according to ADMM, the subproblem can be written as:
$$\min \frac{1}{2}||Y - D\Phi X||_F^2 + \lambda_2\Omega(Z), \quad \text{s.t. } D - Z = 0$$
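Below is a minimal numpy sketch of this $D$-subproblem, assuming the elastic-net-style $\Omega$ above and the scaled-dual form of ADMM; the parameter values and the hypothetical helper `update_D_admm` are illustrative, not taken from the paper:

```python
import numpy as np

def update_D_admm(Y, A, lam2=0.1, beta=1.0, rho=1.0, n_iter=50):
    """ADMM sketch for the D-subproblem with X fixed, where A = Phi @ X:

        min_D  0.5*||Y - D A||_F^2 + lam2*Omega(Z)   s.t.  D - Z = 0

    with Omega(Z) = ||Z||_1 + (beta/2)*||Z||_F^2 (an assumed form combining
    the L1 and Frobenius penalties); rho is the augmented-Lagrangian parameter.
    """
    m = Y.shape[0]
    D = np.zeros((m, m))
    Z = np.zeros((m, m))
    U = np.zeros((m, m))            # scaled dual variable
    G = A @ A.T + rho * np.eye(m)   # constant system matrix
    for _ in range(n_iter):
        # D-update: closed-form ridge-type least squares,
        # solving D (A A^T + rho I) = Y A^T + rho (Z - U).
        D = np.linalg.solve(G, (Y @ A.T + rho * (Z - U)).T).T
        # Z-update: prox of lam2*Omega, i.e. soft-thresholding
        # followed by shrinkage from the Frobenius term.
        V = D + U
        Z = np.sign(V) * np.maximum(np.abs(V) - lam2 / rho, 0.0)
        Z *= rho / (rho + lam2 * beta)
        # Dual ascent on the constraint D - Z = 0.
        U += D - Z
    return D

# Usage with random data of matching shapes (illustrative only).
rng = np.random.default_rng(0)
m, n, N = 20, 80, 50
Phi = rng.standard_normal((m, n))
X = rng.standard_normal((n, N))
Y = rng.standard_normal((m, N))
D = update_D_admm(Y, Phi @ X)
print("D shape:", D.shape)
```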
