RPCA and LRR

Reposted from: https://blog.csdn.net/tiandijun/article/details/44917237

RPCA

Blog posts on RPCA:

Original (English): http://blog.csdn.net/abcjennifer/article/details/8572994

Chinese translation: http://blog.csdn.net/u010545732/article/details/19066725

A summary of data dimensionality reduction (RPCA, LRR, LE, etc.):
http://download.csdn.net/detail/tiandijun/8569653

Low-rank subspace recovery: http://download.csdn.net/detail/tiandijun/8569675

LRR

Tutorials

  1. Low-Rank Matrix Recovery: From Theory to Imaging Applications
    John Wright, Zhouchen Lin, and Yi Ma. Presented at International Conference on Image and Graphics (ICIG), August 2011. 
  2. Low-Rank Matrix Recovery
    John Wright, Zhouchen Lin, and Yi Ma. Presented at IEEE International Conference on Image Processing (ICIP), September 2010.


Theory

  1. Robust Principal Component Analysis?
    Emmanuel Candès, Xiaodong Li, Yi Ma, and John Wright. Journal of the ACM, volume 58, no. 3, May 2011. 
  2. Dense Error Correction via L1-Minimization
    John Wright, and Yi Ma. IEEE Transactions on Information Theory, volume 56, no. 7, July 2010. 
  3. Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization
    John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. In Proceedings of Neural Information Processing Systems (NIPS), December 2009. 
  4. Stable Principal Component Pursuit
    Zihan Zhou, Xiaodong Li, John Wright, Emmanuel Candès, and Yi Ma. In Proceedings of IEEE International Symposium on Information Theory (ISIT), June 2010. 
  5. Dense Error Correction for Low-Rank Matrices via Principal Component Pursuit
    Arvind Ganesh, John Wright, Xiaodong Li, Emmanuel Candès, and Yi Ma. In Proceedings of IEEE International Symposium on Information Theory (ISIT), June 2010. 
  6. Principal Component Pursuit with Reduced Linear Measurements
    Arvind Ganesh, Kerui Min, John Wright, and Yi Ma. Submitted to International Symposium on Information Theory (ISIT), 2012. 
  7. Compressive Principal Component Pursuit
    John Wright, Arvind Ganesh, Kerui Min, and Yi Ma. Submitted to International Symposium on Information Theory (ISIT), 2012.
Code

Robust PCA

We provide MATLAB packages to solve the RPCA optimization problem by different methods. All of our code below is Copyright 2009 Perception and Decision Lab, University of Illinois at Urbana-Champaign, and Microsoft Research Asia, Beijing. We also provide links to some publicly available packages to solve the RPCA problem. Please contact John Wright or Arvind Ganesh if you have any questions or comments. If you are looking for the code to our RASL and TILT algorithms, please refer to the applications section.
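
For reference, the optimization problem these packages solve (also known as Principal Component Pursuit) decomposes a data matrix D into a low-rank component A and a sparse error component E:

    minimize ||A||_* + λ ||E||_1   subject to   D = A + E,

where ||A||_* is the nuclear norm of A (the sum of its singular values) and ||E||_1 is the sum of the absolute values of the entries of E.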

  1. Augmented Lagrange Multiplier (ALM) Method [exact ALM - MATLAB zip] [inexact ALM - MATLAB zip]
    Usage - The most basic form of the exact ALM function is [A, E] = exact_alm_rpca(D, λ), and that of the inexact ALM function is [A, E] = inexact_alm_rpca(D, λ), where D is a real matrix and λ is a positive real number. We solve the RPCA problem using the method of augmented Lagrange multipliers, which converges Q-linearly to the optimal solution. The exact ALM algorithm is simple to implement; each iteration involves computing a partial SVD of a matrix the size of D, and it converges to the true solution in a small number of iterations. The algorithm can be further sped up by using a fast continuation technique, yielding the inexact ALM algorithm (a minimal sketch of the inexact ALM iteration appears after this list). 
    Reference - The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, Z. Lin, M. Chen, L. Wu, and Y. Ma (UIUC Technical Report UILU-ENG-09-2215, November 2009). 
  2. Accelerated Proximal Gradient [full SVD version - MATLAB zip] [partial SVD version - MATLAB zip]
    Usage - The most basic form of the full SVD version of the function is [A, E] = proximal_gradient_rpca(D, λ), where D is a real matrix and λ is a positive real number. We consider a slightly different version of the original RPCA problem, relaxing the equality constraint. The algorithm is simple to implement; each iteration involves computing the SVD of a matrix the size of D, and it converges to the true solution in a small number of iterations. The algorithm can be further sped up by computing partial SVDs at each iteration. The most basic form of the partial SVD version of the function is [A, E] = partial_proximal_gradient_rpca(D, λ), where D is a real matrix and λ is a positive real number. 
    Reference - Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-Rank Matrix, Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma (UIUC Technical Report UILU-ENG-09-2214, August 2009). 
  3. Dual Method [MATLAB zip]
    Usage - The most basic form of the function is [A, E] = dual_rpca(D, λ), where D is a real matrix and λ is a positive real number. We solve the convex dual of the RPCA problem, and retrieve the low-rank and sparse error matrices from the dual optimal solution. The algorithm computes only a partial SVD in each iteration and hence scales well with the size of the matrix D.
    Reference - Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-Rank Matrix, Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma (UIUC Technical Report UILU-ENG-09-2214, August 2009). 
  4. Singular Value Thresholding [MATLAB zip]
    Usage - The most basic form of the function is [A, E] = singular_value_rpca(D, λ), where D is a real matrix and λ is a positive real number. Here again, we solve a relaxation of the original RPCA problem, albeit different from the one solved by the Accelerated Proximal Gradient (APG) method. The algorithm is extremely simple to implement, and the computational complexity of each iteration is about the same as that of the APG method. However, the number of iterations to convergence is typically quite large. 
    Reference - A Singular Value Thresholding Algorithm for Matrix Completion, J.-F. Cai, E. J. Candès, and Z. Shen (2008). 
  5. Alternating Direction Method [MATLAB zip]
    Reference - Sparse and Low-Rank Matrix Decomposition via Alternating Direction Methods, X. Yuan, and J. Yang (2009).
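
All five solvers above revolve around two proximal operations: entrywise soft thresholding for the L1 term and singular value thresholding for the nuclear norm. The following minimal MATLAB sketch of the inexact ALM iteration illustrates both. It is an illustrative re-implementation based on the Lin, Chen, Wu, and Ma reference, not the released inexact_alm_rpca package; the function name, default λ, and parameter choices (mu, rho, tolerance, iteration cap) are our assumptions.

    % Minimal sketch of inexact ALM for RPCA (illustrative; not the
    % released inexact_alm_rpca code - defaults below are assumptions).
    function [A, E] = ialm_rpca_sketch(D, lambda)
    [m, n] = size(D);
    if nargin < 2, lambda = 1 / sqrt(max(m, n)); end   % common default
    % Dual variable initialization, following Lin et al.
    Y = D / max(norm(D, 2), norm(D(:), inf) / lambda);
    A = zeros(m, n); E = zeros(m, n);
    mu = 1.25 / norm(D, 2); rho = 1.5; tol = 1e-7;
    for iter = 1:500
        % E-update: entrywise soft thresholding (prox of lambda*||E||_1)
        T = D - A + Y / mu;
        E = sign(T) .* max(abs(T) - lambda / mu, 0);
        % A-update: singular value thresholding (prox of ||A||_*)
        [U, S, V] = svd(D - E + Y / mu, 'econ');
        s = max(diag(S) - 1 / mu, 0);
        A = U * diag(s) * V';
        % Dual ascent step and penalty increase
        Z = D - A - E;
        Y = Y + mu * Z;
        mu = rho * mu;
        if norm(Z, 'fro') < tol * norm(D, 'fro'), break; end
    end
    end

The released packages replace the full SVD here with a partial SVD, which is where most of their speed comes from.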

Matrix Completion

We provide below links to publicly available code and references to solve the matrix completion problem faster than conventional algorithms.
  1. Augmented Lagrange Multiplier (ALM) Method [inexact ALM - MATLAB zip]
    Usage - The most basic form of the inexact ALM function is A = inexact_alm_mc(D), where D is the incomplete matrix defined in the MATLAB sparse matrix format and the output A is a structure with two components - A.U and A.V (the left and right singular vectors scaled respectively by the square root of the corresponding non-zero singular values). Please refer to the file test_alm_mc.m for details on defining D appropriately. The algorithm is identical to the inexact ALM method described above to solve the RPCA problem, and enjoys the same convergence properties. 
    Reference - The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, Z. Lin, M. Chen, L. Wu, and Y. Ma (UIUC Technical Report UILU-ENG-09-2215, November 2009). 
  2. Singular Value Thresholding
    Reference - A Singular Value Thresholding Algorithm for Matrix Completion, J.-F. Cai, E. J. Candès, and Z. Shen (2008). A minimal sketch of this approach appears after this list. 
  3. OptSpace 
    Reference - Matrix Completion from a Few Entries, R.H. Keshavan, A. Montanari, and S. Oh (2009). 
  4. Accelerated Proximal Gradient
    Reference - An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Least Squares Problems, K. -C. Toh, and S. Yun (2009). 
  5. Subspace Evolution and Transfer (SET) [MATLAB zip]
    Reference - SET: An Algorithm for Consistent Matrix Completion, W. Dai, and O. Milenkovic (2009). 
  6. GROUSE: Grassmann Rank-One Update Subspace Estimation
    Reference - Online Identification and Tracking of Subspaces from Highly Incomplete Information, L. Balzano, R. Nowak, and B. Recht (2010).
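
To make the singular value thresholding approach of item 2 concrete, here is a minimal MATLAB sketch of the Cai-Candès-Shen iteration for matrix completion. The function name, default parameters, and stopping rule are our assumptions; the publicly available packages linked above are considerably more refined.

    % Minimal sketch of SVT for matrix completion (illustrative only).
    % D: matrix holding the observed values; Omega: logical mask of
    % observed entries.
    function A = svt_mc_sketch(D, Omega, tau, delta)
    if nargin < 3, tau = 5 * mean(size(D)); end              % threshold (assumed)
    if nargin < 4, delta = 1.2 * numel(D) / nnz(Omega); end  % step size (assumed)
    Y = zeros(size(D));
    for k = 1:300
        % Shrink the singular values of the dual iterate Y
        [U, S, V] = svd(Y, 'econ');
        s = max(diag(S) - tau, 0);
        A = U * diag(s) * V';
        % Gradient step restricted to the observed entries
        R = Omega .* (D - A);
        if norm(R, 'fro') < 1e-4 * norm(Omega .* D, 'fro'), break; end
        Y = Y + delta * R;
    end
    end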

Comparison of Algorithms

We provide a simple comparison of the speed and accuracy of various RPCA algorithms. Each algorithm was tested on a rank-20 matrix of size 400 x 400 with 5% of its entries corrupted by large errors. The low-rank matrix A is generated as the product L R^T, where L and R are 400 x 20 matrices whose entries are i.i.d. samples from the standard Gaussian distribution. The error matrix E is a sparse matrix whose support is chosen uniformly at random and whose non-zero entries are independent and uniformly distributed in the range [-50, 50]. The value of λ was fixed at 0.05. The accuracy of the solution is indicated by the rank of the estimated low-rank matrix A and its relative error (in Frobenius norm) with respect to the true solution. All simulations were carried out on a MacBook Pro with a 2.8 GHz processor, two cores, and 4 GB of memory. A script reproducing this setup appears below.

Please note that the following tables represent typical performance, using default parameters, on random matrices drawn according to the distribution specified earlier. The performance could vary when dealing with matrices drawn from other distributions or with real data. 
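
The test setup can be reproduced with a few lines of MATLAB. The script below is a sketch under the stated distributional assumptions and reuses the hypothetical ialm_rpca_sketch function defined earlier; note that with m = n = 400, the common default λ = 1/sqrt(400) equals the 0.05 used here.

    % Generate the synthetic test problem described above (sketch).
    m = 400; n = 400; r = 20;
    L = randn(m, r); R = randn(n, r);
    A_true = L * R';                               % rank-20 ground truth
    E_true = zeros(m, n);
    idx = randperm(m * n, round(0.05 * m * n));    % 5% support, uniform at random
    E_true(idx) = -50 + 100 * rand(size(idx));     % entries uniform in [-50, 50]
    D = A_true + E_true;
    % Solve RPCA with lambda = 0.05 and measure accuracy
    [A_hat, E_hat] = ialm_rpca_sketch(D, 0.05);
    rel_err = norm(A_hat - A_true, 'fro') / norm(A_true, 'fro');
    fprintf('estimated rank = %d, relative error = %.2e\n', rank(A_hat), rel_err);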

Robust PCA Algorithm Comparison
Algorithm                                            Rank of estimate    Relative error in estimate of A    Time (s)
Singular Value Thresholding                          20                  3.4 x 10^-4                        877
Accelerated Proximal Gradient                        20                  2.0 x 10^-5                        43
Accelerated Proximal Gradient (with partial SVDs)    20                  1.8 x 10^-5                        8
Dual Method                                          20                  1.6 x 10^-5                        177
Exact ALM                                            20                  7.6 x 10^-8                        4
Inexact ALM                                          20                  4.3 x 10^-8                        2
Alternating Direction Methods                        20                  2.2 x 10^-5                        5

Note: If you would like to list your code related to this topic on this website, please contact the webmaster, Kerui Min.