Learning the parts of objects by non-negative matrix factorization (Letters to Nature)

Daniel D. Lee* & H. Sebastian Seung*†
*Bell Laboratories, Lucent Technologies, Murray Hill, New Jersey 07974, USA
†Department of Brain and Cognitive Sciences, Massachusetts Institute of
Technology, Cambridge, Massachusetts 02139, USA

 

The differences between PCA, VQ and NMF arise from the different constraints imposed on the matrix factors W and H.

In VQ, each column of H is constrained to be a unary vector, with one element equal to unity and the other elements equal to zero. In other words, every face (column of V) is approximated by a single basis image (column of W) in the factorization V ≈ WH. Such a unary encoding for a particular face is shown next to the VQ basis in Fig. 1. This unary representation forces VQ to learn basis images that are prototypical faces.
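A minimal sketch of this idea, using k-means as the vector quantizer: the cluster centroids play the role of the columns of W, and each column of H is the one-hot (unary) cluster assignment of the corresponding face. The matrix sizes (19 × 19 pixel images, 2,429 faces, r = 49 basis images) are illustrative assumptions, and the data here are random placeholders rather than real face images.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder data: each column of V is one vectorized face image.
rng = np.random.default_rng(0)
V = rng.random((19 * 19, 2429))   # pixels x faces (illustrative sizes)
r = 49                            # number of basis images

# VQ via k-means: W's columns are centroids, H's columns are unary.
km = KMeans(n_clusters=r, n_init=10, random_state=0).fit(V.T)
W = km.cluster_centers_.T               # basis images (prototypical faces)
H = np.eye(r)[km.labels_].T             # unary encoding: a single 1 per column
approx = W @ H                          # each face approximated by one prototype
```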

PCA constrains the columns of W to be orthonormal and the rows of H to be orthogonal to each other. This relaxes the unary constraint of VQ, allowing a distributed representation in which each face is approximated by a linear combination of all the basis images, or eigenfaces. A distributed encoding of a particular face is shown next to the eigenfaces in Fig. 1. Although eigenfaces have a statistical interpretation as the directions of largest variance, many of them do not have an obvious visual interpretation. This is because PCA allows the entries of W and H to be of arbitrary sign. As the eigenfaces are used in linear combinations that generally involve complex cancellations between positive and negative numbers, many individual eigenfaces lack intuitive meaning.
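A corresponding sketch for PCA, under the same assumed setup, obtains the factorization from the singular value decomposition: the leading left singular vectors give orthonormal eigenfaces, and the encodings can take either sign. Mean-centering is omitted here so the result keeps the plain V ≈ WH form used in the text.

```python
import numpy as np

# Placeholder data, as before: V is pixels x faces.
rng = np.random.default_rng(0)
V = rng.random((19 * 19, 2429))
r = 49

# Truncated SVD gives the PCA factorization with orthonormal basis images.
U, S, Vt = np.linalg.svd(V, full_matrices=False)
W = U[:, :r]            # orthonormal columns: eigenfaces (entries of either sign)
H = W.T @ V             # distributed encoding; its rows are mutually orthogonal
approx = W @ H          # each face is a signed combination of all eigenfaces
```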

NMF does not allow negative entries in the matrix factors W and H. Unlike the unary constraint of VQ, these non-negativity constraints permit the combination of multiple basis images to represent a face. But only additive combinations are allowed, because the non-zero elements of W and H are all positive. In contrast to PCA, no subtractions can occur. For these reasons, the non-negativity constraints are compatible with the intuitive notion of combining parts to form a whole, which is how NMF learns a parts-based representation.
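Finally, a minimal NMF sketch using multiplicative updates for the Frobenius objective ||V − WH||². This is one common update scheme and not necessarily the exact rule optimized in the paper (which uses a divergence-based objective), but it illustrates the same point: W and H stay non-negative throughout, so every face is built purely additively from parts.

```python
import numpy as np

# Placeholder non-negative data: V is pixels x faces.
rng = np.random.default_rng(0)
V = rng.random((19 * 19, 2429))
r = 49
eps = 1e-9                               # guards against division by zero

W = rng.random((V.shape[0], r))          # non-negative initial basis images
H = rng.random((r, V.shape[1]))          # non-negative initial encodings

# Multiplicative updates keep W and H non-negative at every iteration.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)     # update encodings
    W *= (V @ H.T) / (W @ H @ H.T + eps)     # update basis images

approx = W @ H        # purely additive combination of non-negative parts
```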

 

Reposted from: https://www.cnblogs.com/xwolfs/archive/2013/05/27/3101029.html
