Paper Reading: [TPAMI-2022] Robust Bi-Stochastic Graph Regularized Matrix Factorization for Data Clustering



Keywords

Robustness; Sparse matrices; Matrix decomposition; Loss measurement; Task analysis; Manifolds; Tools; Matrix factorization; bi-stochastic graph; data clustering; robustness

Machine learning; Operations research and optimization

Loss functions; Clustering; Matrix factorization

Abstract

Data clustering, which partitions the given data into different groups, has attracted much attention.

Recently, various effective algorithms have been developed to tackle this task.

Among these methods, non-negative matrix factorization (NMF) has been demonstrated to be a powerful tool.
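
As a quick illustration of NMF-based clustering (a generic sketch using scikit-learn's standard NMF, not the paper's RBSMF): factorize X ≈ WH with non-negative factors and assign each sample to its dominant component.

```python
# Minimal NMF-clustering sketch (standard NMF, not RBSMF).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Toy non-negative data: two well-separated blobs in 5-D (illustrative only).
X = np.vstack([
    rng.random((20, 5)) + [3, 3, 0, 0, 0],
    rng.random((20, 5)) + [0, 0, 0, 3, 3],
])

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # (40, 2) sample coefficients
labels = W.argmax(axis=1)    # cluster = dominant component
```

On well-separated data the argmax of W recovers the two groups; RBSMF replaces both the loss and the fixed-graph assumption underlying this basic pipeline.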

However, there are still some problems.

First, the standard NMF is sensitive to noise and outliers.

Although the $\ell_{2,1}$-norm-based NMF improves robustness, it is still easily affected by large noise.
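
The robustness difference is easy to see numerically: the $\ell_{2,1}$ norm sums the $\ell_2$ norms of the residual rows, so an outlier row contributes linearly rather than quadratically, unlike the squared Frobenius ($L_2$) loss. A minimal NumPy illustration:

```python
# Compare ℓ2,1 norm vs squared Frobenius norm on a residual matrix
# containing one outlier row.
import numpy as np

E = np.zeros((5, 3))
E[0] = [0.1, 0.1, 0.1]      # small residual row
E[1] = [10.0, 10.0, 10.0]   # outlier row

l21 = np.linalg.norm(E, axis=1).sum()   # sum of row ℓ2 norms  (~17.5)
fro2 = (E ** 2).sum()                    # squared Frobenius    (~300.0)
```

The outlier dominates the squared loss by orders of magnitude, while its contribution to the $\ell_{2,1}$ loss grows only linearly; the paper's general loss is designed to suppress such large residuals even further.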

Second, for most graph regularized NMF, the performance highly depends on the initial similarity graph.

Third, many graph-based NMF models perform the graph construction and matrix factorization in two separate steps.

Thus the learned graph structure may not be optimal.

To overcome the above drawbacks, we propose a robust bi-stochastic graph regularized matrix factorization (RBSMF) framework for data clustering.
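
A bi-stochastic graph is a non-negative similarity matrix whose rows and columns each sum to one. The paper learns such a graph jointly with the factorization; as a standalone illustration of the constraint itself, the classical Sinkhorn-Knopp iteration normalizes a positive matrix toward bi-stochastic form (a sketch, not the paper's construction):

```python
# Sinkhorn-Knopp: alternately normalize rows and columns of a
# positive matrix until it is (approximately) bi-stochastic.
import numpy as np

def sinkhorn(S, iters=200):
    """Drive a positive matrix toward row sums = column sums = 1."""
    S = S.astype(float).copy()
    for _ in range(iters):
        S /= S.sum(axis=1, keepdims=True)   # rows sum to 1
        S /= S.sum(axis=0, keepdims=True)   # columns sum to 1
    return S

rng = np.random.default_rng(0)
B = sinkhorn(rng.random((4, 4)) + 0.1)
# B.sum(axis=0) and B.sum(axis=1) are both ≈ [1, 1, 1, 1]
```

Bi-stochasticity makes the graph directly interpretable as transition probabilities in both directions, which is one reason it is a natural constraint for clustering.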

Specifically, we present a general loss function that is more robust than the commonly used $L_2$ and $L_1$ functions.

Besides, instead of keeping the graph fixed, we learn an adaptive similarity graph.

Furthermore, the graph updating and matrix factorization are processed simultaneously, which can make the learned graph more appropriate for clustering.
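
Joint processing can be pictured as an alternating scheme: update the factors, then re-estimate the graph from the current factor, and repeat. The skeleton below is a generic sketch (standard Lee-Seung multiplicative NMF updates plus a simple Gaussian-kernel graph), not the actual RBSMF update rules:

```python
# Generic alternating factorization + graph re-estimation skeleton.
# NOT the paper's derivation; illustrates the joint-learning idea only.
import numpy as np

def joint_nmf_graph(X, k, outer=5, inner=50, eps=1e-9):
    rng = np.random.default_rng(0)
    n, d = X.shape
    W = rng.random((n, k))
    H = rng.random((k, d))
    for _ in range(outer):
        # 1) Standard multiplicative NMF updates (Lee & Seung).
        for _ in range(inner):
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        # 2) Re-estimate a similarity graph from the learned factor W
        #    (here a simple Gaussian kernel; RBSMF learns a
        #    bi-stochastic graph inside the same objective).
        D2 = ((W[:, None, :] - W[None, :, :]) ** 2).sum(-1)
        S = np.exp(-D2)
    return W, H, S

X = np.random.default_rng(1).random((30, 8))
W, H, S = joint_nmf_graph(X, k=3)
```

Because the graph is recomputed from the evolving factor, it adapts to the clustering structure instead of being fixed up front; RBSMF goes further by optimizing both in a single objective.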

Extensive experiments have shown the proposed RBSMF outperforms other state-of-the-art methods…

Authors

Qi Wang, Xiang He, Xu Jiang, Xuelong Li
