[Paper Translation] Nonlinear Dimensionality Reduction by Locally Linear Embedding

Paper title: Nonlinear Dimensionality Reduction by Locally Linear Embedding

Source: Science 290, 2323–2326 (2000)

Translated by: BDML@CQUT Lab

Nonlinear Dimensionality Reduction by Locally Linear Embedding

Sam T. Roweis and Lawrence K. Saul

Abstract

Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.

Main Text

How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory inputs—including, for example, the pixel intensities of images, the power spectra of sounds, and the joint angles of articulated bodies. While complex stimuli of this form can be represented by points in a high-dimensional vector space, they typically have a much more compact description. Coherent structure in the world leads to strong correlations between inputs (such as between neighboring pixels in images), generating observations that lie on or close to a smooth low-dimensional manifold. To compare and classify such observations—in effect, to reason about the world—depends crucially on modeling the nonlinear geometry of these low-dimensional manifolds.

Scientists interested in exploratory analysis or visualization of multivariate data (1) face a similar problem in dimensionality reduction. The problem, as illustrated in Fig. 1, involves mapping high-dimensional inputs into a low-dimensional “description” space with as many coordinates as observed modes of variability. Previous approaches to this problem, based on multidimensional scaling (MDS) (2), have computed embeddings that attempt to preserve pairwise distances [or generalized disparities (3)] between data points; these distances are measured along straight lines or, in more sophisticated usages of MDS such as Isomap (4), along shortest paths confined to the manifold of observed inputs. Here, we take a different approach, called locally linear embedding (LLE), that eliminates the need to estimate pairwise distances between widely separated data points. Unlike previous methods, LLE recovers global nonlinear structure from locally linear fits.

The LLE algorithm, summarized in Fig. 2, is based on simple geometric intuitions. Suppose the data consist of $N$ real-valued vectors $\vec{X}_i$, each of dimensionality $D$, sampled from some underlying manifold. Provided there is sufficient data (such that the manifold is well-sampled), we expect each data point and its neighbors to lie on or close to a locally linear patch of the manifold. We characterize the local geometry of these patches by linear coefficients that reconstruct each data point from its neighbors. Reconstruction errors are measured by the cost function

$$\varepsilon(W) = \sum_i \left| \vec{X}_i - \sum_j W_{ij}\,\vec{X}_j \right|^2 \qquad (1)$$

which adds up the squared distances between all the data points and their reconstructions. The weights $W_{ij}$ summarize the contribution of the $j$th data point to the $i$th reconstruction. To compute the weights $W_{ij}$, we minimize the cost function subject to two constraints: first, that each data point is reconstructed only from its neighbors (5), enforcing $W_{ij} = 0$ if $\vec{X}_j$ does not belong to the set of neighbors of $\vec{X}_i$; second, that the rows of the weight matrix sum to one: $\sum_j W_{ij} = 1$. The optimal weights $W_{ij}$ subject to these constraints (6) are found by solving a least-squares problem (7).
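To make the weight-fitting step concrete, here is a minimal sketch in Python/NumPy (our own helper, not code from the paper): it solves the constrained least-squares problem for a single data point by forming the Gram matrix of its centered neighbors, solving $Cw = \mathbf{1}$, and rescaling so the weights sum to one. The small regularization term is a common practical addition for the case $K > D$ and is an assumption on our part.

```python
import numpy as np

def reconstruction_weights(X, i, neighbors, reg=1e-3):
    """Weights that best reconstruct X[i] from its neighbors, i.e. the
    minimizer of |X_i - sum_j W_ij X_j|^2 subject to sum_j W_ij = 1.

    With Z the neighbors centered on X[i] and C = Z Z^T the local Gram
    matrix, the Lagrange conditions give C w = 1 up to scale, so we
    solve that system and normalize.
    """
    Z = X[neighbors] - X[i]                  # center neighbors on X_i
    C = Z @ Z.T                              # local Gram matrix (K x K)
    C += reg * np.trace(C) * np.eye(len(neighbors))  # stabilizes K > D case
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()                       # enforce sum-to-one constraint
```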

The constrained weights that minimize these reconstruction errors obey an important symmetry: for any particular data point, they are invariant to rotations, rescalings, and translations of that data point and its neighbors. By symmetry, it follows that the reconstruction weights characterize intrinsic geometric properties of each neighborhood, as opposed to properties that depend on a particular frame of reference (8). Note that the invariance to translations is specifically enforced by the sum-to-one constraint on the rows of the weight matrix.
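This invariance is easy to check numerically. The snippet below (a toy check built on the hypothetical reconstruction_weights helper above) rotates, rescales, and translates a small data set and confirms that the optimal weights for a fixed point and neighbor set do not change.

```python
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))                    # 10 points in D = 3
i, nbrs = 0, np.array([1, 2, 3, 4])
w_before = reconstruction_weights(X, i, nbrs)

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal matrix
X_new = 2.5 * (X @ Q) + np.array([1.0, -2.0, 0.5])  # rotate, rescale, translate
w_after = reconstruction_weights(X_new, i, nbrs)

assert np.allclose(w_before, w_after)           # weights are unchanged
```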

Suppose the data lie on or near a smooth nonlinear manifold of lower dimensionality d << D. To a good approximation then, there exists a linear mapping—consisting of a translation, rotation, and rescaling—that maps the high-dimensional coordinates of each neighborhood to global internal coordinates on the manifold. By design, the reconstruction weights reflect intrinsic geometric properties of the data that are invariant to exactly such transformations. We therefore expect their characterization of local geometry in the original data space to be equally valid for local patches on the manifold. In particular, the same weights that reconstruct the ith data point in D dimensions should also reconstruct its embedded manifold coordinates in d dimensions.

LLE constructs a neighborhood-preserving mapping based on the above idea. In the final step of the algorithm, each high-dimensional observation $\vec{X}_i$ is mapped to a low-dimensional vector $\vec{Y}_i$ representing global internal coordinates on the manifold. This is done by choosing $d$-dimensional coordinates $\vec{Y}_i$ to minimize the embedding cost function

$$\Phi(Y) = \sum_i \left| \vec{Y}_i - \sum_j W_{ij}\,\vec{Y}_j \right|^2 \qquad (2)$$

This cost function, like the previous one, is based on locally linear reconstruction errors, but here we fix the weights $W_{ij}$ while optimizing the coordinates $\vec{Y}_i$. The embedding cost in Eq. 2 defines a quadratic form in the vectors $\vec{Y}_i$. Subject to constraints that make the problem well-posed, it can be minimized by solving a sparse $N \times N$ eigenvalue problem (9), whose bottom $d$ nonzero eigenvectors provide an ordered set of orthogonal coordinates centered on the origin.
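As a sketch of this final step, assuming the weight matrix W from the previous step is available: the cost satisfies $\Phi(Y) = \mathrm{tr}(Y^\top M Y)$ with $M = (I - W)^\top (I - W)$, so the embedding is read off from the bottom eigenvectors of the sparse symmetric matrix M. The helper name is ours, and the shift-invert eigensolver call mirrors common practice (e.g., scikit-learn's LLE implementation) rather than anything prescribed by the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import eigsh

def embed(W, d):
    """Coordinates Y minimizing Phi(Y) = sum_i |Y_i - sum_j W_ij Y_j|^2.

    Phi(Y) = trace(Y^T M Y) with M = (I - W)^T (I - W); the bottom
    d nonzero eigenvectors of M give the embedding.
    """
    N = W.shape[0]
    IW = identity(N, format="csr") - csr_matrix(W)
    M = (IW.T @ IW).tocsr()
    # Shift-invert around 0 targets the smallest eigenvalues; the very
    # smallest (about 0, with a constant eigenvector) is discarded,
    # leaving d orthogonal coordinates centered on the origin.
    vals, vecs = eigsh(M, k=d + 1, sigma=0.0)
    return vecs[:, 1:]
```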

Implementation of the algorithm is straightforward. In our experiments, data points were reconstructed from their K nearest neighbors, as measured by Euclidean distance or normalized dot products. For such implementations of LLE, the algorithm has only one free parameter: the number of neighbors, K. Once neighbors are chosen, the optimal weights $W_{ij}$ and coordinates $\vec{Y}_i$ are computed by standard methods in linear algebra. The algorithm involves a single pass through the three steps in Fig. 2 and finds global minima of the reconstruction and embedding costs in Eqs. 1 and 2. In addition to the example in Fig. 1, for which the true manifold structure was known (10), we also applied LLE to images of faces (11) and vectors of word-document counts (12). Two-dimensional embeddings of faces and words are shown in Figs. 3 and 4. Note how the coordinates of these embedding spaces are related to meaningful attributes, such as the pose and expression of human faces and the semantic associations of words.
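Putting the pieces together, a minimal end-to-end pass might look like the following (hypothetical glue code over the reconstruction_weights and embed sketches above, with neighbors found by Euclidean distance via a k-d tree; a dense W is used only for brevity, since the true matrix is sparse).

```python
from scipy.spatial import cKDTree

def lle(X, K=12, d=2):
    """One pass through the three steps of Fig. 2: (1) select K nearest
    neighbors, (2) solve for reconstruction weights, (3) embed."""
    N = X.shape[0]
    _, knn = cKDTree(X).query(X, k=K + 1)   # column 0 is the point itself
    W = np.zeros((N, N))
    for i in range(N):
        nbrs = knn[i, 1:]
        W[i, nbrs] = reconstruction_weights(X, i, nbrs)
    return embed(W, d)

# Usage: Y = lle(X, K=12, d=2) for an (N, D) data matrix X.
```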

Many popular learning algorithms for nonlinear dimensionality reduction do not share the favorable properties of LLE. Iterative hill-climbing methods for autoencoder neural networks (13, 14), self-organizing maps (15), and latent variable models (16) do not have the same guarantees of global optimality or convergence; they also tend to involve many more free parameters, such as learning rates, convergence criteria, and architectural specifications. Finally, whereas other nonlinear methods rely on deterministic annealing schemes (17) to avoid local minima, the optimizations of LLE are especially tractable.

LLE scales well with the intrinsic manifold dimensionality, d, and does not require a discretized gridding of the embedding space. As more dimensions are added to the embedding space, the existing ones do not change, so that LLE does not have to be rerun to compute higher dimensional embeddings. Unlike methods such as principal curves and surfaces (18) or additive component models (19), LLE is not limited in practice to manifolds of extremely low dimensionality or codimensionality. Also, the intrinsic value of d can itself be estimated by analyzing a reciprocal cost function, in which reconstruction weights derived from the embedding vectors Y are applied to the data points X.
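Read as code, the reciprocal cost might look like the sketch below: weights are fitted in the embedding space Y and then applied to the original points X, and the resulting error is compared across candidate values of d. This is only our reading of the remark above; the paper does not spell out the procedure.

```python
def reciprocal_cost(X, Y, knn):
    """Reciprocal reconstruction error: weights derived from the
    embedding vectors Y are applied to the data points X."""
    err = 0.0
    for i in range(len(X)):
        nbrs = knn[i, 1:]
        w = reconstruction_weights(Y, i, nbrs)    # fit in embedding space
        err += np.sum((X[i] - w @ X[nbrs]) ** 2)  # reconstruct the data
    return err

# knn = cKDTree(X).query(X, k=K + 1)[1]
# costs = {d: reciprocal_cost(X, lle(X, K, d), knn) for d in (1, 2, 3, 4)}
```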

LLE illustrates a general principle of manifold learning, elucidated by Martinetz and Schulten (20) and Tenenbaum (4), that overlapping local neighborhoods—collectively analyzed—can provide information about global geometry. Many virtues of LLE are shared by Tenenbaum’s algorithm, Isomap, which has been successfully applied to similar problems in nonlinear dimensionality reduction. Isomap’s embeddings, however, are optimized to preserve geodesic distances between general pairs of data points, which can only be estimated by computing shortest paths through large sublattices of data. LLE takes a different approach, analyzing local symmetries, linear coefficients, and reconstruction errors instead of global constraints, pairwise distances, and stress functions. It thus avoids the need to solve large dynamic programming problems, and it also tends to accumulate very sparse matrices, whose structure can be exploited for savings in time and space.

LLE is likely to be even more useful in combination with other methods in data analysis and statistical learning. For example, a parametric mapping between the observation and embedding spaces could be learned by supervised neural networks (21) whose target values are generated by LLE. LLE can also be generalized to harder settings, such as the case of disjoint data manifolds (22), and specialized to simpler ones, such as the case of time-ordered observations (23).

Perhaps the greatest potential lies in applying LLE to diverse problems beyond those considered here. Given the broad appeal of traditional methods, such as PCA and MDS, the algorithm should find widespread use in many areas of science.
