[Paper Translation]: Nonlinear Dimensionality Reduction by Locally Linear Embedding

Title: Nonlinear Dimensionality Reduction by Locally Linear Embedding
Source: Science 290, 2323–2326 (2000)
Translator: BDML@CQUT Lab

Nonlinear Dimensionality Reduction by Locally Linear Embedding

Sam T. Roweis and Lawrence K. Saul

Abstract

Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.

Main Text

How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory inputs—including, for example, the pixel intensities of images, the power spectra of sounds, and the joint angles of articulated bodies. While complex stimuli of this form can be represented by points in a high-dimensional vector space, they typically have a much more compact description. Coherent structure in the world leads to strong correlations between inputs (such as between neighboring pixels in images), generating observations that lie on or close to a smooth low-dimensional manifold. To compare and classify such observations—in effect, to reason about the world—depends crucially on modeling the nonlinear geometry of these low-dimensional manifolds.

Scientists interested in exploratory analysis or visualization of multivariate data (1) face a similar problem in dimensionality reduction. The problem, as illustrated in Fig. 1, involves mapping high-dimensional inputs into a low-dimensional “description” space with as many coordinates as observed modes of variability. Previous approaches to this problem, based on multidimensional scaling (MDS) (2), have computed embeddings that attempt to preserve pairwise distances [or generalized disparities (3)] between data points; these distances are measured along straight lines or, in more sophisticated usages of MDS such as Isomap (4), along shortest paths confined to the manifold of observed inputs. Here, we take a different approach, called locally linear embedding (LLE), that eliminates the need to estimate pairwise distances between widely separated data points. Unlike previous methods, LLE recovers global nonlinear structure from locally linear fits.


Fig. 1. The problem of nonlinear dimensionality reduction, as illustrated (10) for three-dimensional data (B) sampled from a two-dimensional manifold (A). An unsupervised learning algorithm must discover the global internal coordinates of the manifold without signals that explicitly indicate how the data should be embedded in two dimensions. The color coding illustrates the neighborhood-preserving mapping discovered by LLE; black outlines in (B) and (C) show the neighborhood of a single point. Unlike LLE, projections of the data by principal component analysis (PCA) (28) or classical MDS (2) map faraway data points to nearby points in the plane, failing to identify the underlying structure of the manifold. Note that mixture models for local dimensionality reduction (29), which cluster the data and perform PCA within each cluster, do not address the problem considered here: namely, how to map high-dimensional data into a single global coordinate system of lower dimensionality.

The LLE algorithm, summarized in Fig. 2, is based on simple geometric intuitions. Suppose the data consist of N real-valued vectors Xi, each of dimensionality D, sampled from some underlying manifold. Provided there is sufficient data (such that the manifold is well-sampled), we expect each data point and its neighbors to lie on or close to a locally linear patch of the manifold. We characterize the local geometry of these patches by linear coefficients that reconstruct each data point from its neighbors. Reconstruction errors are measured by the cost function

ε(W) = ∑i | Xi − ∑j Wij Xj |²    (1)

which adds up the squared distances between all the data points and their reconstructions. The weights Wij summarize the contribution of the jth data point to the ith reconstruction. To compute the weights Wij, we minimize the cost function subject to two constraints: first, that each data point Xi is reconstructed only from its neighbors (5), enforcing Wij = 0 if Xj does not belong to the set of neighbors of Xi; second, that the rows of the weight matrix sum to one: ∑j Wij = 1. The optimal weights Wij subject to these constraints (6) are found by solving a least-squares problem (7).
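
The constrained least-squares step can be written out concretely. The sketch below is a minimal NumPy illustration of this weight computation, not the authors' code: the function name reconstruction_weights, the brute-force neighbor search, and the trace-based regularization term (useful when K exceeds the input dimensionality D) are our own choices for exposition.

```python
import numpy as np

def reconstruction_weights(X, n_neighbors=12, reg=1e-3):
    """Solve Eq. 1 for the weights W: each point is reconstructed from its
    K nearest neighbors, with each row of W constrained to sum to one."""
    N = X.shape[0]
    W = np.zeros((N, N))
    # brute-force pairwise squared Euclidean distances (a KD-tree also works)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    for i in range(N):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]   # skip the point itself
        Z = X[nbrs] - X[i]                  # neighbors centered on Xi
        C = Z @ Z.T                         # local Gram (covariance) matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize when K > D
        w = np.linalg.solve(C, np.ones(n_neighbors))  # solve C w = 1
        W[i, nbrs] = w / w.sum()            # enforce the sum-to-one constraint
    return W
```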

Fig. 2. Steps of locally linear embedding: (1) Assign neighbors to each data point Xi (for example by using the K nearest neighbors). (2) Compute the weights Wij that best linearly reconstruct Xi from its neighbors, solving the constrained least-squares problem in Eq. 1. (3) Compute the low-dimensional embedding vectors Yi best reconstructed by Wij, minimizing Eq. 2 by finding the smallest eigenmodes of the sparse symmetric matrix in Eq. 3. Although the weights Wij and vectors Yi are computed by methods in linear algebra, the constraint that points are only reconstructed from neighbors can result in highly nonlinear embeddings.

The constrained weights that minimize these reconstruction errors obey an important symmetry: for any particular data point, they are invariant to rotations, rescalings, and translations of that data point and its neighbors. By symmetry, it follows that the reconstruction weights characterize intrinsic geometric properties of each neighborhood, as opposed to properties that depend on a particular frame of reference (8). Note that the invariance to translations is specifically enforced by the sum-to-one constraint on the rows of the weight matrix.
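
This invariance can be checked numerically with the reconstruction_weights sketch above. The snippet below is our illustration, not from the paper: it applies a random rotation, a uniform rescaling, and a translation to the data and verifies that the recovered weights are unchanged.

```python
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# random rotation (orthogonal Q from a QR factorization), rescaling, translation
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
X_transformed = 2.5 * (X @ Q.T) + rng.normal(size=3)

W1 = reconstruction_weights(X, n_neighbors=10)
W2 = reconstruction_weights(X_transformed, n_neighbors=10)
print(np.allclose(W1, W2, atol=1e-6))   # True: the weights are invariant
```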

Suppose the data lie on or near a smooth nonlinear manifold of lower dimensionality d << D. To a good approximation, then, there exists a linear mapping—consisting of a translation, rotation, and rescaling—that maps the high-dimensional coordinates of each neighborhood to global internal coordinates on the manifold. By design, the reconstruction weights Wij reflect intrinsic geometric properties of the data that are invariant to exactly such transformations. We therefore expect their characterization of local geometry in the original data space to be equally valid for local patches on the manifold. In particular, the same weights Wij that reconstruct the ith data point in D dimensions should also reconstruct its embedded manifold coordinates in d dimensions.

LLE constructs a neighborhood-preserving mapping based on the above idea. In the final step of the algorithm, each high-dimensional observation Xi is mapped to a low-dimensional vector Yi representing global internal coordinates on the manifold. This is done by choosing d-dimensional coordinates Yi to minimize the embedding cost function

Φ(Y) = ∑i | Yi − ∑j Wij Yj |²    (2)

This cost function, like the previous one, is based on locally linear reconstruction errors, but here we fix the weights Wij while optimizing the coordinates Yi. The embedding cost in Eq. 2 defines a quadratic form in the vectors Yi. Subject to constraints that make the problem well-posed, it can be minimized by solving a sparse N × N eigenvalue problem (9), whose bottom d nonzero eigenvectors provide an ordered set of orthogonal coordinates centered on the origin.
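
A minimal sketch of this eigenvector step follows, again in plain NumPy rather than the sparse solvers the paper's scaling argument assumes. The embedding cost in Eq. 2 equals tr(Yᵀ M Y) with the symmetric matrix M = (I − W)ᵀ(I − W), so the optimal coordinates are the bottom eigenvectors of M, discarding the constant (all-ones) eigenvector whose eigenvalue is approximately zero. The function name and the dense numpy.linalg.eigh call are our choices; for large N one would substitute a sparse symmetric eigensolver.

```python
def lle_embedding(W, d=2):
    """Minimize the embedding cost (Eq. 2) with the weights W held fixed:
    return the bottom d nonzero eigenvectors of M = (I - W)^T (I - W)."""
    N = W.shape[0]
    I_W = np.eye(N) - W
    M = I_W.T @ I_W                      # sparse and symmetric in practice
    vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    # vecs[:, 0] is the constant eigenvector (eigenvalue ~ 0); drop it and
    # keep the next d eigenvectors as the embedding coordinates Yi
    return vecs[:, 1:d + 1]
```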

Implementation of the algorithm is straightforward. In our experiments, data points were reconstructed from their K nearest neighbors, as measured by Euclidean distance or normalized dot products. For such implementations of LLE, the algorithm has only one free parameter: the number of neighbors, K. Once neighbors are chosen, the optimal weights Wij and coordinates Yi are computed by standard methods in linear algebra. The algorithm involves a single pass through the three steps in Fig. 2 and finds global minima of the reconstruction and embedding costs in Eqs. 1 and 2.
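
Putting the two sketches above together gives a single-pass pipeline with K as its only free parameter. The synthetic "swiss roll" data and the commented-out scikit-learn call below are our illustrations, not part of the paper.

```python
# toy data: a 2-D manifold ("swiss roll") embedded in 3-D
rng = np.random.default_rng(1)
t = 1.5 * np.pi * (1 + 2 * rng.uniform(size=1000))
height = 20 * rng.uniform(size=1000)
X = np.column_stack([t * np.cos(t), height, t * np.sin(t)])

W = reconstruction_weights(X, n_neighbors=12)   # step 2 (Eq. 1)
Y = lle_embedding(W, d=2)                       # step 3 (Eq. 2), N x 2 coordinates

# An off-the-shelf alternative with the same K parameter (scikit-learn):
# from sklearn.manifold import LocallyLinearEmbedding
# Y = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
```
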
Fig. 3. Images of faces (11) mapped into the embedding space described by the first two coordinates of LLE. Representative faces are shown next to circled points in different parts of the space. The bottom images correspond to points along the top-right path (linked by solid line), illustrating one particular mode of variability in pose and expression.

Fig. 4. Arranging words in a continuous semantic space. Each word was initially represented by a high-dimensional vector that counted the number of times it appeared in different encyclopedia articles. LLE was applied to these word-document count vectors (12), resulting in an embedding location for each word. Shown are words from two different bounded regions (A) and (B) of the embedding space discovered by LLE. Each panel shows a two-dimensional projection onto the third and fourth coordinates of LLE; in these two dimensions, the regions (A) and (B) are highly overlapped. The inset in (A) shows a three-dimensional projection onto the third, fourth, and fifth coordinates, revealing an extra dimension along which regions (A) and (B) are more separated. Words that lie in the intersection of both regions are capitalized. Note how LLE colocates words with similar contexts in this continuous semantic space.

In addition to the example in Fig. 1, for which the true manifold structure was known (10), we also applied LLE to images of faces (11) and vectors of word-document counts (12). Two-dimensional embeddings of faces and words are shown in Figs. 3 and 4. Note how the coordinates of these embedding spaces are related to meaningful attributes, such as the pose and expression of human faces and the semantic associations of words.

Many popular learning algorithms for nonlinear dimensionality reduction do not share the favorable properties of LLE. Iterative hill-climbing methods for autoencoder neural networks (13, 14), self-organizing maps (15), and latent variable models (16) do not have the same guarantees of global optimality or convergence; they also tend to involve many more free parameters, such as learning rates, convergence criteria, and architectural specifications. Finally, whereas other nonlinear methods rely on deterministic annealing schemes (17) to avoid local minima, the optimizations of LLE are especially tractable.

LLE scales well with the intrinsic manifold dimensionality, d, and does not require a discretized gridding of the embedding space. As more dimensions are added to the embedding space, the existing ones do not change, so that LLE does not have to be rerun to compute higher dimensional embeddings. Unlike methods such as principal curves and surfaces (18) or additive component models (19), LLE is not limited in practice to manifolds of extremely low dimensionality or codimensionality. Also, the intrinsic value of d can itself be estimated by analyzing a reciprocal cost function, in which reconstruction weights derived from the embedding vectors Yi are applied to the data points Xi.
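
The paper only names this reciprocal cost; the snippet below is one plausible reading of it, built from the hypothetical helpers sketched earlier rather than from the authors' procedure. It derives weights from the embedding Y, evaluates how well those weights reconstruct the original points X, and sweeps candidate values of d.

```python
def reciprocal_cost(X, Y, n_neighbors=12):
    """Derive weights from the embedding Y, then evaluate the Eq. 1 residual
    on the original data X (one reading of the 'reciprocal cost function')."""
    W_y = reconstruction_weights(Y, n_neighbors=n_neighbors)
    return ((X - W_y @ X) ** 2).sum()

# Sweep candidate dimensionalities and look for where the cost stops improving
W = reconstruction_weights(X, n_neighbors=12)
for d in range(1, 6):
    print(d, reciprocal_cost(X, lle_embedding(W, d=d)))
```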

LLE illustrates a general principle of manifold learning, elucidated by Martinetz and Schulten (20) and Tenenbaum (4), that overlapping local neighborhoods—collectively analyzed—can provide information about global geometry. Many virtues of LLE are shared by Tenenbaum's algorithm, Isomap, which has been successfully applied to similar problems in nonlinear dimensionality reduction. Isomap's embeddings, however, are optimized to preserve geodesic distances between general pairs of data points, which can only be estimated by computing shortest paths through large sublattices of data. LLE takes a different approach, analyzing local symmetries, linear coefficients, and reconstruction errors instead of global constraints, pairwise distances, and stress functions. It thus avoids the need to solve large dynamic programming problems, and it also tends to accumulate very sparse matrices, whose structure can be exploited for savings in time and space.

LLE is likely to be even more useful in combination with other methods in data analysis and statistical learning. For example, a parametric mapping between the observation and embedding spaces could be learned by supervised neural networks (21) whose target values are generated by LLE. LLE can also be generalized to harder settings, such as the case of disjoint data manifolds (22), and specialized to simpler ones, such as the case of time-ordered observations (23).

Perhaps the greatest potential lies in applying LLE to diverse problems beyond those considered here. Given the broad appeal of traditional methods, such as PCA and MDS, the algorithm should find widespread use in many areas of science.
