Graph Embedding Study Notes (1): Locally Linear Embedding (LLE)

Paper Information

Roweis, Sam T. and Lawrence K. Saul (2000). "Nonlinear Dimensionality
Reduction by Locally Linear Embedding." Science, 290: 2323–2326.
doi:10.1126/science.290.5500.2323.

From the abstract: "We introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima."

Notes

At its core, LLE is a dimensionality reduction method. Principal Component Analysis (PCA) is a linear dimensionality reduction method, whereas LLE is a nonlinear one.

In recent years the machine learning community has often presented dimensionality reduction under the name of embedding. Concretely: when some object X is said to be embedded in another object Y, the embedding is given by some injective and structure-preserving map f : X → Y.

Key point: the defining property of LLE is that it is neighborhood-preserving.

LLE preserves neighborhoods on manifold data far better than PCA does. What is manifold data? For example, the spiral-shaped curve in the figure below.

[Figure: a spiral-shaped curve, an example of manifold data]

If we reduce this data with PCA, i.e., describe the curve by its first principal component, the ordering along the spiral is lost (ideally, the reduced coordinate would start at the densest central point and progress outward along the spiral structure). The straight line in the figure below is the first principal component: it captures only the direction of maximum variance, so structure preservation is poor. The root cause is that linear dimensionality reduction cannot express a nonlinear structure such as a spiral:

[Figure: the first principal component of the spiral data, a straight line through it]
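
As a hedged illustration (the spiral construction below is my own, not from the original notes), here is a minimal R sketch that builds a 2-D spiral and projects it onto the first principal component with base R's prcomp. Plotting the projection against position along the curve shows it is not monotone, i.e., the ordering along the spiral is lost:

# Minimal sketch: PCA flattens the spiral but scrambles its ordering
theta <- seq(0, 6*pi, length.out = 500)        # position along the curve
x <- cbind(theta*cos(theta), theta*sin(theta)) # 2-D Archimedean spiral
pc1 <- prcomp(x)$x[, 1]       # scores on the first principal component
plot(theta, pc1, type = "l")  # not monotone: order along the spiral is lost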

So how can we improve on this result? Take a local patch of the spiral data and apply PCA to just that patch. The local curve has little curvature and is nearly straight, so PCA fits it well:

[Figure: PCA applied to a small, nearly straight local patch of the spiral]

The core idea of LLE is exactly this local linear fitting on patches. Here is the result after applying LLE:

[Figure: result of applying LLE to the spiral data]

Here is another example, in three-dimensional space:

[Figure: LLE applied to a three-dimensional manifold]

Now consider an image recognition example, where the horizontal and vertical axes are the first two LLE coordinates. Along the horizontal axis, the subject's expression gradually changes from unhappy to happy; along the vertical axis, the face turns from one side, through frontal, to the other side.

[Figure: face images arranged by their first two LLE coordinates]

The basic workflow of LLE is shown below:

[Figure: the three steps of LLE: (1) find the k nearest neighbors of each point; (2) solve for the weights W that best reconstruct each point from its neighbors; (3) compute the low-dimensional coordinates best reconstructed by W]

The basic formulas are as follows:

[Figure: the LLE cost functions]
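
For reference, these are the two cost functions from the paper. Step 2 finds the weights W that best reconstruct each point from its neighbors, subject to each row of W summing to one (with W_ij = 0 whenever x_j is not a neighbor of x_i); step 3 then fixes W and solves for the low-dimensional coordinates minimizing the same form of error:

$$\varepsilon(W) = \sum_i \Bigl|\,\vec{x}_i - \sum_j W_{ij}\,\vec{x}_j\,\Bigr|^2, \qquad \text{s.t.}\ \sum_j W_{ij} = 1$$

$$\Phi(Y) = \sum_i \Bigl|\,\vec{y}_i - \sum_j W_{ij}\,\vec{y}_j\,\Bigr|^2$$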

Taking step 3 as an example, here is how it is converted into an eigenvalue problem:

[Figure: rewriting the step-3 cost as a quadratic form]
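
With W fixed, expanding the square and writing Y for the n*q matrix whose rows are the $\vec{y}_i$ gives a quadratic form:

$$\Phi(Y) = \sum_{ij} M_{ij}\,(\vec{y}_i \cdot \vec{y}_j) = \operatorname{tr}\bigl(Y^{\top} M Y\bigr), \qquad M = (I - W)^{\top}(I - W),$$

subject to $\sum_i \vec{y}_i = 0$ (removes the translation degree of freedom) and $\frac{1}{n}\sum_i \vec{y}_i \vec{y}_i^{\top} = I$ (fixes the scale).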

Next, Lagrange multipliers convert this into an unconstrained problem:

[Figure: the Lagrangian of the step-3 problem]
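
One standard way to write it (a sketch; Λ is a symmetric matrix of multipliers, one per entry of the covariance constraint, while the centering constraint is handled by discarding the trivial eigenvector below):

$$\mathcal{L}(Y, \Lambda) = \operatorname{tr}\bigl(Y^{\top} M Y\bigr) - \operatorname{tr}\Bigl(\Lambda\,\bigl(\tfrac{1}{n} Y^{\top} Y - I\bigr)\Bigr)$$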

Differentiating, we find this is an eigenvalue problem for M. Since the objective is a minimum, we keep the eigenvectors with the smallest eigenvalues (discarding the bottom, trivial one whose eigenvalue is 0):

[Figure: derivative set to zero, yielding the eigenvalue problem for M]
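
Sketch of the final step: setting the gradient with respect to Y to zero,

$$\frac{\partial \mathcal{L}}{\partial Y} = 2\,M Y - \tfrac{2}{n}\,Y \Lambda = 0 \quad\Longrightarrow\quad M \vec{v} = \lambda\,\vec{v},$$

so each column of the optimal Y is an eigenvector of M. Since M is positive semi-definite and we are minimizing, we keep the q eigenvectors with the smallest eigenvalues, discarding the very bottom one: the constant vector is always an eigenvector of M with eigenvalue 0 (because the rows of W sum to one), and it corresponds to the discarded translation degree of freedom. This is exactly what coords.from.weights does in the implementation below.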

R Implementation

# Local linear embedding of data vectors
# Inputs: n*p matrix of vectors, number of dimensions q to find (< p),
# number of nearest neighbors per vector, scalar regularization setting
# Calls: find.kNNs, reconstruction.weights, coords.from.weights
# Output: n*q matrix of new coordinates
lle <- function(x,q,k=q+1,alpha=0.01) {
  stopifnot(q>0, q<ncol(x), k>q, alpha>0) # sanity checks
  kNNs = find.kNNs(x,k) # should return an n*k matrix of indices
  w = reconstruction.weights(x,kNNs,alpha) # n*n weight matrix
  coords = coords.from.weights(w,q) # n*q coordinate matrix
  return(coords)
}

# Find multiple nearest neighbors in a data frame
# Inputs: n*p matrix of data vectors, number of neighbors to find,
# optional arguments to dist function
# Calls: smallest.by.rows
# Output: n*k matrix of the indices of nearest neighbors
find.kNNs <- function(x,k,...) {
  x.distances = dist(x,...) # Uses the built-in distance function
  x.distances = as.matrix(x.distances) # need to make it a matrix
  kNNs = smallest.by.rows(x.distances,k+1) # +1 because each point's nearest neighbor is itself (distance 0)
  return(kNNs[,-1]) # drop the first column, which indexes the point itself
}

# Find the k smallest entries in each row of an array
# Inputs: n*p array, p >= k, number of smallest entries to find
# Output: n*k array of column indices for smallest entries per row
smallest.by.rows <- function(m,k) {
  stopifnot(ncol(m) >= k) # Otherwise "k smallest" is meaningless
  row.orders = t(apply(m,1,order))
  k.smallest = row.orders[,1:k]
  return(k.smallest)
}

# Least-squares weights for linear approx. of data from neighbors
# Inputs: n*p matrix of vectors, n*k matrix of neighbor indices,
# scalar regularization setting
# Calls: local.weights
# Outputs: n*n matrix of weights
reconstruction.weights <- function(x,neighbors,alpha) {
  stopifnot(is.matrix(x),is.matrix(neighbors),alpha>0)
  n=nrow(x)
  stopifnot(nrow(neighbors) == n)
  w = matrix(0,nrow=n,ncol=n)
  for (i in 1:n) {
    i.neighbors = neighbors[i,]
    w[i,i.neighbors] = local.weights(x[i,],x[i.neighbors,],alpha)
  }
  return(w)
}


# Calculate local reconstruction weights from vectors
# Inputs: focal vector (length p), k*p matrix of neighbors,
# scalar regularization setting
# Outputs: length k vector of weights, summing to 1
local.weights <- function(focal,neighbors,alpha) {
  focal = as.vector(focal) # x[i,] arrives as a plain vector; make that explicit
  # basic shape sanity checks
  stopifnot(length(focal)==ncol(neighbors), alpha>0)
  # Should really sanity-check the rest (is.numeric, etc.)
  k = nrow(neighbors)
  # Center on the focal vector
  neighbors=t(t(neighbors)-focal) # exploits recycling rule, which
  # has a weird preference for columns
  gram = neighbors %*% t(neighbors)
  # Try to solve the problem without regularization
  weights = try(solve(gram,rep(1,k)), silent=TRUE)
  # The try function tries to evaluate its argument and returns
  # the value if successful; otherwise it returns an error
  # message of class "try-error"
  if (inherits(weights,"try-error")) {
    # Un-regularized solution failed, try to regularize
    # TODO: look at the error, check if it’s something
    # regularization could fix!
    weights = solve(gram+alpha*diag(k),rep(1,k))
  }
  # Enforce the unit-sum constraint
  weights = weights/sum(weights)
  return(weights)
}

# Get approximation weights from indices of point and neighbors
# Inputs: index of focal point, n*p matrix of vectors, n*k matrix
# of nearest neighbor indices, scalar regularization setting
# Calls: local.weights
# Output: vector of n reconstruction weights
local.weights.for.index <- function(focal,x,NNs,alpha) {
  n = nrow(x)
  stopifnot(n > 0, 0 < focal, focal <= n, nrow(NNs)==n)
  w = rep(0,n)
  neighbors = NNs[focal,]
  wts = local.weights(x[focal,],x[neighbors,],alpha)
  w[neighbors] = wts
  return(w)
}

# Local linear approximation weights, without iteration
# Inputs: n*p matrix of vectors, n*k matrix of neighbor indices,
# scalar regularization setting
# Calls: local.weights.for.index
# Outputs: n*n matrix of reconstruction weights
reconstruction.weights.2 <- function(x,neighbors,alpha) {
  # Sanity-checking should go here
  n = nrow(x)
  w = sapply(1:n,local.weights.for.index,x=x,NNs=neighbors,
             alpha=alpha)
  w = t(w) # sapply returns the transpose of the matrix we want
  return(w)
}

# Find intrinsic coordinates from local linear approximation weights
# Inputs: n*n matrix of weights, number of dimensions q, numerical
# tolerance for checking the row-sum constraint on the weights
# Output: n*q matrix of new coordinates on the manifold
coords.from.weights <- function(w,q,tol=1e-7) {
  n=nrow(w)
  stopifnot(ncol(w)==n) # Needs to be square
  # Check that the weights are normalized
  # to within tol > 0 to handle round-off error
  stopifnot(all(abs(rowSums(w)-1) < tol))
  # Make the Laplacian
  M = t(diag(n)-w)%*%(diag(n)-w)
  # diag(n) is n*n identity matrix
  soln = eigen(M) # eigenvalues and eigenvectors (here,
  # eigenfunctions), in order of decreasing eigenvalue
  coords = soln$vectors[,((n-q):(n-1))] # bottom eigenfunctions
  # except for the trivial one
  return(coords)
}
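
A hedged end-to-end usage sketch (the spiral data and the choice k = 10 are mine, not from the original notes); it assumes the functions above have been loaded:

# Embed the 2-D spiral from the PCA example into 1 dimension
theta <- seq(0, 6*pi, length.out = 500)
x <- cbind(theta*cos(theta), theta*sin(theta))
y <- lle(x, q = 1, k = 10)   # length-500 vector of intrinsic coordinates
plot(theta, y, type = "l")   # roughly monotone if neighborhoods are preserved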

Summary

We can apply the LLE algorithm to Graph Embedding in the following way (a minimal sketch follows the list):

  • Finding neighborhoods: use the graph's adjacency structure directly as each node's neighborhood
  • Computing the linear weights: use the adjacency matrix W directly
  • Producing the embedding: compute the eigenvectors of the matrix M; with n nodes and a q-dimensional embedding, take eigenvectors [n-q, n-1] (in decreasing-eigenvalue order, as in coords.from.weights above) as the embedding
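
A minimal sketch under the assumptions above (the ring-graph example and the function name graph.lle are mine). Note that coords.from.weights requires each row of the weight matrix to sum to one, so the adjacency matrix is row-normalized first; this assumes the graph has no isolated nodes:

# Graph-Embedding variant: weights come from the graph itself,
# not from least-squares reconstruction
graph.lle <- function(adjacency, q) {
  w <- adjacency / rowSums(adjacency) # row-normalize: each row sums to 1
  coords.from.weights(w, q)           # reuse step 3 of the LLE code above
}

# Toy example: a 6-node ring graph embedded in 2 dimensions
A <- matrix(0, 6, 6)
for (i in 1:6) { j <- (i %% 6) + 1; A[i,j] <- 1; A[j,i] <- 1 }
emb <- graph.lle(A, q = 2) # 6*2 matrix of node coordinates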


Appendix

Graph Embedding survey, 2018

DiDi KDD 2018

LLE lecture notes

LLE pseudocode

LLE Introduction

LLE Science paper
