[Paper Notes] Embedding of Embedding (EOE): Joint Embedding for Coupled Heterogeneous Networks

An approach to network embedding: vectorizing the nodes of a network.

This vectorization can also represent edges between different networks, by introducing a harmonious embedding matrix.

Future work:
1. More sophisticated ways of combining the loss functions of multiple networks (currently a simple sum)
2. Extending from two networks to multiple networks

Applications

  • visualization
  • link prediction
  • multi-class classification and multi-label classification.

Particularly, multi-class classification and multi-label classification in networks are similar to community detection.

Abstract & Introduction

The authors introduce the basic idea of network embedding and claim that features in the latent space are very important.

Previous methods focus on intra-network edges, and most of them are designed for dimensionality reduction of existing features. The authors instead propose embedding the inter-network edges of two networks of different types (e.g., a coupled author-word network, which is heterogeneous), and try to mine latent features from these inter-network edges; a toy example of such a coupled structure is sketched below.
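A minimal sketch of the data a coupled heterogeneous network provides, assuming an author network and a word network linked by author-word edges; the structure and names here are illustrative, not the paper's actual data format:

```python
# Hypothetical representation of a coupled heterogeneous network:
# two intra-network edge lists plus one inter-network edge list.
coupled_network = {
    "authors": {                      # network G_a (e.g., co-authorship edges)
        "nodes": ["a1", "a2", "a3"],
        "edges": [("a1", "a2"), ("a2", "a3")],
    },
    "words": {                        # network G_b (e.g., word co-occurrence edges)
        "nodes": ["w1", "w2"],
        "edges": [("w1", "w2")],
    },
    "inter_edges": [                  # author-word edges coupling the two networks
        ("a1", "w1"), ("a3", "w2"),
    ],
}
```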

They also introduce the concept of a harmonious embedding matrix to further embed the embeddings that only encode intra-network edges.

They propose an alternating optimization algorithm for the EOE learning objective, in which the objective is optimized with respect to one type of variable at a time until convergence; a sketch follows.
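A minimal sketch of this alternating scheme, not the paper's exact procedure: each block of variables (e.g., the embeddings of network A, the embeddings of network B, and the harmonious matrix M) is updated in turn while the others are held fixed. The helper `loss_and_grads` is a hypothetical function assumed to return the current loss and one gradient per block.

```python
import numpy as np

def alternating_optimization(loss_and_grads, params, lr=0.01, max_iter=100, tol=1e-4):
    """Alternating optimization sketch: `params` is a dict of variable blocks;
    `loss_and_grads(params)` returns (loss, grads) with one gradient per block."""
    prev_loss = np.inf
    for _ in range(max_iter):
        for name in params:                        # update one block at a time
            _, grads = loss_and_grads(params)
            params[name] = params[name] - lr * grads[name]
        loss, _ = loss_and_grads(params)
        if abs(prev_loss - loss) < tol:            # stop when improvement is tiny
            break
        prev_loss = loss
    return params
```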

A couple of graph or network embedding methods have been proposed previously, but they were originally designed for dimensionality reduction of existing features. Specifically, their objective is to learn low-dimensional latent representations of existing features so that the learning complexity brought by high feature dimensionality is significantly reduced.

LINE preserves both interaction and non-interaction information, which is similar to the proposed EOE, but EOE differs from LINE in the formulation of the cost function, and EOE is designed for embedding coupled heterogeneous networks.

Preliminaries

To reconcile the heterogeneities of the two latent spaces, they introduce a harmonious embedding matrix to further embed the embeddings from one latent space to another latent space.
(figure: the joint embedding formulation)

(figure: the harmonious embedding matrix)
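Since the equation images did not survive, here is a minimal sketch of the kind of formulation the surrounding text describes, assuming a sigmoid link; the notation (u for node embeddings, M for the harmonious embedding matrix) is introduced here for illustration and may differ from the paper's:

$$p(v_i, v_j) = \frac{1}{1 + \exp(-\mathbf{u}_i^{\top}\mathbf{u}_j)} \quad \text{(intra-network edge)}$$

$$p(v_i^{a}, v_j^{b}) = \frac{1}{1 + \exp(-\mathbf{u}_i^{a\top}\,\mathbf{M}\,\mathbf{u}_j^{b})} \quad \text{(inter-network edge, via the harmonious embedding matrix } \mathbf{M}\text{)}$$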

Model

The definition of the loss function:

To cast these two requirements as an optimization problem, small probabilities for pairs of vertices with edges and large probabilities for pairs of vertices without edges should be penalized; a sketch of such a loss follows.
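A minimal sketch of a loss with this behavior, assuming a logistic formulation; it is not the paper's exact objective, and `edges` / `non_edges` are hypothetical inputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def intra_network_loss(U, edges, non_edges):
    """Logistic-style loss sketch: penalize small edge probabilities for
    connected pairs and large edge probabilities for unconnected pairs.
    U         : (n, d) array, one d-dimensional embedding per vertex
    edges     : iterable of (i, j) index pairs that are connected
    non_edges : iterable of (i, j) index pairs that are not connected"""
    loss = 0.0
    for i, j in edges:
        p = sigmoid(U[i] @ U[j])
        loss -= np.log(p + 1e-12)          # small p on a real edge is penalized
    for i, j in non_edges:
        p = sigmoid(U[i] @ U[j])
        loss -= np.log(1.0 - p + 1e-12)    # large p on a non-edge is penalized
    return loss
```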

*And now I am wondering: if the application of the algorithm is to predict links between nodes, how does it work if we penalize large probabilities for pairs of vertices without edges?
1. If the algorithm has both a training and a prediction phase, the method is understandable.
2. If we just use the algorithm directly to vectorize nodes, this would be a problem.*

It turns out the first case is what actually happens, so there is no problem: the model is trained on the observed edges and then used to score unobserved pairs.
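A hypothetical sketch of that prediction step for cross-network link prediction, assuming the learned embeddings and harmonious embedding matrix are scored with a sigmoid as above; function and variable names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rank_candidate_links(U_a, U_b, M, candidates, top_k=10):
    """After U_a, U_b, and M have been learned on observed edges, score
    unobserved cross-network (i, j) pairs and return the highest-scoring
    ones as predicted links."""
    scored = [((i, j), float(sigmoid(U_a[i] @ M @ U_b[j]))) for i, j in candidates]
    scored.sort(key=lambda pair_score: pair_score[1], reverse=True)
    return scored[:top_k]
```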
