How node2vec Works
by Zohar Komarovsky
How node2vec works, and what it can do that word2vec can't
How to think about your data differently
In the last couple of years, deep learning (DL) has become the main enabler for applications in many domains, such as vision, NLP, audio, and clickstream data. Recently, researchers have started to successfully apply deep learning methods to graph datasets in domains like social networks, recommender systems, and biology, where the data is inherently structured as a graph.
So how do Graph Neural Networks work? Why do we need them?
The Premise of Deep Learning
In machine learning tasks that involve graph data, we usually want to describe each node in the graph in a way that lets us feed it into some machine learning algorithm. Without DL, one would have to manually extract features, such as the number of neighbors a node has. But this is a laborious job.
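To make the manual approach concrete, here is a minimal sketch (a hypothetical toy example, not from the article) of hand-crafted feature extraction: the graph is stored as an adjacency list, and the feature is simply each node's degree, i.e. its number of neighbors.

```python
# Toy graph as an adjacency list (hypothetical example data).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

def degree_features(adj):
    """Return a {node: degree} map, one hand-crafted feature per node."""
    return {node: len(neighbors) for node, neighbors in adj.items()}

features = degree_features(graph)
print(features)  # {'A': 2, 'B': 3, 'C': 2, 'D': 1}
```

Every new feature (clustering coefficient, centrality, and so on) would need its own hand-written function like this, which is exactly the labor that learned embeddings avoid.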
This is where DL shines. It automatically exploits the structure of the graph in order to extract features for each node. These features are called embeddings.
The interesting thing is that even if you have absolutely no information about the nodes, you can still use DL to extract embeddings. The structure of the graph itself, that is, the connectivity patterns, holds valuable information.
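One way to see how connectivity alone can carry information is to sample random walks: each walk is a "sentence" of nodes, which a word2vec-style model can then embed. The sketch below uses plain uniform walks for illustration; node2vec's actual walks are biased by its return and in-out parameters, which the uniform version omits.

```python
import random

# Toy graph as an adjacency list (hypothetical example data).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

def random_walk(adj, start, length, rng=random):
    """Generate one uniform random walk of `length` steps from `start`."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

# A corpus of walks: several "sentences" starting from every node.
walks = [random_walk(graph, node, 5) for node in graph for _ in range(10)]
```

Nodes that share many walk contexts (like A and C above, both tightly linked to B) end up with similar embeddings, even though no node attributes were used.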
So how can we use the structure to extract information? Can the context of each node within the graph really help us?