A Paper a Day #3 -- Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer

ABSTRACT:

Knowledge graphs (KGs) are essential background knowledge providers in many tasks. When designing models for KG-related tasks, one of the key challenges is to devise the Knowledge Representation and Fusion (KRF) module, which learns representations of elements from KGs and fuses them with task representations. However, because the KGs involved and the perspectives to be considered during fusion differ across tasks, duplicated and ad hoc KRF module designs are produced for each task. In this paper, we propose a novel knowledge graph pretraining model, KGTransformer, that can serve as a uniform KRF module across diverse KG-related tasks.

INTRODUCTION:

Knowledge Graphs (KGs), which represent facts as triples in the form (head entity, relation, tail entity), abbreviated as (h, r, t), are a common way of storing knowledge about the world, e.g., (Earth, location, inner Solar System).
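
For concreteness, here is a minimal sketch of how such triples might be stored in Python. This is only an illustration of the (h, r, t) data format described above, not code from the paper; the second fact is a hypothetical extra example.

```python
from typing import NamedTuple

class Triple(NamedTuple):
    head: str      # head entity h
    relation: str  # relation r
    tail: str      # tail entity t

# A tiny KG as a set of (h, r, t) facts, using the example from the text.
kg = {
    Triple("Earth", "location", "inner Solar System"),
    Triple("Earth", "instance_of", "planet"),  # hypothetical extra fact
}
```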

Knowledge Graph Representation Methods: KG representation methods encode the information in KGs through the parameters and functions of a model. They can recover the graph structure and capture the semantics of entities and relations.

Embedding-based methods learn embeddings of relations and entities and model the truth value of a triple through a score function that takes these embeddings as input. After training, the embeddings implicitly capture the similarities, hierarchies, relationships, and axioms between elements of the KG, and thus can serve as general-purpose representations of those elements in many tasks, transferring the semantics learned from KGs to downstream tasks.
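
As an illustration of such a score function (this is TransE, a classic embedding-based method, not the paper's KGTransformer), the sketch below scores a triple (h, r, t) by how well h + r ≈ t holds in embedding space. All class names, dimensions, and the usage indices are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TransEScorer(nn.Module):
    """Minimal TransE-style scorer: for plausible triples, h + r is close to t."""

    def __init__(self, num_entities: int, num_relations: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Smaller L1 distance => more plausible triple; negate so higher = better.
        return -torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)

# Usage: score a batch of (h, r, t) index triples (indices are illustrative).
scorer = TransEScorer(num_entities=1000, num_relations=50)
score = scorer(torch.tensor([0]), torch.tensor([3]), torch.tensor([42]))
```

Training such a model typically pushes scores of observed triples above those of corrupted (negative) triples, which is what lets the embeddings absorb the KG's structure.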

CONCLUSION AND DISCUSSION:

In this paper, we propose a novel KG pretraining model, KGTransformer, and demonstrate that it is possible to pretrain a model with a uniform knowledge representation and fusion module across multiple tasks supported by different KGs. We pretrain KGTransformer on a mixed KG with diverse graph structures and tune it uniformly on three typical KG-related tasks. KGTransformer outperforms specifically designed models on these tasks. More importantly, simply applying the pretrained KGTransformer to real applications achieves good results, demonstrating the deep graph-structure transfer capability of the pretrained model. Although KGTransformer is effective, it is a heavier KG model than conventional KG embedding models (KGEs) and requires more memory and computation. In the future, we hope to explore how to apply the pretrained KGTransformer in resource-constrained settings such as mobile applications and edge computing.
