GPT-GNN: Generative Pre-Training of Graph Neural Networks

Idea:
First, how can we capture the intrinsic semantic and structural properties of an unlabeled graph?
Question: How to design an unsupervised learning task over the graph for pre-training the GNN model?
Approach: model the likelihood p(G;θ) over the graph with the GNN, representing how the nodes in G are attributed and connected.
Approach: maximize p(G;θ).
Question: How to model the conditional probability pθ?
Approach: a dependency-aware factorization mechanism.
Approach: attribute generation + edge generation.
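The objective and its factorization can be sketched as follows (a hedged reconstruction: the node permutation π and the observed/masked edge split E_{i,o} / E_{i,¬o} follow the paper's autoregressive setup, but the notation here is our own labeling):

```latex
% Pre-training objective: maximize the graph likelihood
\theta^{*} = \arg\max_{\theta} \; p(G; \theta)

% Autoregressive factorization over a node permutation \pi
\log p_{\theta}(X, E) =
  \sum_{i} \mathbb{E}_{\pi}\!\left[
    \log p_{\theta}\big(X_i, E_i \mid X_{<i}, E_{<i}\big)\right]

% Dependency-aware split into attribute and edge generation,
% conditioning on an observed-edge subset E_{i,o}:
p_{\theta}\big(X_i, E_i \mid X_{<i}, E_{<i}\big) =
  \mathbb{E}_{o}\Big[
    \underbrace{p_{\theta}\big(X_i \mid E_{i,o}, X_{<i}, E_{<i}\big)}_{\text{attribute generation}}
    \cdot
    \underbrace{p_{\theta}\big(E_{i,\neg o} \mid E_{i,o}, X_{\le i}, E_{<i}\big)}_{\text{edge generation}}
  \Big]
```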
Question: How to efficiently optimize the attribute-generation and edge-generation tasks?
Approach: split each node into two copies, an attribute-generation node and an edge-generation node; obtain separate output embeddings for the two copies, then compute an attribute-generation loss and an edge-generation loss.
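The two losses can be sketched in plain NumPy (a minimal sketch, not the paper's implementation: the MSE attribute decoder and the InfoNCE-style edge scorer are illustrative stand-ins):

```python
import numpy as np

def attribute_loss(h_attr: np.ndarray, x_true: np.ndarray) -> float:
    """Attribute-generation loss: distance between the attribute decoded
    from the attribute-generation node's embedding and the ground truth.
    A simple mean-squared error stands in for the paper's decoder."""
    return float(np.mean((h_attr - x_true) ** 2))

def edge_loss(h_edge: np.ndarray, h_pos: np.ndarray, h_negs: np.ndarray) -> float:
    """Edge-generation loss: contrastive (InfoNCE-style) objective that
    scores the true neighbor h_pos above the negative samples h_negs."""
    scores = np.concatenate([[h_edge @ h_pos], h_negs @ h_edge])
    scores -= scores.max()  # numerical stability before the softmax
    log_prob = scores[0] - np.log(np.exp(scores).sum())
    return float(-log_prob)

rng = np.random.default_rng(0)
d = 8
# Each node is split into two copies: the attribute-generation copy never
# sees its own input attribute, so its embedding cannot leak the target.
h_attr, h_edge = rng.normal(size=d), rng.normal(size=d)
x_true, h_pos = rng.normal(size=d), rng.normal(size=d)
h_negs = rng.normal(size=(5, d))  # negatives, e.g. drawn from the adaptive queue
loss = attribute_loss(h_attr, x_true) + edge_loss(h_edge, h_pos, h_negs)
print(loss)
```

In practice both losses would be computed on GNN output embeddings and summed per batch; here random vectors stand in for those embeddings.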

Experiment:
Different transfer settings: time transfer, field transfer, and combined time + field transfer.
Conclusion:
The proposed generative pre-training strategy enables the GNN model to capture generic structural and semantic knowledge of the input graph, which can then be fine-tuned on the unseen part of the graph data.

Experiment:
Ablation studies on the pre-training tasks: attribute generation vs. edge generation.
Conclusion:
The GPT-GNN framework benefits differently from attribute and edge generation on different datasets; however, combining the two pre-training tasks produces the best performance in both cases.
The proposed graph generation tasks provide informative self-supervision for GNN pre-training.

Experiment:
Ablation studies on the node separation and the adaptive queue.
Conclusion:
This demonstrates the significance of the node separation design in avoiding attribute information leakage.
This indicates that adding more negative samples via the adaptive queue is indeed helpful to the pre-training framework.
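The adaptive queue can be sketched as a bounded FIFO buffer of node embeddings from earlier batches, reused as extra negatives for the contrastive edge-generation loss (a minimal sketch; the capacity and sampling policy are illustrative choices, not the paper's exact hyperparameters):

```python
from collections import deque
import numpy as np

class AdaptiveQueue:
    """FIFO buffer of (detached) node embeddings from previous batches,
    sampled as additional negatives for the edge-generation loss."""

    def __init__(self, capacity: int = 256):
        self.buf = deque(maxlen=capacity)  # oldest embeddings evicted first

    def push(self, embeddings: np.ndarray) -> None:
        """Enqueue the current batch's node embeddings."""
        for e in embeddings:
            self.buf.append(np.asarray(e, dtype=float))

    def sample(self, k: int, rng: np.random.Generator) -> np.ndarray:
        """Draw k negatives; with fewer than k stored, return all of them."""
        pool = list(self.buf)
        if len(pool) <= k:
            return np.stack(pool) if pool else np.empty((0, 0))
        idx = rng.choice(len(pool), size=k, replace=False)
        return np.stack([pool[i] for i in idx])

rng = np.random.default_rng(1)
q = AdaptiveQueue(capacity=4)
q.push(rng.normal(size=(3, 8)))  # batch 1
q.push(rng.normal(size=(3, 8)))  # batch 2: queue keeps only the newest 4
negs = q.sample(2, rng)
print(negs.shape)  # → (2, 8)
```

Because the queue spans batches, it supplies many more negatives than a single batch contains, which is the effect the ablation above measures.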
