[GNN Talk] Zengfeng Huang (Fudan University): Understanding Graph Neural Networks from an Optimization Perspective

Contents

1. Introduction

2. Understanding GNNs from an Optimization Perspective

 Background

3. Summary

4. References

Talk reference

Literature references


1. Introduction

Time

July 2, 2022 (Saturday)

15:00 (Beijing time)

Topic

Optimization Perspectives on Graph Neural Networks

Host

Min Zhou (Huawei Noah's Ark Lab)

Speaker: Zengfeng Huang (Fudan University)

Title: Optimization Perspectives on Graph Neural Networks

Abstract:

In recent years, the graph neural network (GNN) has emerged as a major tool for graph machine learning and has found numerous applications. In this talk, I will introduce an optimization-based framework for understanding and deriving GNN models, which treats graph propagation as unfolded descent iterations applied to some graph-regularized energy function. Then I will talk about a graph attention model obtained from this framework through a robust objective. I will also briefly discuss the connections and differences between this unfolding scheme and implicit GNNs, which treat node representations as the fixed points of a deep equilibrium model.
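To make the unfolding idea concrete, the following is a standard graph-regularized energy of the kind used in reference [2] below; the notation and the specific quadratic form are my assumptions for illustration, not taken from the talk slides:

```latex
% Node embeddings Y, input features X mapped through a learnable f(X; W),
% normalized graph Laplacian L = I - \tilde{A}, regularization weight \lambda.
E(Y) = \lVert Y - f(X;W) \rVert_F^2 + \lambda\,\mathrm{tr}\!\left(Y^\top L Y\right)

% One gradient step with step size \alpha/2 unfolds into one GNN layer:
Y^{(k+1)} = (1-\alpha)\,Y^{(k)} - \alpha\lambda\,L\,Y^{(k)} + \alpha\,f(X;W)
          = \bigl(1 - \alpha(1+\lambda)\bigr)\,Y^{(k)}
            + \alpha\lambda\,\tilde{A}\,Y^{(k)} + \alpha\,f(X;W)

% Each step mixes the current embedding, a neighbor aggregation
% \tilde{A} Y^{(k)}, and a skip connection to the input features.
% Iterating to convergence yields the energy's minimizer
Y^\star = (I + \lambda L)^{-1} f(X;W)
% which is exactly the fixed point that an implicit GNN computes directly,
% hence the connection between unfolded and implicit models noted above.
```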

Speaker bio:

Zengfeng Huang is currently an Associate Professor in the School of Data Science, Fudan University. Before that, he was a Research Fellow in CSE at UNSW and a postdoc at MADALGO, Aarhus University. He obtained his PhD in CSE from the Hong Kong University of Science and Technology and his B.S. in Computer Science from Zhejiang University. His research interests are the foundations of data science, machine learning algorithms, graph analytics, and theoretical computer science. His single-authored paper, “Near Optimal Frequent Directions for Sketching Dense and Sparse Matrices”, won the ICML 2018 Best Paper Runner-Up Award and a 2020 World Artificial Intelligence Conference Youth Outstanding Paper Nomination Award.

2. Understanding GNNs from an Optimization Perspective

Background

Each stacked GNN layer corresponds to one descent step, so the energy decreases layer by layer.

Compared with Zhou et al.'s 2004 formulation, the improvement here is that the energy, and hence the propagation, is parameterized and learnable.
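For context, the update rule of Zhou et al.'s "Learning with Local and Global Consistency" (2004) is, as I recall that paper, the fixed, parameter-free special case:

```latex
% Label propagation of Zhou et al. (2004): Y are the initial labels,
% S = D^{-1/2} A D^{-1/2} the normalized adjacency, \alpha \in (0, 1).
F^{(t+1)} = \alpha\,S\,F^{(t)} + (1-\alpha)\,Y
% which converges to the closed form
F^\star = (1-\alpha)\,(I - \alpha S)^{-1}\,Y.
% The unfolded-GNN view replaces the fixed target Y with a learnable
% f(X; W), so the same iteration becomes trainable end to end.
```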


Why can running gradient descent be viewed as minimizing the energy, and in what sense are the two equivalent?

Each gradient-descent step on the energy corresponds exactly to one GNN layer, so unrolling the descent iterations reproduces the GNN's layer-wise propagation.
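As a sanity check of both notes above, here is a minimal NumPy sketch, my own illustration under the energy assumed earlier rather than the speaker's code, that unrolls gradient descent and prints the energy after each "layer":

```python
# Unfolded gradient descent on E(Y) = ||Y - F||_F^2 + lam * tr(Y^T L Y),
# where each descent step plays the role of one GNN propagation layer.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes on a cycle, symmetric adjacency A.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

d_inv_sqrt = np.diag(A.sum(axis=1) ** -0.5)
A_norm = d_inv_sqrt @ A @ d_inv_sqrt      # symmetric normalization
L = np.eye(n) - A_norm                    # normalized graph Laplacian

F = rng.normal(size=(n, 3))               # stand-in for f(X; W)
lam, alpha = 1.0, 0.2                     # regularization weight, step size

def energy(Y):
    """E(Y) = ||Y - F||_F^2 + lam * tr(Y^T L Y)."""
    return np.sum((Y - F) ** 2) + lam * np.trace(Y.T @ L @ Y)

Y = np.zeros_like(F)
for layer in range(10):
    # One gradient step (step size alpha/2) == one propagation "layer":
    Y = (1 - alpha) * Y - alpha * lam * (L @ Y) + alpha * F
    print(f"layer {layer + 1:2d}: energy = {energy(Y):.4f}")

# For a small enough step size, the printed energy decreases monotonically,
# matching the note above that each stacked layer lowers the energy.
```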

3. Summary

The talk frames GNN propagation as unrolled minimization of a graph-regularized energy. (Candidly, I found the details hard to follow from the talk alone.)

4. References

Talk reference

Recorded video: LOGS 2022/07/02 session || Zengfeng Huang (Fudan University): Understanding Graph Neural Networks from an Optimization Perspective (bilibili)

Literature references

1. Scaling Up Graph Neural Networks via Graph Coarsening. KDD 2021.

2. Graph Neural Networks Inspired by Classical Iterative Algorithms. ICML 2021.

3. Implicit vs Unfolded Graph Neural Networks. arXiv:2111.06592.

4. Transformers from an Optimization Perspective. arXiv:2205.13891.
