【PhD. in USM】2022/6 Study Notes

Study Targets:

  • understand the current state of machine learning development
  • study GNNs
  • study BERT-class multimodal pre-trained models
  • review at least 20 papers

Studied Contents:

NLP:
  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
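
To make the pretraining objective concrete: BERT learns largely by predicting tokens hidden behind a [MASK] placeholder, using context from both directions. A quick way to probe that behavior is the Hugging Face `transformers` library (an external convenience, not part of the paper's release); the model name and prompt below are illustrative.

```python
# Probe BERT's masked-language-model objective via the `transformers`
# fill-mask pipeline. Model choice and prompt are illustrative.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pretrained to predict the token behind [MASK] from both
# left and right context (the "deep bidirectional" part of the title).
for candidate in fill_mask("The goal of pretraining is to [MASK] general language representations."):
    print(f"{candidate['token_str']!r}  score={candidate['score']:.3f}")
```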
GNN:
  1. Literature: Vision GNN: An Image is Worth Graph of Nodes
  2. Literature: Dynamic Graph CNN for Learning on Point Clouds
  3. Literature: DeepGCNs: Can GCNs Go as Deep as CNNs?
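
The three GNN papers above share one core move: update each node's features by aggregating over its graph neighbors. Below is a minimal sketch of the EdgeConv operator from Dynamic Graph CNN (item 2) in plain PyTorch; the neighborhood size k and channel widths are illustrative choices, not the paper's settings.

```python
# Minimal EdgeConv sketch: build a k-NN graph in feature space, then
# aggregate edge features [x_i, x_j - x_i] with an MLP and max pooling.
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 8):
        super().__init__()
        self.k = k
        # MLP over concatenated [center, neighbor - center] edge features
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C) point features
        dist = torch.cdist(x, x)                                    # (N, N) pairwise distances
        idx = dist.topk(self.k + 1, largest=False).indices[:, 1:]   # k nearest, skip self
        neighbors = x[idx]                                          # (N, k, C)
        center = x.unsqueeze(1).expand_as(neighbors)                # (N, k, C)
        edge_feat = torch.cat([center, neighbors - center], dim=-1) # (N, k, 2C)
        return self.mlp(edge_feat).max(dim=1).values                # (N, out_ch)

# Example: 1024 points with 3-D coordinates -> 64-D features
points = torch.randn(1024, 3)
feats = EdgeConv(3, 64)(points)
print(feats.shape)  # torch.Size([1024, 64])
```

Because the k-NN graph is recomputed from the input features inside `forward`, stacking several such layers reproduces the paper's dynamic-graph behavior: each layer builds its graph in the current feature space rather than the original coordinates.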
BERT-class multimodal models:
  1. Literature: LXMERT: Learning Cross-Modality Encoder Representations from Transformers
  2. Literature: ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
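
LXMERT and ViLBERT both rest on cross-modality attention: tokens from one modality query features from the other. The sketch below shows that single mechanism using PyTorch's built-in nn.MultiheadAttention; the dimensions, head count, and single-layer setup are illustrative, whereas the papers stack such layers and train them with multimodal pretraining tasks.

```python
# Minimal cross-modality attention sketch: text queries attend over
# image-region keys/values, so each word token gathers visual context.
import torch
import torch.nn as nn

d_model = 256
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

text = torch.randn(1, 12, d_model)     # 12 word-piece embeddings (hypothetical)
regions = torch.randn(1, 36, d_model)  # 36 image-region features (hypothetical)

text_with_vision, attn_weights = cross_attn(query=text, key=regions, value=regions)
print(text_with_vision.shape)  # torch.Size([1, 12, 256])
```

ViLBERT runs this in both directions (text-to-image and image-to-text) in its co-attentional blocks; LXMERT similarly alternates cross-attention with per-modality self-attention.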
Vision Transformer:
  1. Vision Xformers: Efficient Attention for Image Classification
  2. A Survey of Visual Transformers
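
Common to the vision transformers these two papers cover is the patch-embedding front end: an image is cut into fixed-size patches, each projected to a token for the attention stack. A minimal sketch, with illustrative sizes:

```python
# ViT-style patch embedding: a strided convolution splits the image into
# 16x16 patches and projects each patch to a 192-d token.
import torch
import torch.nn as nn

patch = nn.Conv2d(3, 192, kernel_size=16, stride=16)
img = torch.randn(1, 3, 224, 224)
tokens = patch(img).flatten(2).transpose(1, 2)  # (1, 196, 192): 14x14 patch tokens
print(tokens.shape)
```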
Thesis:
  1. Visual and Textual Common Semantic Spaces for the Analysis of Multimodal Content
Video:
  1. CMU’s Multimodal Machine Learning course (11-777), Fall 2020 (YouTube)
