Today's paper reading, 2022-11-10

Multimodal pre-training papers
ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
vision-and-language tasks:
visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval, plus one additional experiment setting

 

key technical innovation:
introducing separate streams for vision and language processing that communicate through co-attentional transformer layers.
why two-stream?

 

notes:
Given an image I represented as a set of region features v1, ..., vT and a text input w0, ..., wT, the model outputs final representations hv0, ..., hvT and hw0, ..., hwT. Notice that exchange between the two streams is restricted to specific layers, and that the text stream has significantly more processing before interacting with visual features, matching the intuition that the chosen visual features are already fairly high-level and require limited context aggregation compared to words in a sentence.
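The co-attentional exchange described above can be sketched as standard attention where queries come from one stream and keys/values from the other. A minimal numpy sketch, with projection matrices and multi-head splitting omitted for brevity (the feature values below are random toy data, not real region or token features):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(queries, keys_values, d_k):
    """One co-attentional step: queries from one stream attend over
    the other stream's features (identity projections for brevity)."""
    scores = queries @ keys_values.T / np.sqrt(d_k)
    return softmax(scores) @ keys_values

rng = np.random.default_rng(0)
v = rng.normal(size=(3, 8))   # 3 image-region features, hidden size 8
w = rng.normal(size=(5, 8))   # 5 text-token features, hidden size 8

v_attended = co_attention(v, w, d_k=8)  # vision queries, language keys/values
w_attended = co_attention(w, v, d_k=8)  # language queries, vision keys/values
```

Each stream keeps its own sequence length; only the attended content crosses over, which is what lets the two streams run at different depths before they interact.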
 

 

The first work is over.

 

 

 
VisualBERT: A Simple and Performant Baseline for Vision and Language
two visually-grounded language model objectives for pre-training:
(1) part of the text is masked and the model learns to predict the masked words based on the remaining text and visual context;
(2) the model is trained to determine whether the provided text matches the image. The authors show that such pre-training on image-caption data is important for VisualBERT to learn transferable text and visual representations.
They conduct comprehensive experiments on four vision-and-language tasks: VQA, VCR, NLVR, and region-to-phrase grounding.

 

 

The second work is over.

Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training

 

approach

Pre-training tasks: masked language modeling (MLM), masked object classification (MOC), and visual-linguistic matching (VLM)
Fine-tuning on downstream tasks: image-text retrieval and visual commonsense reasoning
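The VLM task above is typically built by corrupting some image-caption pairs. A minimal sketch of that example construction, assuming a uniform swap probability and a label of 1 for matched pairs (the file names, captions, and probability below are illustrative, not from the paper):

```python
import random

def make_vlm_examples(pairs, neg_prob=0.5, seed=0):
    """Visual-Linguistic Matching (VLM): with probability neg_prob,
    swap in a caption from a different image and label the pair 0
    (mismatch); otherwise keep the true caption with label 1."""
    rng = random.Random(seed)
    examples = []
    n = len(pairs)
    for i, (image, caption) in enumerate(pairs):
        if n > 1 and rng.random() < neg_prob:
            j = rng.choice([k for k in range(n) if k != i])
            examples.append((image, pairs[j][1], 0))
        else:
            examples.append((image, caption, 1))
    return examples

data = [("img0.jpg", "a red car"), ("img1.jpg", "two cats"),
        ("img2.jpg", "a beach at sunset")]
examples = make_vlm_examples(data)
```

The model then classifies each (image, caption) pair as matched or mismatched, which forces it to align the two modalities rather than encode them independently.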

 

The third work is over.

LXMERT: Learning Cross-Modality Encoder Representations from Transformers
It consists of three Transformer encoders: an object-relationship encoder, a language encoder, and a cross-modality encoder.

 

pre-train the model with five diverse representative tasks:
(1) masked cross-modality language modeling,
(2) masked object prediction via RoI-feature regression,
(3) masked object prediction via detected-label classification,
(4) cross-modality matching,
(5) image question answering.
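Task (2) above differs from word masking in that the target is a continuous RoI feature vector, so the loss is a regression rather than a classification. A minimal sketch assuming an L2 loss averaged over the masked regions only (the feature dimension, region count, and mask pattern are toy values):

```python
import numpy as np

def masked_roi_regression_loss(pred, target, mask):
    """Task (2): for masked regions (mask == 1), regress the original
    RoI features with a squared-error loss, averaged over the number
    of masked regions."""
    sq_err = ((pred - target) ** 2).sum(axis=-1)   # per-region squared error
    return float((sq_err * mask).sum() / max(mask.sum(), 1))

rng = np.random.default_rng(0)
target = rng.normal(size=(4, 6))   # 4 regions, 6-dim RoI features
pred = target.copy()
pred[1] += 0.5                     # imperfect prediction for region 1
mask = np.array([0, 1, 0, 1])      # regions 1 and 3 were masked

loss = masked_roi_regression_loss(pred, target, mask)  # 1.5 / 2 = 0.75
```

Unmasked regions contribute nothing to the loss, mirroring how masked language modeling only scores the masked token positions.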

 

The fourth work is over.
