AI and Computer Vision Training Lecturer 叶梓: A Self-Supervised Learning Model for Computer Vision, MAE (Part 11)

Continued from the previous post.

Slides P24 and P25

The MAE Encoder

• Our encoder is a ViT but applied only on visible, unmasked patches.
• Just as in a standard ViT, our encoder embeds patches by a linear projection with added positional embeddings, and then processes the resulting set via a series of Transformer blocks.
• However, our encoder only operates on a small subset (e.g., 25%) of the full set. Masked patches are removed; no mask tokens are used (see the sketch after this list).
• This allows us to train very large encoders with only a fraction of compute and memory.
• The full set is handled by a lightweight decoder, described next.
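The masking-and-encoding step can be illustrated with a short PyTorch sketch. This is not the authors' released code; names such as random_masking, keep_ratio, and MAEEncoder are illustrative assumptions. The idea is to randomly shuffle the patch tokens, keep roughly 25% as the visible set, and run only those through standard Transformer blocks:

```python
import torch
import torch.nn as nn

def random_masking(x, keep_ratio=0.25):
    """x: (B, N, D) patch embeddings. Keep a random ~25% of patches and
    return the visible tokens plus indices to restore the original order."""
    B, N, D = x.shape
    n_keep = int(N * keep_ratio)
    noise = torch.rand(B, N, device=x.device)           # one random score per patch
    ids_shuffle = noise.argsort(dim=1)                   # random permutation of patches
    ids_restore = ids_shuffle.argsort(dim=1)             # inverse permutation
    ids_keep = ids_shuffle[:, :n_keep]                   # first n_keep patches stay visible
    x_visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return x_visible, ids_restore

class MAEEncoder(nn.Module):
    """Standard Transformer encoder applied only to the visible patches."""
    def __init__(self, dim=768, depth=12, heads=12, num_patches=196):
        super().__init__()
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, patch_embeddings, keep_ratio=0.25):
        x = patch_embeddings + self.pos_embed             # positions added before masking
        x, ids_restore = random_masking(x, keep_ratio)    # drop masked patches entirely
        return self.blocks(x), ids_restore                # encode visible tokens only
```

Because the Transformer blocks only ever see about a quarter of the patches, the encoder's compute and memory scale with the visible subset rather than the full image, which is what makes very large encoders affordable.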

The MAE Decoder

• The input to the MAE decoder is the full set of tokens consisting of (i) encoded visible patches, and (ii) mask tokens.
• Each mask token is a shared, learned vector that indicates the presence of a missing patch to be predicted.
• We add positional embeddings to all tokens in this full set; without this, mask tokens would have no information about their location in the image.
• The decoder has another series of Transformer blocks (a minimal sketch follows this list).
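A corresponding sketch for the decoder, again illustrative rather than the official implementation (MAEDecoder, enc_dim, and patch_dim are assumed names), shows how encoded visible tokens and shared mask tokens are combined into the full set, unshuffled back to the original patch order, given positional embeddings, and passed through a lighter stack of Transformer blocks:

```python
import torch
import torch.nn as nn

class MAEDecoder(nn.Module):
    """Lightweight Transformer decoder reconstructing all patches."""
    def __init__(self, enc_dim=768, dim=512, depth=8, heads=16,
                 num_patches=196, patch_dim=16 * 16 * 3):
        super().__init__()
        self.embed = nn.Linear(enc_dim, dim)                   # project to decoder width
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim)) # one shared, learned vector
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.pred = nn.Linear(dim, patch_dim)                  # predicted pixels per patch

    def forward(self, x_visible, ids_restore):
        x_visible = self.embed(x_visible)                      # (B, n_visible, dim)
        B, n_visible, D = x_visible.shape
        n_masked = ids_restore.shape[1] - n_visible
        mask_tokens = self.mask_token.expand(B, n_masked, -1)  # one copy per missing patch
        x = torch.cat([x_visible, mask_tokens], dim=1)         # full set of tokens
        # Unshuffle so every token sits at its original patch position, then add
        # positional embeddings so mask tokens know where they are in the image.
        x = torch.gather(x, 1, ids_restore.unsqueeze(-1).expand(-1, -1, D))
        x = x + self.pos_embed
        return self.pred(self.blocks(x))                       # reconstruction for all patches
```

The ids_restore tensor returned by the encoder sketch above is exactly what this decoder consumes: it carries the shuffle order needed to put visible and masked tokens back into their original patch positions before the positional embeddings are added.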


To be continued in the next post...
