Implementing PatchEmbedding and Mlp for ViT (Paddle 2.2)

In PatchEmbedding, we set the patch size to 7×7 and the number of output channels (the embedding dimension) to 16, so a raw 224×224×3 image first becomes a 32×32×16 feature map (ignoring the batch dimension for now). The 32×32 spatial grid is then flattened, giving a 1024×16 sequence of patch tokens.
The Mlp is simply two fully connected layers, usually placed after the attention layer. It first expands the 16 channels by a factor of 4 to 64, then shrinks them back by the same factor, so the channel count is unchanged overall.
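The shape bookkeeping above can be checked with a few lines of plain Python (no paddle needed); the numbers follow the post's settings of a 224×224×3 input, 7×7 patches, and embed_dim=16:

```python
# Pure-Python sketch of the patch-grid arithmetic described above.
image_size, patch_size = 224, 7
in_channels, embed_dim = 3, 16

# A 7x7 conv with stride 7 tiles the image into non-overlapping patches.
grid = image_size // patch_size        # 224 // 7 = 32
num_patches = grid * grid              # 32 * 32 = 1024

print(grid, num_patches)               # 32 1024
# After flattening the spatial dims, each image becomes a
# [num_patches, embed_dim] = [1024, 16] sequence of patch tokens.
```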

# ViT Online Class
# Author: Dr. Zhu
# Project: PaddleViT (https://github.com/BR-IDL/PaddleViT)
# 2021.11
import paddle
import paddle.nn as nn
import numpy as np
from PIL import Image

paddle.set_device('cpu')

class Identity(nn.Layer):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x


class Mlp(nn.Layer):
    def __init__(self, embed_dim, mlp_ratio=4.0, dropout=0.):
        super().__init__()
        self.fc1 = nn.Linear(embed_dim, int(embed_dim * mlp_ratio))
        self.fc2 = nn.Linear(int(embed_dim * mlp_ratio), embed_dim)
        self.act = nn.GELU()
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        x = self.fc1(x)      # expand: embed_dim -> embed_dim * mlp_ratio
        x = self.act(x)
        x = self.dropout(x)
        x = self.fc2(x)      # project back down to embed_dim
        x = self.dropout(x)  # the PaddleViT reference applies dropout after fc2 as well
        return x
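
The Mlp computation is small enough to mirror in numpy. This is a sketch of the same forward pass (dropout omitted, since it is identity at inference), using the tanh approximation of GELU; the weights here are random stand-ins, not trained parameters:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, as used in the original BERT/ViT code
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp_forward(x, w1, b1, w2, b2):
    # fc1 expands embed_dim -> embed_dim * mlp_ratio, fc2 projects back
    h = gelu(x @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
embed_dim, hidden = 16, 64                        # mlp_ratio = 4.0
x = rng.standard_normal((1024, embed_dim))        # one image: 1024 patch tokens
w1 = rng.standard_normal((embed_dim, hidden)) * 0.02
w2 = rng.standard_normal((hidden, embed_dim)) * 0.02
out = mlp_forward(x, w1, np.zeros(hidden), w2, np.zeros(embed_dim))
print(out.shape)  # (1024, 16): the channel count is unchanged
```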


class PatchEmbedding(nn.Layer):
    def __init__(self, image_size, patch_size, in_channels, embed_dim, dropout=0.):
        super().__init__()
        # stride == kernel_size, so the conv cuts non-overlapping patches;
        # image_size is kept in the signature but not needed by the conv itself
        self.patch_embedding = nn.Conv2D(in_channels,
                                         embed_dim,
                                         kernel_size=patch_size,
                                         stride=patch_size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # [n, c, h, w]
        x = self.patch_embedding(x) # [n, c', h', w']
        x = x.flatten(2) # [n, c', h'*w']
        x = x.transpose([0, 2, 1]) # [n, h'*w', c']
        x = self.dropout(x)
        return x
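
Why a convolution? A p×p conv with stride p over non-overlapping patches is exactly a shared linear projection of each flattened patch. The numpy sketch below verifies this equivalence on small made-up sizes (the variable names here are illustrative, not from the post):

```python
import numpy as np

n, c, hh, ww, p, d = 2, 3, 14, 14, 7, 16   # small sizes for the demo
rng = np.random.default_rng(0)
x = rng.standard_normal((n, c, hh, ww))
w = rng.standard_normal((d, c, p, p))       # conv weight [out, in, kh, kw]

# Conv view: slide the kernel with stride p (no overlap), one dot product per patch
gh, gw = hh // p, ww // p
conv = np.zeros((n, d, gh, gw))
for i in range(gh):
    for j in range(gw):
        patch = x[:, :, i*p:(i+1)*p, j*p:(j+1)*p]          # [n, c, p, p]
        conv[:, :, i, j] = patch.reshape(n, -1) @ w.reshape(d, -1).T

# Linear view: cut patches, flatten each, apply one shared matmul
patches = x.reshape(n, c, gh, p, gw, p).transpose(0, 2, 4, 1, 3, 5)
patches = patches.reshape(n, gh * gw, c * p * p)           # [n, num_patches, c*p*p]
linear = patches @ w.reshape(d, -1).T                      # [n, num_patches, d]

print(np.allclose(conv.reshape(n, d, -1).transpose(0, 2, 1), linear))  # True
```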


class Attention(nn.Layer):
    """Placeholder: attention is left as an identity here, since this
    post focuses on PatchEmbedding and Mlp."""
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x


class EncoderLayer(nn.Layer):
    def __init__(self, embed_dim):
        super().__init__()
        self.attn_norm = nn.LayerNorm(embed_dim)
        self.attn = Attention()
        self.mlp_norm = nn.LayerNorm(embed_dim)
        self.mlp = Mlp(embed_dim)

    def forward(self, x):
        h = x 
        x = self.attn_norm(x)
        x = self.attn(x)
        x = x + h

        h = x
        x = self.mlp_norm(x)
        x = self.mlp(x)
        x = x + h
        return x
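
EncoderLayer uses the pre-norm residual pattern: each sublayer sees a normalized input, and its output is added back to the un-normalized stream. A minimal numpy sketch of that wiring (with identity stand-ins for the sublayers):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize over the last (channel) axis, as nn.LayerNorm(embed_dim) does
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def encoder_layer(x, attn, mlp):
    # pre-norm residual wiring: x = x + sublayer(norm(x)), twice
    x = x + attn(layer_norm(x))
    x = x + mlp(layer_norm(x))
    return x

x = np.arange(12.0).reshape(1, 3, 4)        # [n, tokens, channels]
out = encoder_layer(x, attn=lambda t: t, mlp=lambda t: t)  # identity sublayers
print(out.shape)  # (1, 3, 4): shape is preserved through the layer
```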


class ViT(nn.Layer):
    def __init__(self):
        super().__init__()
        self.patch_embed = PatchEmbedding(224, 7, 3, 16)
        layer_list = [EncoderLayer(16) for i in range(5)]
        self.encoders = nn.LayerList(layer_list)
        self.head = nn.Linear(16, 10)
        self.avgpool = nn.AdaptiveAvgPool1D(1)
        self.norm = nn.LayerNorm(16)

    def forward(self, x):
        x = self.patch_embed(x) # [n, h*w, c]: 4, 1024, 16
        for encoder in self.encoders:
            x = encoder(x)
        # avg
        x = self.norm(x)
        x = x.transpose([0, 2, 1])
        x = self.avgpool(x)
        x = x.flatten(1)
        x = self.head(x)
        return x
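
Tracing the tensor shape through `ViT.forward` with the post's settings makes the flow above concrete; this is pure bookkeeping, no paddle required:

```python
# End-to-end shape trace for the ViT above
n, c, h, w = 4, 3, 224, 224
patch, dim, classes = 7, 16, 10

tokens = (h // patch) * (w // patch)   # 32 * 32 = 1024
shape = [n, tokens, dim]               # after patch_embed: [4, 1024, 16]
# the 5 encoder layers leave the shape unchanged
shape = [n, dim, tokens]               # transpose for AdaptiveAvgPool1D
shape = [n, dim, 1]                    # avgpool over the token axis
shape = [n, dim]                       # flatten(1)
shape = [n, classes]                   # head: Linear(16, 10)
print(shape)  # [4, 10]
```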


def main():
    t = paddle.randn([4, 3, 224, 224])
    model = ViT()
    out = model(t)
    print(out.shape)


if __name__ == "__main__":
    main()

In ViT (Vision Transformer), the position embedding assigns positional information to each patch. In NLP, words come in an order, so positional encodings are needed to represent their relative positions. Images themselves have no such ordering, but ViT splits an image into patches, each corresponding to an NLP token, and each patch does occupy a definite position in the image. To inject this information, ViT adds a position embedding to each patch embedding: the position embedding module produces one position vector per patch, representing where that patch sits in the image. When fine-tuning on higher-resolution images, the authors suggest keeping the patch size fixed and interpolating the position embedding vectors so they match the new patch grid. [1][2][3]

References:
- [1][2] "How ViT interpolates position embeddings when fine-tuning" (CSDN): https://blog.csdn.net/qq_44166630/article/details/127429697
- [3] "Visualizing ViT position embeddings" (CSDN): https://blog.csdn.net/weixin_41978699/article/details/122404192
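The interpolation step can be sketched in numpy: reshape the (non-class-token) position embeddings back to their 2D patch grid, bilinearly interpolate each channel to the new grid size, and flatten again. Real frameworks usually call their built-in image-resize op; `resize_pos_embed` here is an illustrative helper, not a library function:

```python
import numpy as np

def resize_pos_embed(pos, old_grid, new_grid):
    """Bilinearly interpolate ViT position embeddings to a new patch grid.
    pos: [old_grid*old_grid, dim] (class-token row already removed)."""
    dim = pos.shape[1]
    grid = pos.reshape(old_grid, old_grid, dim)
    xs_old = np.linspace(0.0, 1.0, old_grid)
    xs_new = np.linspace(0.0, 1.0, new_grid)
    # separable bilinear interpolation: rows first, then columns
    tmp = np.empty((new_grid, old_grid, dim))
    for d in range(dim):
        for j in range(old_grid):
            tmp[:, j, d] = np.interp(xs_new, xs_old, grid[:, j, d])
    out = np.empty((new_grid, new_grid, dim))
    for d in range(dim):
        for i in range(new_grid):
            out[i, :, d] = np.interp(xs_new, xs_old, tmp[i, :, d])
    return out.reshape(new_grid * new_grid, dim)

rng = np.random.default_rng(0)
pos = rng.standard_normal((32 * 32, 16))   # trained at a 32x32 patch grid
pos48 = resize_pos_embed(pos, 32, 48)      # fine-tune at a 48x48 grid
print(pos48.shape)  # (2304, 16)
```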