Vision Transformer: A Complete Code Walkthrough


Original article by 大厂小僧 · August 17, 2024, 14:25 · Beijing

In this article we take a close look at the implementation details of the Vision Transformer (ViT). If you are not yet familiar with the theory behind ViT, I recommend watching Hung-yi Lee's lectures on the topic or reading one of the many introductory blog posts. Here I will focus purely on the code and will not repeat the theory.


1. Code Walkthrough

First, let's look at the ViT network structure, shown in the figure below:

[Figure: ViT network architecture]

From the architecture diagram above we can clearly see that the Vision Transformer (ViT) consists of three core components: Patch Embedding, the Transformer Encoder, and the MLP Head. We will walk through the code for these components step by step: first understanding each individual module, then putting them together to see how the full ViT architecture works.

1.1 DropPath Module

def drop_path(x, drop_prob: float = 0., training: bool = False):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0. or not training:
        return x
    # probability of keeping a path
    keep_prob = 1 - drop_prob
    # shape (B, 1, 1, ...): one mask entry per sample; x.ndim is the number of dimensions of x
    # (3 for the [B, N, C] token tensors used in ViT)
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    # floor_() (the in-place version of floor) binarizes the mask, deciding which samples survive
    random_tensor.floor_()  # binarize
    # dividing by keep_prob rescales the surviving paths so the expectation stays unchanged
    output = x.div(keep_prob) * random_tensor
    return output

class DropPath(nn.Module):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    """
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)

Before looking at how DropPath is used inside the Encoder Block, let's clarify one point about what it actually does. DropPath does not literally "delete" an entire branch; it randomly sets the residual-branch output of some samples to zero, which effectively "drops" those paths during the forward pass. Unlike Dropout, which randomly zeroes individual neuron outputs, DropPath operates on whole paths through the network.

Like Dropout, DropPath is applied only during training and is disabled during validation and testing. Randomly dropping paths injects noise into training, prevents the model from relying too heavily on any particular path or feature, and thereby improves robustness and generalization.

At validation and test time we want the model to use all available information, so DropPath is turned off and the output is based on the contribution of every path. In short, DropPath is a training-only regularization technique; during evaluation it becomes an identity mapping so the model can exploit all paths for accurate prediction.
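To see this behavior concretely, here is a minimal sketch of my own (assuming the drop_path function above and torch are in scope). In training mode whole samples are zeroed and the survivors are rescaled by 1/keep_prob; in eval mode the input passes through unchanged:

import torch

# toy batch of 4 "samples", each a 3x2 token map, filled with ones
x = torch.ones(4, 3, 2)

# training=True: roughly drop_prob of the samples are zeroed,
# the survivors are scaled by 1 / keep_prob (here 1 / 0.5 = 2.0)
out_train = drop_path(x, drop_prob=0.5, training=True)
print(out_train[:, 0, 0])            # e.g. tensor([2., 0., 2., 2.]) -- the pattern is random

# training=False: identity mapping, x is returned untouched
out_eval = drop_path(x, drop_prob=0.5, training=False)
print(torch.equal(out_eval, x))      # True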

1.2 Patch Embedding

class PatchEmbed(nn.Module):
    """
    2D Image to Patch Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
        super().__init__()
        img_size = (img_size, img_size)
        patch_size = (patch_size, patch_size)
        self.img_size = img_size
        self.patch_size = patch_size
        # grid of patches: 14 x 14
        self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
        # 196 patches in total
        self.num_patches = self.grid_size[0] * self.grid_size[1]
        # embed_dim varies between ViT variants
        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
        # use the norm_layer that was passed in, otherwise Identity (a no-op)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

    def forward(self, x):
        # read off the input image size
        B, C, H, W = x.shape
        # ViT requires a fixed input resolution; raise an error if H/W do not match the preset size
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."

        # flatten: [B, C, H, W] -> [B, C, HW]
        # transpose: [B, C, HW] -> [B, HW, C]
        # after the convolution, flatten from dim 2 onward, then swap dims 1 and 2 for convenience
        x = self.proj(x).flatten(2).transpose(1, 2)
        x = self.norm(x)
        return x

When we feed an image tensor of size (1, 3, 224, 224) into the Patch Embedding module, the output has shape [1, 196, 768]. In other words, the original 2-D image has been converted into a sequence of 1-D vectors. This conversion is necessary because the Transformer operates on 1-D token sequences; by modeling the image as a sequence we can then apply the full power of the Transformer to process and analyze it.
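As a quick check of that shape transformation, the following minimal sketch (my own illustration, assuming the PatchEmbed class above and torch are in scope) feeds a random image through the module:

import torch

patch_embed = PatchEmbed(img_size=224, patch_size=16, in_c=3, embed_dim=768)
img = torch.rand(1, 3, 224, 224)

tokens = patch_embed(img)
print(patch_embed.num_patches)   # 196 = (224 // 16) ** 2
print(tokens.shape)              # torch.Size([1, 196, 768])

# The strided 16x16 convolution is equivalent to cutting the image into
# non-overlapping 16x16 patches and linearly projecting each flattened patch.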

1.3 Multi-Head Attention

class Attention(nn.Module):
    def __init__(self,
                 dim,                 # input token dimension, 768 here
                 num_heads=8,         # 8 heads by default; 12 when instantiated for ViT-Base
                 qkv_bias=False,
                 qk_scale=None,
                 attn_drop_ratio=0.,
                 proj_drop_ratio=0.):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        # per-head dimension: the embedding is split evenly across the heads
        head_dim = dim // num_heads
        # scaling factor: q @ k^T is divided by sqrt(d_k)
        self.scale = qk_scale or head_dim ** -0.5
        # q, k and v are produced by a single fused linear layer
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop_ratio)
        # after concatenating the heads, project back with W_o (also a linear layer)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop_ratio)

    def forward(self, x):
        # [batch_size, num_patches + 1, total_embed_dim], i.e. (B, 197, 768)
        B, N, C = x.shape

        # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
        # reshape: -> [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head], i.e. (B, 197, 3, 12, 64)
        # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head], i.e. (3, B, 12, 197, 64)
        # C // self.num_heads: per-head dimension of q, k and v
        # nn.Linear accepts multi-dimensional input but only acts on the last dimension; permute() reorders dimensions
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        # slice out q, k and v
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
        # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
        # @ is matrix multiplication; for multi-dimensional q and k only the last two dimensions are multiplied.
        # Multiplying q and k of every head gives attention scores of shape (B, 12, 197, 197)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        # softmax over the last dimension, i.e. over each row
        attn = attn.softmax(dim=-1)
        # dropout after the softmax
        attn = self.attn_drop(attn)

        # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
        # reshape: -> [batch_size, num_patches + 1, total_embed_dim]
        # the attention weights are applied to v, giving (B, 12, 197, 64); swapping dims 1 and 2 and
        # reshaping concatenates the heads, so the final output shape is (B, 197, 768)
        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        # W_o projection, i.e. a fully connected layer
        x = self.proj(x)
        # dropout layer; a fully connected layer is typically followed by dropout
        x = self.proj_drop(x)
        return x

The core of the Vision Transformer (ViT) is the attention mechanism of the Transformer architecture. Specifically, ViT uses Multi-Head Self-Attention (MHSA), one of the most important building blocks of the Transformer.

In ViT, the image is split into a sequence of patches, each of which is flattened and projected into a high-dimensional space. These patches are then fed to the Transformer as a token sequence, and multi-head self-attention captures the relationships between them, allowing the model to build an effective representation of the image.

Attention lets the model focus on the most relevant parts of the input sequence, while multi-head self-attention uses several independent heads to attend to different feature subspaces simultaneously, increasing expressive power. Each head computes queries, keys and values, assigns weights according to query-key similarity, and produces a weighted sum of the values as its output.
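Per head, this is just scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V. The snippet below is an illustrative, self-contained sketch of that computation (not the Attention module itself), using ViT-Base shapes of 12 heads with 64 dimensions each:

import torch

B, num_heads, N, head_dim = 1, 12, 197, 64
q = torch.rand(B, num_heads, N, head_dim)
k = torch.rand(B, num_heads, N, head_dim)
v = torch.rand(B, num_heads, N, head_dim)

scale = head_dim ** -0.5
attn = (q @ k.transpose(-2, -1)) * scale   # (1, 12, 197, 197) attention scores
attn = attn.softmax(dim=-1)                # each row sums to 1
out = attn @ v                             # (1, 12, 197, 64) weighted values

# concatenate the heads back into a single 768-dim representation per token
out = out.transpose(1, 2).reshape(B, N, num_heads * head_dim)
print(out.shape)  # torch.Size([1, 197, 768])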

1.4 MLP

Besides the attention mechanism, the Transformer also contains a feed-forward network (FFN) and Layer Normalization. Together, these components are what allow ViT to perform so well on image recognition and other computer-vision tasks.

[Figure: MLP block]

class Mlp(nn.Module):
    """
    MLP as used in Vision Transformer, MLP-Mixer and related networks
    """
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x
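In ViT-Base this MLP expands each 768-dimensional token to 3072 dimensions (mlp_ratio = 4) and projects it back, so the token shape is preserved. A short usage sketch, assuming the Mlp class above and torch are in scope:

import torch

mlp = Mlp(in_features=768, hidden_features=768 * 4)  # Linear(768 -> 3072) + GELU + Linear(3072 -> 768)
tokens = torch.rand(1, 197, 768)
print(mlp(tokens).shape)  # torch.Size([1, 197, 768]) -- shape unchanged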

1.5 Encoder Block

We have briefly walked through the MLP implementation above; since it is straightforward, we won't dwell on it. Next we focus on the Encoder Block, the key unit from which the Transformer Encoder is built. For ViT-Base, stacking this Block 12 times yields the complete Encoder. The Block module looks like this:

[Figure: Encoder Block structure]

class Block(nn.Module):
    def __init__(self,
                 dim,
                 num_heads,
                 mlp_ratio=4.,          # expansion ratio of the first fully connected layer in the MLP
                 qkv_bias=False,
                 qk_scale=None,
                 drop_ratio=0.,         # dropout ratio of the final projection in multi-head attention (and the MLP)
                 attn_drop_ratio=0.,    # dropout ratio applied to the attention weights after the softmax
                 drop_path_ratio=0.,    # DropPath ratio in the block diagram; plain dropout would also work
                 act_layer=nn.GELU,
                 norm_layer=nn.LayerNorm):
        super(Block, self).__init__()
        # first LayerNorm
        self.norm1 = norm_layer(dim)
        # instantiate the Attention class described above
        self.attn = Attention(dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
                              attn_drop_ratio=attn_drop_ratio, proj_drop_ratio=drop_ratio)
        # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
        # use DropPath when the ratio is greater than 0, otherwise a no-op
        self.drop_path = DropPath(drop_path_ratio) if drop_path_ratio > 0. else nn.Identity()
        # second LayerNorm
        self.norm2 = norm_layer(dim)
        # the first fully connected layer expands the dimension four-fold
        mlp_hidden_dim = int(dim * mlp_ratio)
        # instantiate the MLP
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop_ratio)

    def forward(self, x):
        x = x + self.drop_path(self.attn(self.norm1(x)))
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x
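Since each Block maps [B, 197, 768] back to [B, 197, 768] (pre-norm residual connections around attention and the MLP), stacking blocks is trivial. A minimal sketch of my own, assuming the classes above, of how ViT-Base chains 12 of them:

import torch
import torch.nn as nn

encoder = nn.Sequential(*[
    Block(dim=768, num_heads=12, mlp_ratio=4.0) for _ in range(12)
])

tokens = torch.rand(1, 197, 768)
print(encoder(tokens).shape)  # torch.Size([1, 197, 768])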

1.6 VisionTransformer

With all the necessary modules in place, all that remains is to combine them into the VisionTransformer model. The code that ties these modules together is shown below:

class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_c=3, num_classes=1000,
                 embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.0, qkv_bias=True,
                 qk_scale=None, representation_size=None, distilled=False, drop_ratio=0.,
                 attn_drop_ratio=0., drop_path_ratio=0., embed_layer=PatchEmbed, norm_layer=None,
                 act_layer=None):
        """
        Args:
            img_size (int, tuple): input image size
            patch_size (int, tuple): patch size
            in_c (int): number of input channels
            num_classes (int): number of classes for classification head
            embed_dim (int): embedding dimension
            depth (int): depth of transformer, i.e. how many times the Block above is stacked
            num_heads (int): number of attention heads
            mlp_ratio (int): ratio of mlp hidden dim to embedding dim
            qkv_bias (bool): enable bias for qkv if True
            qk_scale (float): override default qk scale of head_dim ** -0.5 if set
            representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set.
                This is the width of the pre-logits fully connected layer in the MLP head. Defaults to None,
                in which case no pre-logits layer is built and the head is a single fully connected layer.
            distilled (bool): model includes a distillation token and head as in DeiT models.
                Only kept for DeiT compatibility; it can be ignored here.
            drop_ratio (float): dropout rate
            attn_drop_ratio (float): attention dropout rate
            drop_path_ratio (float): stochastic depth rate
            embed_layer (nn.Module): patch embedding layer
            norm_layer: (nn.Module): normalization layer
        """
        super(VisionTransformer, self).__init__()
        self.num_classes = num_classes
        self.num_features = self.embed_dim = embed_dim  # num_features for consistency with other models
        # distilled is False here, so self.num_tokens = 1
        self.num_tokens = 2 if distilled else 1
        # norm_layer defaults to None, so nn.LayerNorm is used; partial() freezes some arguments of a
        # function and returns a new function, here fixing eps=1e-6
        norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
        # act_layer defaults to GELU
        act_layer = act_layer or nn.GELU
        # build the patch embedding through embed_layer
        self.patch_embed = embed_layer(img_size=img_size, patch_size=patch_size, in_c=in_c, embed_dim=embed_dim)
        # total number of patches: 196
        num_patches = self.patch_embed.num_patches

        # class token of shape (1, 1, 768), zero-initialized and learned during training;
        # it is concatenated with the 196 patch tokens below, giving 197 tokens in total
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # not used here, since distilled defaults to False
        self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
        # learnable position embedding of shape (1, 197, 768)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
        # dropout applied right after adding the position embedding
        self.pos_drop = nn.Dropout(p=drop_ratio)

        # build an arithmetic progression of length depth from drop_path_ratio, so each Encoder Block
        # gets a different drop-path ratio; defaults to 0 but can be changed via the argument
        dpr = [x.item() for x in torch.linspace(0, drop_path_ratio, depth)]  # stochastic depth decay rule
        # build the blocks: create depth (i.e. 12) Blocks in a list comprehension and pack them
        # into a single nn.Sequential assigned to self.blocks
        self.blocks = nn.Sequential(*[
            Block(dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
                  drop_ratio=drop_ratio, attn_drop_ratio=attn_drop_ratio, drop_path_ratio=dpr[i],
                  norm_layer=norm_layer, act_layer=act_layer)
            for i in range(depth)
        ])
        # final normalization layer
        self.norm = norm_layer(embed_dim)

        # ignore distilled; only representation_size matters here. If it is given, a pre-logits layer is
        # built inside the head; otherwise self.has_logits = False and self.pre_logits = nn.Identity(),
        # i.e. there is no pre-logits layer at all
        if representation_size and not distilled:
            self.has_logits = True
            self.num_features = representation_size
            self.pre_logits = nn.Sequential(OrderedDict([
                ("fc", nn.Linear(embed_dim, representation_size)),
                ("act", nn.Tanh())
            ]))
        else:
            self.has_logits = False
            self.pre_logits = nn.Identity()

        # the final fully connected layer of the network; its output size is the number of classes,
        # provided num_classes > 0
        self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
        self.head_dist = None
        if distilled:
            self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()

        # weight initialization
        nn.init.trunc_normal_(self.pos_embed, std=0.02)
        if self.dist_token is not None:
            nn.init.trunc_normal_(self.dist_token, std=0.02)
        nn.init.trunc_normal_(self.cls_token, std=0.02)
        self.apply(_init_vit_weights)

    def forward_features(self, x):
        # [B, C, H, W] -> [B, num_patches, embed_dim]
        # patch embedding first
        x = self.patch_embed(x)  # [B, 196, 768]
        # [1, 1, 768] -> [B, 1, 768]
        # replicate cls_token along the batch dimension
        cls_token = self.cls_token.expand(x.shape[0], -1, -1)
        # self.dist_token is None by default
        if self.dist_token is None:
            # concatenate along dim=1, output shape: [B, 197, 768]
            x = torch.cat((cls_token, x), dim=1)  # [B, 197, 768]
        else:
            x = torch.cat((cls_token, self.dist_token.expand(x.shape[0], -1, -1), x), dim=1)

        # dropout after adding the position embedding
        x = self.pos_drop(x + self.pos_embed)
        # run through the stacked blocks built above
        x = self.blocks(x)
        # final LayerNorm
        x = self.norm(x)
        # extract the output corresponding to cls_token, i.e. the class vector
        if self.dist_token is None:
            # take index 0 along the token dimension for every sample in the batch
            return self.pre_logits(x[:, 0])
        else:
            return x[:, 0], x[:, 1]

    def forward(self, x):
        # forward_features() returns the class-token features, shape (B, 768)
        x = self.forward_features(x)
        # self.head_dist is None by default, so the else branch runs
        if self.head_dist is not None:
            x, x_dist = self.head(x[0]), self.head_dist(x[1])
            if self.training and not torch.jit.is_scripting():
                # during inference, return the average of both classifier predictions
                return x, x_dist
            else:
                return (x + x_dist) / 2
        else:
            # output size is (B, 1000) for 1000-way classification
            x = self.head(x)
        return x
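As a usage sketch (assuming the class above, together with the _init_vit_weights helper defined in the complete listing in Section 3), forward_features can also be called on its own when only the CLS-token features are needed rather than class logits:

import torch

model = VisionTransformer(img_size=224, patch_size=16, embed_dim=768,
                          depth=12, num_heads=12, num_classes=1000)
img = torch.rand(1, 3, 224, 224)

features = model.forward_features(img)  # CLS-token representation
print(features.shape)                   # torch.Size([1, 768])

logits = model(img)                     # full forward pass through the classification head
print(logits.shape)                     # torch.Size([1, 1000])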

2. Building the ViT Models

Although we have now analyzed and assembled all of the VisionTransformer code, the model is easier to use and call if we wrap it in a few factory functions. The wrapped code is shown below:

def vit_base_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth
    """
    # num_classes: number of classes; the original model is pre-trained on ImageNet-21k, hence 21843.
    # has_logits: whether to build the pre-logits block; pre-training uses it, but it can be omitted,
    # in which case only the final fully connected layer remains.
    #
    # Create the model:
    # img_size: input image size
    # patch_size: patch size
    # embed_dim: dimension of each patch token, i.e. the vector length fed to multi-head attention
    # depth: number of stacked Block modules
    # num_heads: number of attention heads
    # representation_size: width of the pre-logits layer
    # num_classes: number of classes
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=768 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_base_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=32,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=768 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_large_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=1024,
                              depth=24,
                              num_heads=16,
                              representation_size=1024 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_large_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=32,
                              embed_dim=1024,
                              depth=24,
                              num_heads=16,
                              representation_size=1024 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_huge_patch14_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    NOTE: converted weights not currently available, too large for github release hosting.
    """
    model = VisionTransformer(img_size=224,
                              patch_size=14,
                              embed_dim=1280,
                              depth=32,
                              num_heads=16,
                              representation_size=1280 if has_logits else None,
                              num_classes=num_classes)
    return model
# quick model test
if __name__ == "__main__":
    model = vit_base_patch16_224_in21k(num_classes=1000, has_logits=False)
    data = torch.rand(1, 3, 224, 224)
    out = model(data)
    # print(out.shape)

As the code above shows, there are five model variants in total (ViT-B/16, ViT-B/32, ViT-L/16, ViT-L/32, ViT-H/14), increasing in size from top to bottom. The configuration parameters were explained above for vit_base_patch16_224_in21k; the other variants differ only in patch_size, embed_dim, depth and num_heads.

3. Complete Code

"""original code from rwightman:https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py"""from functools import partialfrom collections import OrderedDict
import torchimport torch.nn as nn

def drop_path(x, drop_prob: float = 0., training: bool = False):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output

class DropPath(nn.Module):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    """
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)

class PatchEmbed(nn.Module):
    """
    2D Image to Patch Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
        super().__init__()
        # (224, 224)
        img_size = (img_size, img_size)
        # (16, 16)
        patch_size = (patch_size, patch_size)
        self.img_size = img_size
        self.patch_size = patch_size
        # (14, 14)
        self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
        # 196
        self.num_patches = self.grid_size[0] * self.grid_size[1]

        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

    def forward(self, x):
        B, C, H, W = x.shape
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."

        # flatten: [B, C, H, W] -> [B, C, HW]
        # transpose: [B, C, HW] -> [B, HW, C]
        x = self.proj(x).flatten(2).transpose(1, 2)
        x = self.norm(x)
        return x

class Attention(nn.Module):
    def __init__(self,
                 dim,            # input token dimension
                 num_heads=8,
                 qkv_bias=False,
                 qk_scale=None,
                 attn_drop_ratio=0.,
                 proj_drop_ratio=0.):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop_ratio)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop_ratio)

    def forward(self, x):
        # [batch_size, num_patches + 1, total_embed_dim]
        B, N, C = x.shape

        # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
        # reshape: -> [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head]
        # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
        # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
        # reshape: -> [batch_size, num_patches + 1, total_embed_dim]
        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x

class Mlp(nn.Module):
    """
    MLP as used in Vision Transformer, MLP-Mixer and related networks
    """
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x

class Block(nn.Module):
    def __init__(self,
                 dim,
                 num_heads,
                 mlp_ratio=4.,
                 qkv_bias=False,
                 qk_scale=None,
                 drop_ratio=0.,
                 attn_drop_ratio=0.,
                 drop_path_ratio=0.,
                 act_layer=nn.GELU,
                 norm_layer=nn.LayerNorm):
        super(Block, self).__init__()
        self.norm1 = norm_layer(dim)
        self.attn = Attention(dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
                              attn_drop_ratio=attn_drop_ratio, proj_drop_ratio=drop_ratio)
        # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
        self.drop_path = DropPath(drop_path_ratio) if drop_path_ratio > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop_ratio)

    def forward(self, x):
        x = x + self.drop_path(self.attn(self.norm1(x)))
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x

class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_c=3, num_classes=1000,
                 embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.0, qkv_bias=True,
                 qk_scale=None, representation_size=None, distilled=False, drop_ratio=0.,
                 attn_drop_ratio=0., drop_path_ratio=0., embed_layer=PatchEmbed, norm_layer=None,
                 act_layer=None):
        """
        Args:
            img_size (int, tuple): input image size
            patch_size (int, tuple): patch size
            in_c (int): number of input channels
            num_classes (int): number of classes for classification head
            embed_dim (int): embedding dimension
            depth (int): depth of transformer
            num_heads (int): number of attention heads
            mlp_ratio (int): ratio of mlp hidden dim to embedding dim
            qkv_bias (bool): enable bias for qkv if True
            qk_scale (float): override default qk scale of head_dim ** -0.5 if set
            representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
            distilled (bool): model includes a distillation token and head as in DeiT models
            drop_ratio (float): dropout rate
            attn_drop_ratio (float): attention dropout rate
            drop_path_ratio (float): stochastic depth rate
            embed_layer (nn.Module): patch embedding layer
            norm_layer: (nn.Module): normalization layer
        """
        super(VisionTransformer, self).__init__()
        self.num_classes = num_classes
        self.num_features = self.embed_dim = embed_dim  # num_features for consistency with other models
        self.num_tokens = 2 if distilled else 1
        norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
        act_layer = act_layer or nn.GELU

        self.patch_embed = embed_layer(img_size=img_size, patch_size=patch_size, in_c=in_c, embed_dim=embed_dim)
        num_patches = self.patch_embed.num_patches

        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
        self.pos_drop = nn.Dropout(p=drop_ratio)

        dpr = [x.item() for x in torch.linspace(0, drop_path_ratio, depth)]  # stochastic depth decay rule
        self.blocks = nn.Sequential(*[
            Block(dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
                  drop_ratio=drop_ratio, attn_drop_ratio=attn_drop_ratio, drop_path_ratio=dpr[i],
                  norm_layer=norm_layer, act_layer=act_layer)
            for i in range(depth)
        ])
        self.norm = norm_layer(embed_dim)

        # Representation layer
        if representation_size and not distilled:
            self.has_logits = True
            self.num_features = representation_size
            self.pre_logits = nn.Sequential(OrderedDict([
                ("fc", nn.Linear(embed_dim, representation_size)),
                ("act", nn.Tanh())
            ]))
        else:
            self.has_logits = False
            self.pre_logits = nn.Identity()

        # Classifier head(s)
        self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
        self.head_dist = None
        if distilled:
            self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()

        # Weight init
        nn.init.trunc_normal_(self.pos_embed, std=0.02)
        if self.dist_token is not None:
            nn.init.trunc_normal_(self.dist_token, std=0.02)
        nn.init.trunc_normal_(self.cls_token, std=0.02)
        self.apply(_init_vit_weights)

    def forward_features(self, x):
        # [B, C, H, W] -> [B, num_patches, embed_dim]
        x = self.patch_embed(x)  # [B, 196, 768]
        # [1, 1, 768] -> [B, 1, 768]
        cls_token = self.cls_token.expand(x.shape[0], -1, -1)
        if self.dist_token is None:
            x = torch.cat((cls_token, x), dim=1)  # [B, 197, 768]
        else:
            x = torch.cat((cls_token, self.dist_token.expand(x.shape[0], -1, -1), x), dim=1)

        x = self.pos_drop(x + self.pos_embed)
        x = self.blocks(x)
        x = self.norm(x)
        if self.dist_token is None:
            return self.pre_logits(x[:, 0])
        else:
            return x[:, 0], x[:, 1]
    def forward(self, x):
        x = self.forward_features(x)
        if self.head_dist is not None:
            x, x_dist = self.head(x[0]), self.head_dist(x[1])
            if self.training and not torch.jit.is_scripting():
                # during inference, return the average of both classifier predictions
                return x, x_dist
            else:
                return (x + x_dist) / 2
        else:
            x = self.head(x)
        return x

def _init_vit_weights(m):
    """
    ViT weight initialization
    :param m: module
    """
    if isinstance(m, nn.Linear):
        nn.init.trunc_normal_(m.weight, std=.01)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode="fan_out")
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.LayerNorm):
        nn.init.zeros_(m.bias)
        nn.init.ones_(m.weight)

def vit_base_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=768 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_base_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=32,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=768 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_large_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=1024,
                              depth=24,
                              num_heads=16,
                              representation_size=1024 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_large_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=32,
                              embed_dim=1024,
                              depth=24,
                              num_heads=16,
                              representation_size=1024 if has_logits else None,
                              num_classes=num_classes)
    return model

def vit_huge_patch14_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    NOTE: converted weights not currently available, too large for github release hosting.
    """
    model = VisionTransformer(img_size=224,
                              patch_size=14,
                              embed_dim=1280,
                              depth=32,
                              num_heads=16,
                              representation_size=1280 if has_logits else None,
                              num_classes=num_classes)
    return model

if __name__ == "__main__":    model = vit_base_patch16_224_in21k(num_classes=5, has_logits=False)    data = torch.rand(1, 3, 224, 224)    out = model(data)    # print(out.shape)

This completes our full walkthrough of the ViT code. Comments and suggestions are very welcome.
