EDT: An Efficient Transformer Baseline for Low-Level Vision Tasks

EDT:On Efficient Transformer-Based Image Pre-training for Low-Level Vision

Paper: On Efficient Transformer-Based Image Pre-training for Low-Level Vision

Code: fenglinglwb/EDT: On Efficient Transformer-Based Image Pre-training for Low-Level Vision

Current Problems

Pre-training has produced many state-of-the-art results in high-level computer vision, but few attempts have been made to study how pre-training behaves in low-level tasks.

Main Contributions

  1. Proposes an efficient and general Transformer framework for low-level vision, with an improved window attention that partitions the feature map along height and width separately and computes attention within each set of windows.

  2. Presents the first in-depth study of image pre-training for low-level vision, revealing how pre-training affects a model's internal representations and offering insights into how to pre-train effectively.

Network Architecture

[Figure: overall EDT network architecture]

Shifted Crossed Local Attention

The attention is an improvement built on CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows.
$$
\begin{aligned}
&\mathbf{X}=[\mathbf{X}_{1},\mathbf{X}_{2}],\ \mathrm{where}\ \mathbf{X}_{1},\mathbf{X}_{2}\in\mathbb{R}^{(H\times W)\times C/2}, \\
&\mathbf{X}_{1}^{\prime}=\mathrm{H\text{-}MSA}(\mathbf{X}_{1}), \\
&\mathbf{X}_{2}^{\prime}=\mathrm{V\text{-}MSA}(\mathbf{X}_{2}), \\
&\mathrm{(S)CL\text{-}MSA}(\mathbf{X})=\mathrm{Proj}([\mathbf{X}_{1}^{\prime},\mathbf{X}_{2}^{\prime}])
\end{aligned}
$$
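The split-and-merge routing in the equations above can be sketched as follows. This is a minimal sketch only: the `h_msa`/`v_msa` placeholders stand in for the real horizontal and vertical window attentions (here hypothetically replaced by identity modules) so that only the channel routing is shown.

```python
import torch
import torch.nn as nn

class CLMSASketch(nn.Module):
    """Channel routing of (S)CL-MSA: split C into two halves, run one half
    through horizontal-window attention and the other through vertical-window
    attention, then merge with a linear projection."""
    def __init__(self, dim):
        super().__init__()
        self.h_msa = nn.Identity()   # placeholder for H-MSA
        self.v_msa = nn.Identity()   # placeholder for V-MSA
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):            # x: (B, H, W, C)
        x1, x2 = x.chunk(2, dim=-1)  # each: (B, H, W, C/2)
        x1 = self.h_msa(x1)
        x2 = self.v_msa(x2)
        return self.proj(torch.cat([x1, x2], dim=-1))  # (B, H, W, C)

out = CLMSASketch(32)(torch.randn(2, 8, 8, 32))
```

Because the two attentions see disjoint channel halves, horizontal and vertical context are aggregated in parallel at no extra cost over a single full-channel window attention.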
[Figure: illustration of the shifted crossed local attention windows]

    def calculate_mask(self, x_size, index):
        # calculate the attention mask for the shifted cross local attention (SCL-MSA)
        if self.shift_size is None:
            return None

        H, W = x_size
        img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
        h_window_size, w_window_size = self.window_size[0], self.window_size[1]
        h_shift_size, w_shift_size = self.shift_size[0], self.shift_size[1]
        if index == 1:
            h_window_size, w_window_size = self.window_size[1], self.window_size[0]
            h_shift_size, w_shift_size = self.shift_size[1], self.shift_size[0]

        h_slices = (slice(0, -h_window_size),
                    slice(-h_window_size, -h_shift_size),
                    slice(-h_shift_size, None))
        w_slices = (slice(0, -w_window_size),
                    slice(-w_window_size, -w_shift_size),
                    slice(-w_shift_size, None))
        cnt = 0
        for h in h_slices:
            for w in w_slices:
                img_mask[:, h, w, :] = cnt
                cnt += 1

        mask_windows = window_partition(img_mask, self.window_size, index)  # nW, h_window_size, w_window_size, 1
        mask_windows = mask_windows.view(-1, h_window_size * w_window_size)
        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))

        return attn_mask

    # inside forward: x has shape (B, H, W, C); qkv triples the channel dim
    x = self.qkv(x)  # B, H, W, 3C
    x = x.view(B, H, W, 3, C).permute(3, 0, 1, 2, 4).contiguous()  # 3, B, H, W, C
    x_h = x[..., :C//2]  # first half of channels -> horizontal attention
    x_v = x[..., C//2:]  # second half of channels -> vertical attention

    if self.shift_size:
        x_h = torch.roll(x_h, shifts=(-self.shift_size[0],-self.shift_size[1]), dims=(2,3))
        x_v = torch.roll(x_v, shifts=(-self.shift_size[1],-self.shift_size[0]), dims=(2,3))

    if self.input_resolution == x_size:
        attn_windows_h = self.attns[0](x_h, mask=self.attn_mask_h)
        attn_windows_v = self.attns[1](x_v, mask=self.attn_mask_v)
    else:
        mask_h = self.calculate_mask(x_size, index=0).to(x_h.device) if self.shift_size else None
        mask_v = self.calculate_mask(x_size, index=1).to(x_v.device) if self.shift_size else None
        attn_windows_h = self.attns[0](x_h, mask=mask_h)
        attn_windows_v = self.attns[1](x_v, mask=mask_v)

    if self.shift_size:
        attn_windows_h = torch.roll(attn_windows_h, shifts=(self.shift_size[0],self.shift_size[1]), dims=(1,2))
        attn_windows_v = torch.roll(attn_windows_v, shifts=(self.shift_size[1],self.shift_size[0]), dims=(1,2))

    attn_windows = torch.cat([attn_windows_h, attn_windows_v], dim=-1)
    attn_windows = self.proj(attn_windows) #B H W C
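To see what `calculate_mask` produces, the mask construction can be reproduced standalone. The sketch below is a simplified toy version (hypothetical helper `toy_attn_mask`, square windows only, an 8x8 input with window size 4 and shift 2, no class state or `index` argument), following the same slice-label-partition-subtract steps as the method above.

```python
import torch

def toy_attn_mask(H=8, W=8, window=4, shift=2):
    """Swin-style shifted-window mask: label the shifted regions, partition
    into windows, then forbid attention across region boundaries with -100."""
    img = torch.zeros(1, H, W, 1)
    cnt = 0
    for hs in (slice(0, -window), slice(-window, -shift), slice(-shift, None)):
        for ws in (slice(0, -window), slice(-window, -shift), slice(-shift, None)):
            img[:, hs, ws, :] = cnt
            cnt += 1
    # partition into non-overlapping window x window tiles: (nW, window*window)
    tiles = img.view(1, H // window, window, W // window, window, 1)
    tiles = tiles.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window)
    # pairs of positions with different region labels get -100 (masked out)
    mask = tiles.unsqueeze(1) - tiles.unsqueeze(2)
    return mask.masked_fill(mask != 0, -100.0).masked_fill(mask == 0, 0.0)

mask = toy_attn_mask()  # (4 windows, 16 positions, 16 positions)
```

The mask is added to the attention logits before the softmax, so the -100 entries effectively zero out attention between tokens that only became neighbors because of the cyclic shift.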

Anti-Blocking FFN

Grouped convolution splits the input and output channels into groups and convolves within each group, which speeds up computation and can improve the model's representational power to some extent. In the code below, setting `groups` equal to the channel count makes it a depthwise convolution: one 5x5 filter per channel.

self.dwconv = nn.Conv2d(hidden_features, hidden_features, 5, 1, 5//2, groups=hidden_features)
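As a quick check of the savings, the parameter counts of a standard 5x5 convolution and the depthwise version can be compared directly (a minimal sketch assuming 64 hidden channels):

```python
import torch.nn as nn

hidden = 64
standard = nn.Conv2d(hidden, hidden, 5, 1, 2)                  # full 5x5 conv
depthwise = nn.Conv2d(hidden, hidden, 5, 1, 2, groups=hidden)  # one filter per channel

n_std = sum(p.numel() for p in standard.parameters())   # 64*64*5*5 + 64 = 102464
n_dw = sum(p.numel() for p in depthwise.parameters())   # 64*1*5*5 + 64 = 1664
```

The depthwise variant uses roughly 1/64 of the weights, which is why it is cheap enough to insert into every FFN.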

Convolution Block

[Figure: structure of the convolution block]
