【Up-Sampling】"CARAFE: Content-Aware ReAssembly of FEatures", with a code implementation

 

The link below is, of the articles on CARAFE I have read, one of the more insightful ones. Read it first:

https://blog.csdn.net/bryant_meng/article/details/104341591?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522158751879219724848331608%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fall.%2522%257D&request_id=158751879219724848331608&biz_id=0&utm_source=distribute.pc_search_result.none-task-blog-2~all~first_rank_v2~rank_v25-5

(I forget which article the CARAFE code below was taken from; I manually changed some symbols to match the names used in the paper.)

Before reading the code below, first understand how convolution can be implemented by hand; see《pytorch手动实现滑动窗口操作,论fold和unfold函数的使用》, which covers both the PyTorch and TensorFlow approaches. The core idea is extracting sliding windows.

The CARAFE code at the bottom has two hard-to-understand points. The first is the unfold function, whose job is to extract sliding windows; the fold function then reshapes the result of the sliding-window multiplication back into the output shape.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2020/4/19 14:15
# @Author  : ZZL
# @File    : fold.py

import torch
# Convolution is equivalent to Unfold + Matrix Multiplication + Fold (or view to output shape)

inp = torch.randn(1, 3, 10, 12)
w = torch.randn(2, 3, 4, 5)
# unfold extracts every 4x5 sliding window and flattens it:
# (1, 3, 10, 12) -> (1, 3*4*5, (10-4+1)*(12-5+1)) = (1, 60, 56)
inp_unf = torch.nn.functional.unfold(inp, (4, 5))
# multiply each window by the flattened kernels:
# (1, 56, 60) @ (60, 2) -> (1, 56, 2), then transpose back to (1, 2, 56)
out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
# fold with a 1x1 kernel simply reshapes (1, 2, 56) to the output shape (1, 2, 7, 8)
out = torch.nn.functional.fold(out_unf, output_size=(7, 8), kernel_size=(1, 1))

# or equivalently (and avoiding a copy):
# out = out_unf.view(1, 2, 7, 8)
print((torch.nn.functional.conv2d(inp, w) - out).abs().max())
# tensor(1.9073e-06)
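Note that the CARAFE code further down uses the Tensor method `unfold(dimension, size, step)` rather than `F.unfold`. Both extract the same sliding windows, just with different layouts; the following sketch (shapes chosen arbitrarily) checks the equivalence:

```python
import torch
import torch.nn.functional as F

N, C, H, W, Kup = 2, 3, 6, 7, 5
x = torch.randn(N, C, H, W)
p = Kup // 2

# Route 1: zero-pad + two Tensor.unfold calls (the style used in the CARAFE forward pass)
a = F.pad(x, (p, p, p, p))
a = a.unfold(2, Kup, 1).unfold(3, Kup, 1)   # (N, C, H, W, Kup, Kup)
a = a.reshape(N, C, H, W, Kup * Kup)

# Route 2: F.unfold, which flattens windows channel-first into (N, C*Kup^2, H*W)
b = F.unfold(x, Kup, padding=p)
b = b.reshape(N, C, Kup * Kup, H, W).permute(0, 1, 3, 4, 2)

print(torch.allclose(a, b))  # True
```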

The second is pixel_shuffle. I experimented with it, and this function is just a special reshape operation:

pixel_shuffle code

import torch
import torch.nn as nn
import torch.nn.functional as F

kernel_tensor = torch.reshape(torch.arange(0, 36*4), [1,3*3*2*2, 2, 2])  # torch.Size([1, 36, 2, 2])  3*3*2*2
kernel_tensor2 = F.pixel_shuffle(kernel_tensor, 2)  # torch.Size([1, 9, 4, 4])

# kernel_tensor[:, :,0,0].reshape(9,2,2)[:,0,0] ==  kernel_tensor2[:,0:9, 0,0]
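In other words, for an upscale factor r, pixel_shuffle implements the index mapping output[n, c, h*r + i, w*r + j] = input[n, c*r² + i*r + j, h, w]. A brute-force check of that mapping (shapes are arbitrary):

```python
import torch
import torch.nn.functional as F

r, C, H, W = 2, 9, 2, 2
x = torch.arange(C * r * r * H * W, dtype=torch.float32).reshape(1, C * r * r, H, W)
y = F.pixel_shuffle(x, r)  # (1, 36, 2, 2) -> (1, 9, 4, 4)

# verify output[n, c, h*r+i, w*r+j] == input[n, c*r^2 + i*r + j, h, w]
ok = all(
    y[0, c, h * r + i, w * r + j] == x[0, c * r * r + i * r + j, h, w]
    for c in range(C) for h in range(H) for w in range(W)
    for i in range(r) for j in range(r)
)
print(ok)  # True
```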

CARAFE code

Note: the code below uses pixel_shuffle twice and the Tensor.unfold method four times (twice on the predicted kernels, twice on the input features).

# -*- coding: utf-8 -*-
"""
Created on Sun Apr 19 12:35:40 2020

@author: ZZL
"""

import torch
import torch.nn as nn
import torch.nn.functional as F

class CARAFE(nn.Module):
    def __init__(self, inC, outC, Kencoder=3, delta=2, Kup=5, Cm=64): # Kup = Kencoder + 2
        super(CARAFE, self).__init__()
        self.Kencoder = Kencoder
        self.delta = delta
        self.Kup = Kup
        self.down = nn.Conv2d(in_channels=inC, out_channels=Cm, kernel_size=1)  # channel compressor: (N, inC, H, W) -> (N, Cm, H, W)
        self.encoder = nn.Conv2d(Cm, self.delta ** 2 * self.Kup ** 2,
                                 self.Kencoder, 1, self.Kencoder // 2)
        self.out = nn.Conv2d(inC, outC, 1)

    def forward(self, in_tensor):
        N, C, H, W = in_tensor.size()

        # N,C,H,W -> N,C,delta*H,delta*W
        # kernel prediction module
        kernel_tensor = self.down(in_tensor)  # (N, Cm, H, W)
        kernel_tensor = self.encoder(kernel_tensor)  # (N, delta^2 * Kup^2, H, W)
        # the step below is the special reshape demonstrated in the pixel_shuffle snippet above
        kernel_tensor = F.pixel_shuffle(kernel_tensor, self.delta)  # (N, delta^2 * Kup^2, H, W) -> (N, Kup^2, delta*H, delta*W)
        kernel_tensor = F.softmax(kernel_tensor, dim=1)  # (N, Kup^2, delta*H, delta*W)
        kernel_tensor = kernel_tensor.unfold(2, self.delta, step=self.delta) # (N, Kup^2, H, W*delta, delta)
        kernel_tensor = kernel_tensor.unfold(3, self.delta, step=self.delta) # (N, Kup^2, H, W, delta, delta)
        kernel_tensor = kernel_tensor.reshape(N, self.Kup ** 2, H, W, self.delta ** 2) # (N, Kup^2, H, W, delta^2)
        kernel_tensor = kernel_tensor.permute(0, 2, 3, 1, 4)  # (N, H, W, Kup^2, delta^2)

        # content-aware reassembly module
        # tensor.unfold: dim, size, step
        in_tensor = F.pad(in_tensor, pad=(self.Kup // 2, self.Kup // 2,
                                          self.Kup // 2, self.Kup // 2),
                          mode='constant', value=0) # (N, C, H+Kup//2+Kup//2, W+Kup//2+Kup//2)
        in_tensor = in_tensor.unfold(dimension=2, size=self.Kup, step=1) # (N, C, H, W+Kup//2+Kup//2, Kup)
        in_tensor = in_tensor.unfold(3, self.Kup, step=1) # (N, C, H, W, Kup, Kup)
        in_tensor = in_tensor.reshape(N, C, H, W, -1) # (N, C, H, W, Kup^2)
        in_tensor = in_tensor.permute(0, 2, 3, 1, 4)  # (N, H, W, C, Kup^2)

        out_tensor = torch.matmul(in_tensor, kernel_tensor)  # (N, H, W, C, delta^2)
        out_tensor = out_tensor.reshape(N, H, W, -1)
        out_tensor = out_tensor.permute(0, 3, 1, 2)
        out_tensor = F.pixel_shuffle(out_tensor, self.delta)  # (N, C, delta*H, delta*W), e.g. (4, 160, 20, 20)
        out_tensor = self.out(out_tensor)
        return out_tensor


if __name__ == '__main__':
    data = torch.rand(4, 160, 10, 10)
    carafe = CARAFE(160, 10)
    print(carafe(data).size())
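One way to sanity-check the reassembly logic (not from the paper, just a property it implies): if every predicted kernel were uniform, each output pixel would simply be the average of its Kup×Kup source neighborhood, so the whole operation should reduce to nearest-neighbor upsampling of a box-filtered input. A standalone sketch of that check, reusing the same unfold/matmul/pixel_shuffle steps as the forward pass:

```python
import torch
import torch.nn.functional as F

N, C, H, W, Kup, delta = 1, 4, 6, 6, 5, 2
x = torch.randn(N, C, H, W)
p = Kup // 2

# gather KupxKup neighborhoods exactly as in the forward pass
nb = F.pad(x, (p, p, p, p)).unfold(2, Kup, 1).unfold(3, Kup, 1)
nb = nb.reshape(N, C, H, W, Kup * Kup).permute(0, 2, 3, 1, 4)   # (N, H, W, C, Kup^2)

# uniform reassembly kernels in the same layout: (N, H, W, Kup^2, delta^2)
kern = torch.full((N, H, W, Kup * Kup, delta * delta), 1.0 / (Kup * Kup))

out = torch.matmul(nb, kern).reshape(N, H, W, C * delta * delta)
out = F.pixel_shuffle(out.permute(0, 3, 1, 2), delta)           # (N, C, delta*H, delta*W)

# reference: KupxKup box filter (with the same zero padding) + nearest upsampling
blur = F.avg_pool2d(F.pad(x, (p, p, p, p)), Kup, stride=1)      # same spatial size as x
ref = F.interpolate(blur, scale_factor=delta, mode='nearest')
print(torch.allclose(out, ref, atol=1e-5))  # True
```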

 
