A deep-learning code library for beginners: use any of 30+ attention mechanisms with a single line of code.


Hello everyone, I'm Xiaoma 🚀🚀🚀. I recently created a deep-learning code library; everyone is welcome to come and play with it! The repository is at https://github.com/xmu-xiaoma666/External-Attention-pytorch, and it currently implements nearly 40 common deep-learning algorithms!

For beginners (like me): While reading papers I often find that the core idea is very simple and the core code may be only a dozen lines. But when I open the source code the authors release, the proposed module is embedded in a classification, detection, or segmentation framework, which makes the code fairly redundant. For someone unfamiliar with that task framework, it is hard to locate the core code, and that makes the paper and the network design harder to understand.

For intermediate users (like you): If you regard basic units such as Conv, FC, and RNN as small Lego bricks, and structures such as Transformer and ResNet as Lego castles that have already been built, then the modules in this project are Lego components that each carry complete semantic meaning. They spare researchers from reinventing the wheel, so they only need to think about how to use these "Lego components" to build ever more colorful creations (a toy sketch of such assembly follows this introduction).

For experts (maybe like you): My abilities are limited; if it's not to your taste, please go easy on the criticism!!!

For everyone: The aim of this project is a code library that a deep-learning beginner can understand and that also serves the research and industrial communities. The project's motto is to ensure, from the code side, that 🚀 no paper in the world is hard to read 🚀.

(Researchers are also warmly invited to contribute the core code of their own work to this project and help the research community move forward; code authors will be credited in the README.)
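As a small taste of the "Lego" idea, here is a toy sketch (my own illustration, not code from the repository) that drops the repository's SEAttention module into a simple residual convolution block; the SEResBlock wrapper and its parameters are hypothetical:

from model.attention.SEAttention import SEAttention
import torch
from torch import nn

class SEResBlock(nn.Module):
    # a hypothetical residual block: 3x3 conv -> SE channel attention -> skip connection
    def __init__(self, channel=512):
        super().__init__()
        self.conv = nn.Conv2d(channel, channel, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channel)
        self.se = SEAttention(channel=channel, reduction=8)

    def forward(self, x):
        y = torch.relu(self.bn(self.conv(x)))
        y = self.se(y)             # reweight channels
        return torch.relu(x + y)   # residual connection

block = SEResBlock(channel=512)
out = block(torch.randn(2, 512, 7, 7))
print(out.shape)  # torch.Size([2, 512, 7, 7])

Any of the channel-attention modules below that keep the (B, C, H, W) interface could be swapped in for SEAttention here.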

Attention Series

1. External Attention Usage

1.1. Paper

"Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks"

1.2. Overview
1.3. Usage Code
from model.attention.ExternalAttention import ExternalAttention
import torch

input = torch.randn(50, 49, 512)
ea = ExternalAttention(d_model=512, S=8)
output = ea(input)
print(output.shape)
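For intuition about what ExternalAttention computes: it replaces the quadratic token-to-token attention map with two small linear layers acting as learnable external memories shared across the dataset, plus a double normalization. A minimal single-head sketch of that idea, assuming the formulation from the paper (illustrative, not the repository's exact file):

import torch
from torch import nn

class ExternalAttentionSketch(nn.Module):
    def __init__(self, d_model=512, S=8):
        super().__init__()
        self.mk = nn.Linear(d_model, S, bias=False)  # external key memory M_k
        self.mv = nn.Linear(S, d_model, bias=False)  # external value memory M_v

    def forward(self, x):                 # x: (bs, n, d_model)
        attn = self.mk(x)                 # (bs, n, S)
        attn = attn.softmax(dim=1)        # normalize over the token dimension
        attn = attn / attn.sum(dim=2, keepdim=True)  # second (l1) normalization over S
        return self.mv(attn)              # (bs, n, d_model)

x = torch.randn(50, 49, 512)
print(ExternalAttentionSketch()(x).shape)  # torch.Size([50, 49, 512])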

2. Self Attention Usage

2.1. Paper

"Attention Is All You Need"

2.2. Overview
2.3. Usage Code
from model.attention.SelfAttention import ScaledDotProductAttention
import torch

input = torch.randn(50, 49, 512)
sa = ScaledDotProductAttention(d_model=512, d_k=512, d_v=512, h=8)
output = sa(input, input, input)
print(output.shape)
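Under the hood this is the standard scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; the repository's module adds the multi-head linear projections around it. A minimal single-head sketch:

import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (bs, n, d_k); one head, no projections
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (bs, n, n)
    return scores.softmax(dim=-1) @ v

x = torch.randn(50, 49, 512)
print(scaled_dot_product_attention(x, x, x).shape)  # torch.Size([50, 49, 512])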

3. Simplified Self Attention Usage

3.1. Paper

None

3.2. Overview
3.3. Usage Code
from model.attention.SimplifiedSelfAttention import SimplifiedScaledDotProductAttention
import torch

input = torch.randn(50, 49, 512)
ssa = SimplifiedScaledDotProductAttention(d_model=512, h=8)
output = ssa(input, input, input)
print(output.shape)

4. Squeeze-and-Excitation Attention Usage

4.1. Paper

"Squeeze-and-Excitation Networks"

4.2. Overview
4.3. Usage Code
from model.attention.SEAttention import SEAttention
import torch

input = torch.randn(50, 512, 7, 7)
se = SEAttention(channel=512, reduction=8)
output = se(input)
print(output.shape)
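Conceptually, SE "squeezes" each channel to a single statistic with global average pooling, "excites" it through a two-layer bottleneck MLP ending in a sigmoid, and rescales the channels with the resulting gates. A minimal sketch of that pipeline (illustrative, not the repository file):

import torch
from torch import nn

class SESketch(nn.Module):
    def __init__(self, channel=512, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction), nn.ReLU(),
            nn.Linear(channel // reduction, channel), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (bs, c, h, w)
        w = x.mean(dim=(2, 3))               # squeeze: global average pool -> (bs, c)
        w = self.fc(w).view(*w.shape, 1, 1)  # excite: per-channel gate in (0, 1)
        return x * w                         # rescale channels

x = torch.randn(50, 512, 7, 7)
print(SESketch()(x).shape)  # torch.Size([50, 512, 7, 7])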

5. SK Attention Usage

5.1. Paper

"Selective Kernel Networks"

5.2. Overview
5.3. Usage Code
from model.attention.SKAttention import SKAttention
import torch

input = torch.randn(50, 512, 7, 7)
sk = SKAttention(channel=512, reduction=8)
output = sk(input)
print(output.shape)

6. CBAM Attention Usage

6.1. Paper

"CBAM: Convolutional Block Attention Module"

6.2. Overview
6.3. Usage Code
from model.attention.CBAM import CBAMBlock
import torch

input = torch.randn(50, 512, 7, 7)
kernel_size = input.shape[2]
cbam = CBAMBlock(channel=512, reduction=16, kernel_size=kernel_size)
output = cbam(input)
print(output.shape)

7. BAM Attention Usage

7.1. Paper

"BAM: Bottleneck Attention Module"

7.2. Overview
7.3. Usage Code
from model.attention.BAM import BAMBlock
import torch

input = torch.randn(50, 512, 7, 7)
bam = BAMBlock(channel=512, reduction=16, dia_val=2)
output = bam(input)
print(output.shape)

8. ECA Attention Usage

8.1. Paper

"ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks"

8.2. Overview
8.3. Usage Code
from model.attention.ECAAttention import ECAAttention
import torch

input = torch.randn(50, 512, 7, 7)
eca = ECAAttention(kernel_size=3)
output = eca(input)
print(output.shape)
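ECA keeps SE's squeeze step but replaces the bottleneck MLP with a single 1D convolution slid across the pooled channel vector, so cross-channel interaction stays local and the module adds only kernel_size parameters. A minimal sketch (illustrative):

import torch
from torch import nn

class ECASketch(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                           # x: (bs, c, h, w)
        w = x.mean(dim=(2, 3)).unsqueeze(1)         # (bs, 1, c): pooled channel descriptor
        w = torch.sigmoid(self.conv(w))             # local cross-channel interaction
        return x * w.transpose(1, 2).unsqueeze(-1)  # broadcast (bs, c, 1, 1) gate

x = torch.randn(50, 512, 7, 7)
print(ECASketch()(x).shape)  # torch.Size([50, 512, 7, 7])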

9. DANet Attention Usage

9.1. Paper

"Dual Attention Network for Scene Segmentation"

9.2. Overview
9.3. Usage Code
from model.attention.DANet import DAModule
import torch

input = torch.randn(50, 512, 7, 7)
danet = DAModule(d_model=512, kernel_size=3, H=7, W=7)
print(danet(input).shape)

10. Pyramid Split Attention Usage

10.1. Paper

"EPSANet: An Efficient Pyramid Split Attention Block on Convolutional Neural Network"

10.2. Overview
10.3. Usage Code
from model.attention.PSA import PSA
import torch

input = torch.randn(50, 512, 7, 7)
psa = PSA(channel=512, reduction=8)
output = psa(input)
print(output.shape)

11. Efficient Multi-Head Self-Attention Usage

11.1. Paper

"ResT: An Efficient Transformer for Visual Recognition"

11.2. Overview
11.3. Usage Code

from model.attention.EMSA import EMSA
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 64, 512)
emsa = EMSA(d_model=512, d_k=512, d_v=512, h=8, H=8, W=8, ratio=2, apply_transform=True)
output = emsa(input, input, input)
print(output.shape)

12. Shuffle Attention Usage

12.1. Paper

"SA-NET: SHUFFLE ATTENTION FOR DEEP CONVOLUTIONAL NEURAL NETWORKS"

12.2. Overview
12.3. Usage Code

from model.attention.ShuffleAttention import ShuffleAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 512, 7, 7)
sa = ShuffleAttention(channel=512, G=8)
output = sa(input)
print(output.shape)

13. MUSE Attention Usage

13.1. Paper

"MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning"

13.2. Overview
13.3. Usage Code
from model.attention.MUSEAttention import MUSEAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 49, 512)
sa = MUSEAttention(d_model=512, d_k=512, d_v=512, h=8)
output = sa(input, input, input)
print(output.shape)

14. SGE Attention Usage

14.1. Paper

Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks

14.2. Overview
14.3. Usage Code
from model.attention.SGE import SpatialGroupEnhance
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 512, 7, 7)
sge = SpatialGroupEnhance(groups=8)
output = sge(input)
print(output.shape)

15. A2 Attention Usage

15.1. Paper

A2-Nets: Double Attention Networks

15.2. Overview
15.3. Usage Code
from model.attention.A2Atttention import DoubleAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 512, 7, 7)
a2 = DoubleAttention(512, 128, 128, True)
output = a2(input)
print(output.shape)

16. AFT Attention Usage

16.1. Paper

An Attention Free Transformer

16.2. Overview
16.3. Usage Code
from model.attention.AFT import AFT_FULL
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 49, 512)
aft_full = AFT_FULL(d_model=512, n=49)
output = aft_full(input)
print(output.shape)

17. Outlook Attention Usage

17.1. Paper

"VOLO: Vision Outlooker for Visual Recognition"

17.2. Overview
17.3. Usage Code
from model.attention.OutlookAttention import OutlookAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 28, 28, 512)
outlook = OutlookAttention(dim=512)
output = outlook(input)
print(output.shape)

18. ViP Attention Usage

18.1. Paper

"Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition"

18.2. Overview
18.3. Usage Code

from model.attention.ViP import WeightedPermuteMLP
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(64, 8, 8, 512)
seg_dim = 8
vip = WeightedPermuteMLP(512, seg_dim)
out = vip(input)
print(out.shape)

19. CoAtNet Attention Usage

19.1. Paper

"CoAtNet: Marrying Convolution and Attention for All Data Sizes"

19.2. Overview

None

19.3. Usage Code

from model.attention.CoAtNet import CoAtNet
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 3, 224, 224)
coatnet = CoAtNet(in_ch=3, image_size=224)
out = coatnet(input)
print(out.shape)

20. HaloNet Attention Usage

20.1. Paper

"Scaling Local Self-Attention for Parameter Efficient Visual Backbones"

20.2. Overview
20.3. Usage Code

from model.attention.HaloAttention import HaloAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 512, 8, 8)
halo = HaloAttention(dim=512, block_size=2, halo_size=1)
output = halo(input)
print(output.shape)

21. Polarized Self-Attention Usage

21.1. Paper

"Polarized Self-Attention: Towards High-quality Pixel-wise Regression"

21.2. Overview
21.3. Usage Code

from model.attention.PolarizedSelfAttention import ParallelPolarizedSelfAttention, SequentialPolarizedSelfAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 512, 7, 7)
psa = SequentialPolarizedSelfAttention(channel=512)
output = psa(input)
print(output.shape)


22. CoTAttention Usage

22.1. Paper

Contextual Transformer Networks for Visual Recognition---arXiv 2021.07.26

22.2. Overview
22.3. Usage Code

from model.attention.CoTAttention import CoTAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 512, 7, 7)
cot = CoTAttention(dim=512, kernel_size=3)
output = cot(input)
print(output.shape)



23. Residual Attention Usage

23.1. Paper

Residual Attention: A Simple but Effective Method for Multi-Label Recognition---ICCV2021

23.2. Overview
23.3. Usage Code

from model.attention.ResidualAttention import ResidualAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 512, 7, 7)
resatt = ResidualAttention(channel=512, num_class=1000, la=0.2)
output = resatt(input)
print(output.shape)



24. S2 Attention Usage

24.1. Paper

S²-MLPv2: Improved Spatial-Shift MLP Architecture for Vision---arXiv 2021.08.02

24.2. Overview
24.3. Usage Code
from model.attention.S2Attention import S2Attention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 512, 7, 7)
s2att = S2Attention(channels=512)
output = s2att(input)
print(output.shape)

25. GFNet Attention Usage

25.1. Paper

Global Filter Networks for Image Classification---arXiv 2021.07.01

25.2. Overview
25.3. Usage Code - Implemented by Wenliang Zhao (Author)
from model.attention.gfnet import GFNet
import torch
from torch import nn
from torch.nn import functional as F

x = torch.randn(1, 3, 224, 224)
gfnet = GFNet(embed_dim=384, img_size=224, patch_size=16, num_classes=1000)
out = gfnet(x)
print(out.shape)

26. TripletAttention Usage

26.1. Paper

Rotate to Attend: Convolutional Triplet Attention Module---CVPR 2021

26.2. Overview
26.3. Usage Code - Implemented by digantamisra98
from model.attention.TripletAttention import TripletAttention
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(50, 512, 7, 7)
triplet = TripletAttention()
output = triplet(input)
print(output.shape)

27. Coordinate Attention Usage

27.1. Paper

Coordinate Attention for Efficient Mobile Network Design---CVPR 2021

27.2. Overview
27.3. Usage Code - Implemented by Andrew-Qibin
from model.attention.CoordAttention import CoordAtt
import torch
from torch import nn
from torch.nn import functional as F

inp = torch.rand([2, 96, 56, 56])
inp_dim, oup_dim = 96, 96
reduction = 32

coord_attention = CoordAtt(inp_dim, oup_dim, reduction=reduction)
output = coord_attention(inp)
print(output.shape)

28. MobileViT Attention Usage

28.1. Paper

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05

28.2. Overview
28.3. Usage Code
from model.attention.MobileViTAttention import MobileViTAttention
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    m = MobileViTAttention()
    input = torch.randn(1, 3, 49, 49)
    output = m(input)
    print(output.shape)  # (1, 3, 49, 49)

29. ParNet Attention Usage

29.1. Paper

Non-deep Networks---ArXiv 2021.10.20

29.2. Overview
29.3. Usage Code
from model.attention.ParNetAttention import *
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    input = torch.randn(50, 512, 7, 7)
    pna = ParNetAttention(channel=512)
    output = pna(input)
    print(output.shape)  # (50, 512, 7, 7)

30. UFO Attention Usage

30.1. Paper

UFO-ViT: High Performance Linear Vision Transformer without Softmax---ArXiv 2021.09.29

30.2. Overview
30.3. Usage Code
from model.attention.UFOAttention import *
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    input = torch.randn(50, 49, 512)
    ufo = UFOAttention(d_model=512, d_k=512, d_v=512, h=8)
    output = ufo(input, input, input)
    print(output.shape)  # [50, 49, 512]

31. MobileViTv2 Attention Usage

31.1. Paper

Separable Self-attention for Mobile Vision Transformers---ArXiv 2022.06.06

31.2. Overview
31.3. Usage Code
from model.attention.MobileViTv2Attention import *
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    input = torch.randn(50, 49, 512)
    sa = MobileViTv2Attention(d_model=512)
    output = sa(input)
    print(output.shape)  # [50, 49, 512]

Backbone Series

1. ResNet Usage

1.1. Paper

"Deep Residual Learning for Image Recognition---CVPR2016 Best Paper"

1.2. Overview
1.3. Usage Code

from model.backbone.resnet import ResNet50, ResNet101, ResNet152
import torch

if __name__ == '__main__':
    input = torch.randn(50, 3, 224, 224)
    resnet50 = ResNet50(1000)
    # resnet101 = ResNet101(1000)
    # resnet152 = ResNet152(1000)
    out = resnet50(input)
    print(out.shape)

2. ResNeXt Usage

2.1. Paper

"Aggregated Residual Transformations for Deep Neural Networks---CVPR2017"

2.2. Overview
2.3. Usage Code

from model.backbone.resnext import ResNeXt50, ResNeXt101, ResNeXt152
import torch

if __name__ == '__main__':
    input = torch.randn(50, 3, 224, 224)
    resnext50 = ResNeXt50(1000)
    # resnext101 = ResNeXt101(1000)
    # resnext152 = ResNeXt152(1000)
    out = resnext50(input)
    print(out.shape)


3. MobileViT Usage

3.1. Paper

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05

3.2. Overview
3.3. Usage Code

from model.backbone.MobileViT import *
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    input = torch.randn(1, 3, 224, 224)

    ### mobilevit_xxs
    mvit_xxs = mobilevit_xxs()
    out = mvit_xxs(input)
    print(out.shape)

    ### mobilevit_xs
    mvit_xs = mobilevit_xs()
    out = mvit_xs(input)
    print(out.shape)

    ### mobilevit_s
    mvit_s = mobilevit_s()
    out = mvit_s(input)
    print(out.shape)

4. ConvMixer Usage

4.1. Paper

Patches Are All You Need?---ICLR2022 (Under Review)

4.2. Overview
4.3. Usage Code

from model.backbone.ConvMixer import *
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    x = torch.randn(1, 3, 224, 224)
    convmixer = ConvMixer(dim=512, depth=12)
    out = convmixer(x)
    print(out.shape)  # [1, 1000]


MLP Series

1. RepMLP Usage

1.1. Paper

"RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition"

1.2. Overview
1.3. Usage Code
from model.mlp.repmlp import RepMLP
import torch
from torch import nn

N = 4  # batch size
C = 512  # input dim
O = 1024  # output dim
H = 14  # image height
W = 14  # image width
h = 7  # patch height
w = 7  # patch width
fc1_fc2_reduction = 1  # reduction ratio
fc3_groups = 8  # groups
repconv_kernels = [1, 3, 5, 7]  # kernel list

repmlp = RepMLP(C, O, H, W, h, w, fc1_fc2_reduction, fc3_groups, repconv_kernels=repconv_kernels)
x = torch.randn(N, C, H, W)
repmlp.eval()
for module in repmlp.modules():
    if isinstance(module, nn.BatchNorm2d) or isinstance(module, nn.BatchNorm1d):
        nn.init.uniform_(module.running_mean, 0, 0.1)
        nn.init.uniform_(module.running_var, 0, 0.1)
        nn.init.uniform_(module.weight, 0, 0.1)
        nn.init.uniform_(module.bias, 0, 0.1)

# training-time result
out = repmlp(x)
# inference-time result
repmlp.switch_to_deploy()
deployout = repmlp(x)

print(((deployout - out) ** 2).sum())

2. MLP-Mixer Usage

2.1. Paper

"MLP-Mixer: An all-MLP Architecture for Vision"

2.2. Overview
2.3. Usage Code
from model.mlp.mlp_mixer import MlpMixer
import torch

mlp_mixer = MlpMixer(num_classes=1000, num_blocks=10, patch_size=10, tokens_hidden_dim=32,
                     channels_hidden_dim=1024, tokens_mlp_dim=16, channels_mlp_dim=1024)
input = torch.randn(50, 3, 40, 40)
output = mlp_mixer(input)
print(output.shape)
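Each Mixer layer alternates a token-mixing MLP (applied across patches after a transpose) with a channel-mixing MLP (applied independently to each patch). A minimal sketch of one such layer with made-up sizes (illustrative; the repository's MlpMixer stacks num_blocks of these behind a patch embedding):

import torch
from torch import nn

class MixerLayerSketch(nn.Module):
    def __init__(self, n_tokens, dim, token_hidden, channel_hidden):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, token_hidden), nn.GELU(), nn.Linear(token_hidden, n_tokens))
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(), nn.Linear(channel_hidden, dim))

    def forward(self, x):  # x: (bs, n_tokens, dim)
        # mix across tokens: transpose so the MLP acts on the patch dimension
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # mix across channels, per patch
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(50, 16, 128)
print(MixerLayerSketch(16, 128, 32, 256)(x).shape)  # torch.Size([50, 16, 128])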

3. ResMLP Usage

3.1. Paper

"ResMLP: Feedforward networks for image classification with data-efficient training"

3.2. Overview
3.3. Usage Code
from model.mlp.resmlp import ResMLP
import torch

input = torch.randn(50, 3, 14, 14)
resmlp = ResMLP(dim=128, image_size=14, patch_size=7, class_num=1000)
out = resmlp(input)
print(out.shape)  # the last dimension is class_num

4. gMLP Usage

4.1. Paper

"Pay Attention to MLPs"

4.2. Overview
4.3. Usage Code
from model.mlp.g_mlp import gMLP
import torch

num_tokens = 10000
bs = 50
len_sen = 49
num_layers = 6
input = torch.randint(num_tokens, (bs, len_sen))  # bs, len_sen
gmlp = gMLP(num_tokens=num_tokens, len_sen=len_sen, dim=512, d_ff=1024)
output = gmlp(input)
print(output.shape)

5. sMLP Usage

5.1. Paper

"Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?"

5.2. Overview
5.3. Usage Code
from model.mlp.sMLP_block import sMLPBlock
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    input = torch.randn(50, 3, 224, 224)
    smlp = sMLPBlock(h=224, w=224)
    out = smlp(input)
    print(out.shape)

Re-Parameter Series

1. RepVGG Usage

1.1. Paper

"RepVGG: Making VGG-style ConvNets Great Again"

1.2. Overview
1.3. Usage Code

from model.rep.repvgg import RepBlock
import torch

input = torch.randn(50, 512, 49, 49)
repblock = RepBlock(512, 512)
repblock.eval()
out = repblock(input)
repblock._switch_to_deploy()
out2 = repblock(input)
print('difference between vgg and repvgg')
print(((out2 - out) ** 2).sum())
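The near-zero difference printed above is possible because, at inference time, every RepVGG branch (3x3 conv, 1x1 conv, and identity, each followed by BN) is linear, so all branches can be folded into a single 3x3 kernel; for example, a 1x1 branch is absorbed by zero-padding its kernel to 3x3. A standalone sketch of that folding step (my own illustration, independent of the repository code):

import torch
from torch import nn
from torch.nn import functional as F

conv3 = nn.Conv2d(64, 64, 3, padding=1)
conv1 = nn.Conv2d(64, 64, 1)

# fold the 1x1 branch into the 3x3 branch by zero-padding its kernel to 3x3
fused = nn.Conv2d(64, 64, 3, padding=1)
fused.weight.data = conv3.weight.data + F.pad(conv1.weight.data, [1, 1, 1, 1])
fused.bias.data = conv3.bias.data + conv1.bias.data

x = torch.randn(1, 64, 8, 8)
print(((fused(x) - (conv3(x) + conv1(x))) ** 2).sum().item())  # ~0 up to float error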

2. ACNet Usage

2.1. Paper

"ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks"

2.2. Overview
2.3. Usage Code
from model.rep.acnet import ACNet
import torch
from torch import nn

input = torch.randn(50, 512, 49, 49)
acnet = ACNet(512, 512)
acnet.eval()
out = acnet(input)
acnet._switch_to_deploy()
out2 = acnet(input)
print('difference:')
print(((out2 - out) ** 2).sum())

3. Diverse Branch Block Usage

3.1. Paper

"Diverse Branch Block: Building a Convolution as an Inception-like Unit"

3.2. Overview
3.3. Usage Code
3.3.1 Transform I
from model.rep.ddb import transI_conv_bn
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 64, 7, 7)

# conv + bn
conv1 = nn.Conv2d(64, 64, 3, padding=1)
bn1 = nn.BatchNorm2d(64)
bn1.eval()
out1 = bn1(conv1(input))

# fused conv
conv_fuse = nn.Conv2d(64, 64, 3, padding=1)
conv_fuse.weight.data, conv_fuse.bias.data = transI_conv_bn(conv1, bn1)
out2 = conv_fuse(input)

print("difference:", ((out2 - out1) ** 2).sum().item())
3.3.2 Transform II
from model.rep.ddb import transII_conv_branch
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 64, 7, 7)

# two parallel convs, summed
conv1 = nn.Conv2d(64, 64, 3, padding=1)
conv2 = nn.Conv2d(64, 64, 3, padding=1)
out1 = conv1(input) + conv2(input)

# fused conv
conv_fuse = nn.Conv2d(64, 64, 3, padding=1)
conv_fuse.weight.data, conv_fuse.bias.data = transII_conv_branch(conv1, conv2)
out2 = conv_fuse(input)

print("difference:", ((out2 - out1) ** 2).sum().item())
3.3.3 Transform III
from model.rep.ddb import transIII_conv_sequential
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 64, 7, 7)

# 1x1 conv followed by 3x3 conv
conv1 = nn.Conv2d(64, 64, 1, padding=0, bias=False)
conv2 = nn.Conv2d(64, 64, 3, padding=1, bias=False)
out1 = conv2(conv1(input))

# fused conv
conv_fuse = nn.Conv2d(64, 64, 3, padding=1, bias=False)
conv_fuse.weight.data = transIII_conv_sequential(conv1, conv2)
out2 = conv_fuse(input)

print("difference:", ((out2 - out1) ** 2).sum().item())
3.3.4 Transform IV
from model.rep.ddb import transIV_conv_concat
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 64, 7, 7)

# two parallel convs, concatenated along channels
conv1 = nn.Conv2d(64, 32, 3, padding=1)
conv2 = nn.Conv2d(64, 32, 3, padding=1)
out1 = torch.cat([conv1(input), conv2(input)], dim=1)

# fused conv
conv_fuse = nn.Conv2d(64, 64, 3, padding=1)
conv_fuse.weight.data, conv_fuse.bias.data = transIV_conv_concat(conv1, conv2)
out2 = conv_fuse(input)

print("difference:", ((out2 - out1) ** 2).sum().item())
3.3.5 Transform V
from model.rep.ddb import transV_avg
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 64, 7, 7)

avg = nn.AvgPool2d(kernel_size=3, stride=1)
out1 = avg(input)

conv = transV_avg(64, 3)
out2 = conv(input)

print("difference:", ((out2 - out1) ** 2).sum().item())
3.3.6 Transform VI
from model.rep.ddb import transVI_conv_scale
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 64, 7, 7)

# parallel 1x1, 1x3, and 3x1 convs, summed
conv1x1 = nn.Conv2d(64, 64, 1)
conv1x3 = nn.Conv2d(64, 64, (1, 3), padding=(0, 1))
conv3x1 = nn.Conv2d(64, 64, (3, 1), padding=(1, 0))
out1 = conv1x1(input) + conv1x3(input) + conv3x1(input)

# fused conv
conv_fuse = nn.Conv2d(64, 64, 3, padding=1)
conv_fuse.weight.data, conv_fuse.bias.data = transVI_conv_scale(conv1x1, conv1x3, conv3x1)
out2 = conv_fuse(input)

print("difference:", ((out2 - out1) ** 2).sum().item())

Convolution Series

1. Depthwise Separable Convolution Usage

1.1. Paper

"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"

1.2. Overview
1.3. Usage Code
from model.conv.DepthwiseSeparableConvolution import DepthwiseSeparableConvolution
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 3, 224, 224)
dsconv = DepthwiseSeparableConvolution(3, 64)
out = dsconv(input)
print(out.shape)
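The factorization here splits a standard convolution into a depthwise 3x3 convolution (one filter per channel, via groups=in_channels) followed by a pointwise 1x1 convolution that mixes channels, cutting parameters and FLOPs by roughly a factor of k^2. An equivalent minimal sketch in plain PyTorch (illustrative, not the repository's module):

import torch
from torch import nn

def depthwise_separable(in_ch, out_ch, k=3):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise: one filter per channel
        nn.Conv2d(in_ch, out_ch, 1),                               # pointwise: 1x1 channel mixing
    )

x = torch.randn(1, 3, 224, 224)
print(depthwise_separable(3, 64)(x).shape)  # torch.Size([1, 64, 224, 224])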

2. MBConv Usage

2.1. Paper

"Efficientnet: Rethinking model scaling for convolutional neural networks"

2.2. Overview
2.3. Usage Code
from model.conv.MBConv import MBConvBlock
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 3, 224, 224)
mbconv = MBConvBlock(ksize=3, input_filters=3, output_filters=512, image_size=224)
out = mbconv(input)
print(out.shape)


3. Involution Usage

3.1. Paper

"Involution: Inverting the Inherence of Convolution for Visual Recognition"

3.2. Overview
3.3. Usage Code
from model.conv.Involution import Involution
import torch
from torch import nn
from torch.nn import functional as F

input = torch.randn(1, 4, 64, 64)
involution = Involution(kernel_size=3, in_channel=4, stride=2)
out = involution(input)
print(out.shape)

4. DynamicConv Usage

4.1. Paper

"Dynamic Convolution: Attention over Convolution Kernels"

4.2. Overview
4.3. Usage Code
from model.conv.DynamicConv import *
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    input = torch.randn(2, 32, 64, 64)
    m = DynamicConv(in_planes=32, out_planes=64, kernel_size=3, stride=1, padding=1, bias=False)
    out = m(input)
    print(out.shape)  # (2, 64, 64, 64)

5. CondConv Usage

5.1. Paper

"CondConv: Conditionally Parameterized Convolut ions for Efficient Inference"

5.2. Overview
5.3. Usage Code
from model.conv.CondConv import *
import torch
from torch import nn
from torch.nn import functional as F

if __name__ == '__main__':
    input = torch.randn(2, 32, 64, 64)
    m = CondConv(in_planes=32, out_planes=64, kernel_size=3, stride=1, padding=1, bias=False)
    out = m(input)
    print(out.shape)

I have also started a deep-learning WeChat official account, FightingCV; you are all welcome to follow it!!!

Digests of ICCV, CVPR, NeurIPS, and ICML papers: https://github.com/xmu-xiaoma666/FightingCV-Paper-Reading

Beginner-friendly study of the core code of attention, re-parameterization, MLP, and convolution modules: https://github.com/xmu-xiaoma666/External-Attention-pytorch

To join the discussion group, add the assistant on WeChat: FightngCV666


