ImageNet Test Accuracy Collection

Contents

ConvNeXt: a new generation of convolutional networks; the ViT era has not arrived yet (code open-sourced)

GENet

regnet

poolformer

CoTNet

ParNet

Convolutional-MLPs

ResNet accuracy

repvgg

ConvMixer

hs-resnet50

RedNet

nfnet

SimAM


ConvNeXt: a new generation of convolutional networks; the ViT era has not arrived yet (code open-sourced)

GENet

GENet: a strong performer that beats RegNet and EfficientNet on GPU — CSDN blog post

regnet

RegNet — CSDN blog post (AI视觉网奇)

poolformer

https://github.com/sail-sg/poolformer

The ~46 MB checkpoint reaches 77.2 top-1 (poolformer_s12).

The ~82 MB checkpoint reaches 80.3 top-1 (poolformer_s24).

Model | #Params | Image resolution | Top-1 Acc | Download
poolformer_s12 | 12M | 224 | 77.2 | here
poolformer_s24 | 21M | 224 | 80.3 | here
poolformer_s36 | 31M | 224 | 81.4 | here
poolformer_m36 | 56M | 224 | 82.1 | here
poolformer_m48 | 73M | 224 | 82.5 | here
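The core idea behind PoolFormer (the "MetaFormer" paper) is that the attention token mixer can be replaced by plain average pooling. A minimal sketch of that token mixer, written from the paper's description rather than copied from the repo; the official code in sail-sg/poolformer wires this into a MetaFormer block with normalization, an MLP, and residual connections:

```python
import torch.nn as nn

class Pooling(nn.Module):
    """PoolFormer-style token mixer: average pooling instead of attention (sketch)."""
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):
        # Subtract the input so this operator only models the "mixing" term;
        # the surrounding block adds the residual connection back.
        return self.pool(x) - x
```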

CoTNet

Open-sourced by JD (JDAI-CV):

https://github.com/JDAI-CV/CoTNet

Model | Resolution | #Params | FLOPs (G) | Top-1 (%) | Top-5 (%) | Download
CoTNet-50 | 224 | 22.2M | 3.3 | 81.3 | 95.6 | GoogleDrive / Baidu
CoTNeXt-50 | 224 | 30.1M | 4.3 | 82.1 | 95.9 | GoogleDrive / Baidu
SE-CoTNetD-50 | 224 | 23.1M | 4.1 | 81.6 | 95.8 | GoogleDrive / Baidu
CoTNet-101 | 224 | 38.3M | 6.1 | 82.8 | 96.2 | GoogleDrive / Baidu
CoTNeXt-101 | 224 | 53.4M | 8.2 | 83.2 | 96.4 | GoogleDrive / Baidu
SE-CoTNetD-101 | 224 | 40.9M | 8.5 | 83.2 | 96.5 | GoogleDrive / Baidu
SE-CoTNetD-152 | 224 | 55.8M | 17.0 | 84.0 | 97.0 | GoogleDrive / Baidu
SE-CoTNetD-152 | 320 | 55.8M | 26.5 | 84.6 | 97.1 | GoogleDrive / Baidu

ParNet

A 12-layer network can rival ResNet.

The code appears to be open-sourced, but no pretrained weights are provided:

https://github.com/imankgoyal/NonDeepNetworks

Code organized locally:

F:\project\cls\NonDeepNetworks

simpnet.py

demo.py

trt_demo.py

onnx2trt.py

Convolutional-MLPs

https://github.com/SHI-Labs/Convolutional-MLPs

The ~36 MB model reaches 76.8% top-1, slightly better than ResNet-50. Download links are available:

Dataset | Model | Top-1 Accuracy | #Params | MACs
ImageNet | ConvMLP-S | 76.8% | 9.0M | 2.4G
ImageNet | ConvMLP-M | 79.0% | 17.4M | 3.9G
ImageNet | ConvMLP-L | 80.2% | 42.7M | 9.9G

ResNet accuracy

Network | Top-1 acc (%) | Top-5 acc (%)
ResNet-18 | 69.57 | 89.24
ResNet-34 | 73.27 | 91.26
ResNet-50 | 75.99 | 92.98
ResNet-101 | 77.56 | 93.79
ResNet-152 | 77.84 | 93.84
ResNet-200 | 78.34 | 94.21

repvgg

Pretrained weights are available:

GitHub - DingXiaoH/RepVGG: RepVGG: Making VGG-style ConvNets Great Again

Baidu Netdisk download links are provided.

Network architecture:

https://github.com/megvii-research/basecls/tree/main/zoo/public/repvgg
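RepVGG's selling point is structural re-parameterization: train with 3x3 conv + 1x1 conv + identity branches, then fuse every block into a single 3x3 convolution for deployment. A hedged usage sketch, assuming the repo's repvgg.py (with create_RepVGG_A0 and repvgg_model_convert, as shown in its README) is on the Python path; the checkpoint filenames are placeholders:

```python
import torch
# repvgg.py comes from the DingXiaoH/RepVGG repo; function names follow its README.
from repvgg import create_RepVGG_A0, repvgg_model_convert

# Training-time model: multi-branch blocks (3x3 conv + 1x1 conv + identity, each with BN).
train_model = create_RepVGG_A0(deploy=False)
train_model.load_state_dict(torch.load('RepVGG-A0-train.pth'))  # placeholder checkpoint name

# Fuse every block into a single 3x3 conv for inference; accuracy should be unchanged.
deploy_model = repvgg_model_convert(train_model, save_path='RepVGG-A0-deploy.pth')
deploy_model.eval()
```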

ConvMixer

Write-up:

ConvMixer is here! A simple model taking on ResNet, ViT, and MLP-Mixer

Code:

GitHub - tmp-iclr/convmixer
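ConvMixer is simple enough to sketch in a few lines: a patch-embedding convolution, then `depth` repeated blocks of depthwise convolution (with a residual) followed by a pointwise convolution, each followed by GELU + BatchNorm. The sketch below follows the paper's description rather than the repo's exact file:

```python
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim, depth, kernel_size=9, patch_size=7, n_classes=1000):
    """ConvMixer sketch: patch embedding, then `depth` blocks of
    depthwise conv (residual) + pointwise conv, each with GELU + BatchNorm."""
    act_bn = lambda: nn.Sequential(nn.GELU(), nn.BatchNorm2d(dim))
    return nn.Sequential(
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size), *act_bn(),
        *[nn.Sequential(
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"), *act_bn())),
            nn.Conv2d(dim, dim, kernel_size=1), *act_bn())
          for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, n_classes))
```

For example, conv_mixer(768, 32, kernel_size=7, patch_size=7) roughly corresponds to the ConvMixer-768/32 configuration reported in the paper.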

hs-resnet50

RedNet

https://github.com/d-li14/involution

Model | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) | Config | Download
RedNet-26 | 9.23 (32.8%↓) | 1.73 (29.2%↓) | 75.96 | 93.19 | config | model / log
RedNet-38 | 12.39 (36.7%↓) | 2.22 (31.3%↓) | 77.48 | 93.57 | config | model / log
RedNet-50 | 15.54 (39.5%↓) | 2.71 (34.1%↓) | 78.35 | 94.13 | config | model / log
RedNet-101 | 25.65 (42.6%↓) | 4.74 (40.5%↓) | 78.92 | 94.35 | config | model / log
RedNet-152 | 33.99 (43.5%↓) | 6.79 (41.4%↓) | 79.12 | 94.38 | config | model / log
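RedNet is ResNet with the 3x3 convolutions replaced by involution: the kernel is generated per spatial position from the input itself and shared across the channels of a group. A minimal PyTorch sketch of the operator (the official repo implements it on top of mmcv/mmcls; names and defaults here are my own):

```python
import torch.nn as nn

class Involution(nn.Module):
    """Minimal sketch of the involution operator from the RedNet paper.

    A kernel is generated from the input at every output position
    (reduce -> span), shared across the channels of a group, and applied
    to the K x K neighbourhood of that position.
    """
    def __init__(self, channels, kernel_size=7, stride=1, groups=16, reduction=4):
        super().__init__()
        self.k, self.s, self.g = kernel_size, stride, groups
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.BatchNorm2d(channels // reduction),
            nn.ReLU(inplace=True))
        self.span = nn.Conv2d(channels // reduction, kernel_size * kernel_size * groups, 1)
        self.down = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
        self.unfold = nn.Unfold(kernel_size, padding=(kernel_size - 1) // 2, stride=stride)

    def forward(self, x):
        b, c, h, w = x.shape
        h_out, w_out = h // self.s, w // self.s
        # Per-position kernels: (B, G, 1, K*K, H', W')
        weight = self.span(self.reduce(self.down(x)))
        weight = weight.view(b, self.g, self.k * self.k, h_out, w_out).unsqueeze(2)
        # K*K neighbourhood of every output position: (B, G, C/G, K*K, H', W')
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k * self.k, h_out, w_out)
        # Weighted sum over the neighbourhood dimension.
        return (weight * patches).sum(dim=3).view(b, c, h_out, w_out)
```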

nfnet

One thing I noticed: the first layer has only 16 channels (see the stem sketch after the links below).

deepmind-research/nfnets at master · deepmind/deepmind-research · GitHub

  • Paper: https://arxiv.org/abs/2102.06171

  • DeepMind also released an implementation of the models: https://github.com/deepmind/deepmind-research/tree/master/nfnets
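That 16-channel first layer matches the stem described in the NFNet paper: four 3x3 convolutions with 16, 32, 64 and 128 output channels, the first and last with stride 2. A rough PyTorch sketch just to illustrate the channel progression; the official code is JAX/Haiku and also uses Scaled Weight Standardization and gain-scaled activations, which are omitted here:

```python
import torch.nn as nn

def nfnet_stem(in_channels=3):
    # Channel progression 16 -> 32 -> 64 -> 128 as described in the NFNet paper;
    # Scaled Weight Standardization and the scaled-GELU gains of the official
    # Haiku implementation are left out of this sketch.
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.GELU(),
        nn.Conv2d(16, 32, 3, stride=1, padding=1), nn.GELU(),
        nn.Conv2d(32, 64, 3, stride=1, padding=1), nn.GELU(),
        nn.Conv2d(64, 128, 3, stride=2, padding=1),
    )
```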

SimAM

GitHub - ZjjConan/SimAM: The official pytorch implemention of our ICML paper "SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks".

Model | Parameters | FLOPs | Top-1 (%) | Top-5 (%)
SimAM-R18 | 11.69 M | 1.82 G | 71.31 | 89.88
SimAM-R34 | 21.80 M | 3.67 G | 74.49 | 92.02
SimAM-R50 | 25.56 M | 4.11 G | 77.45 | 93.66
SimAM-R101 | 44.55 M | 7.83 G | 78.65 | 94.11
SimAM-RX50 (32x4d) | 25.03 M | 4.26 G | 78.00 | 93.93
SimAM-MV2 | 3.50 M | 0.31 G | 72.36 | 90.74
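SimAM adds no parameters at all: each activation is weighted by a closed-form energy term based on how much it deviates from its channel mean, squashed with a sigmoid. A sketch following the formula in the paper; the module in the official repo is essentially the same few lines:

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention from the SimAM paper (ICML 2021), sketched
    from the closed-form energy function in the paper."""
    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):
        b, c, h, w = x.size()
        n = h * w - 1
        # Squared distance of every activation from its channel mean.
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # Channel variance (normalized by n = HW - 1, as in the paper).
        v = d.sum(dim=[2, 3], keepdim=True) / n
        # Inverse energy: more distinctive neurons get larger weights.
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)
```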
