CVPR-2020
1 Background and Motivation
This paper presents a new network design paradigm.
It resembles manually designing network design guidelines. Unlike AutoML, which learns a single network for a specific dataset and a predefined set of operators (finding the best network instances within a fixed, manually designed search space, which we call a design space), the authors design network design guidelines from which many well-performing networks can be derived: they design network design spaces (not just particular network instantiations, but also design principles).
Our goal is to help advance the understanding of network design and discover design principles that generalize across settings.
2 Related Work
- Manual network design: early networks fall in this category, e.g., VGG / ResNet / DenseNet / MobileNet. "In fact, our methodology is analogous to manual design but performed at the design space level" (still manual, but designing network design principles rather than a single network).
- Automated network design: e.g., NAS.
- Network scaling: e.g., EfficientNet.
- Comparing networks: a methodology for comparing and analyzing populations of networks sampled from a design space (a distribution-level view).
- Parameterization
3 Advantages / Contributions
The authors propose a set of design principles and, based on them, derive a family of networks of different sizes: RegNet.
RegNet models outperform the popular EfficientNet (SOTA) models while being up to 5× faster on GPUs.
4 Design Space Design
The overall process is analogous to manual design, elevated to the population level (a family of networks rather than a single network) and guided via distribution estimates of network design spaces.
RegNet: regular network.
AnyNet: widths and depths vary freely across stages.
Evaluated across compute regimes, schedule lengths, and network block types.
4.1 Tools for Design Space Design
The quality of a design space is quantified by sampling a set of models from that design space and characterizing the resulting model error distribution.
comparing distributions is more robust and informative than using search
error empirical distribution function (EDF)
EDF for n = 500 sampled models from the AnyNetX design space
In the legend, [x | y] gives the min error and mean error, respectively.
In the distribution plots: all (dark blue points), good (light blue region), best (black line).
MF = million flops = $10^6$ flops; GF = $10^9$ flops.
The workflow for designing a design space, and the details of how the EDF is computed, are as follows.
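As a sketch of how the error EDF can be computed (the error values below are hypothetical, not from the paper):

```python
import numpy as np

def error_edf(errors, e_grid):
    """Empirical distribution function of model errors:
    F(e) = fraction of sampled models with error less than e."""
    errors = np.asarray(errors)
    return np.array([(errors < e).mean() for e in e_grid])

# errors (%) of a handful of hypothetical sampled models
errs = [35.0, 40.0, 32.0, 45.0, 38.0]
edf = error_edf(errs, [30.0, 36.0, 50.0])  # fractions: 0.0, 0.4, 1.0
```

A better design space shifts this curve left/up: more of the sampled models sit at low error.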
Bootstrap sampling with replacement is used.
95% CI (Confidence Interval): computed from the sample mean and variance (the x-axis values in the figure above), with the key factor of 1.96.
In other words, the 95% confidence interval is a range estimate for the population mean: if we ran 100 experiments, the sample mean would fall outside this range in only about 5 of them.
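A minimal sketch of the bootstrap-with-replacement 95% CI described above (synthetic error values; the normal-approximation interval uses the 1.96 factor):

```python
import numpy as np

rng = np.random.default_rng(0)
errors = rng.normal(40.0, 2.0, size=500)  # synthetic errors of 500 sampled models

# bootstrap: resample with replacement many times, recompute the mean each time
boot_means = np.array([
    rng.choice(errors, size=errors.size, replace=True).mean()
    for _ in range(2000)
])

# normal-approximation 95% CI for the population mean: mean +/- 1.96 * SE
se = boot_means.std()
ci = (errors.mean() - 1.96 * se, errors.mean() + 1.96 * se)
```

With 500 samples of std 2, the interval is narrow (roughly ±0.18 around the mean), which is why comparing distributions of hundreds of sampled models is statistically robust.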
4.2 The AnyNet Design Space
The AnyNet network structure:
- stem (downsamples once)
- body (downsamples four times, 4 stages)
- head
We keep the stem and head fixed and as simple as possible, and instead focus on the structure of the network body
The design space the authors design is focused on the body:
- stage $i$: 4 stages in total
- the number of blocks $d_i \leqslant 16$: 16 choices
- block widths $w_i \leqslant 1024$ and divisible by 8: 128 choices
- bottleneck ratios $b_i \in \{1, 2, 4\}$: 3 choices
- group widths $g_i \in \{1, 2, 4, 8, 16, 32\}$: 6 choices
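The per-stage choices above can be sampled directly; a hypothetical sketch of drawing one AnyNetX configuration (not the authors' sampler):

```python
import random

def sample_anynetx(rng):
    """Draw one AnyNetX configuration: 4 stages, each with
    (d, w, b, g) drawn uniformly from the choices listed above."""
    cfg = []
    for _ in range(4):  # 4 stages
        cfg.append(dict(
            d=rng.randint(1, 16),                # blocks per stage, d_i <= 16
            w=8 * rng.randint(1, 128),           # width <= 1024, divisible by 8
            b=rng.choice([1, 2, 4]),             # bottleneck ratio
            g=rng.choice([1, 2, 4, 8, 16, 32]),  # group width
        ))
    return cfg

cfg = sample_anynetx(random.Random(0))
```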
The authors' goals:
- to simplify the structure of the design space,
- to improve the interpretability of the design space,
- to improve or maintain the design space quality,
- to maintain model diversity in the design space.
The different network spaces:
1)AnyNetXA
Search space size: $(16 \cdot 128 \cdot 3 \cdot 6)^4 \approx 1.85 \cdot 10^{18}$ possible configurations.
2)AnyNetXB
Shared bottleneck ratio $b_i = b$ for all stages $i$.
The comparison between AnyNetXA and AnyNetXB is shown in Figure 5 (left).
$b \leqslant 2$ is best in this regime (right plot; again the all / good / best view).
The best value is $b = 1$, i.e., neither a bottleneck nor an inverted bottleneck.
3)AnyNetXC
Use a shared group width $g_i = g$ for all stages to obtain AnyNetXC.
The comparison between AnyNetXB and AnyNetXC is shown in Figure 5 (middle).
The authors find $g > 1$ is best (not visible in the plot itself; in other words, depthwise conv is not optimal).
The authors further examine typical network structures of both good and bad networks from AnyNetXC, and find that the good networks follow $w_{i+1} \geq w_i$; adding this constraint gives AnyNetXD.
4)AnyNetXD
$w_{i+1} \geq w_i$, as shown in Figure 7 (left).
5)AnyNetXE
$d_{i+1} \geq d_i$, as shown in Figure 7 (right).
The search space becomes $3876 \cdot 11716640 \cdot 3 \cdot 6 \approx 8.17 \cdot 10^{11}$.
The 3876 comes from counting the depth sequences over 16 values that satisfy the $\leq$ constraint; the 11716640 is derived the same way from the 128 width choices (slightly different from the authors' Table 1).
As the authors describe, this is a cumulative reduction of $O(10^7)$ from AnyNetXA ($10^{18} \to 10^{11}$).
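These counts can be checked combinatorially: the number of nondecreasing length-4 sequences over $k$ choices is the multiset coefficient $\binom{k+3}{4}$. A quick sanity check (my own arithmetic, not the paper's code):

```python
from math import comb

def nondecreasing_count(k, stages=4):
    # nondecreasing length-`stages` sequences over k choices = C(k + stages - 1, stages)
    return comb(k + stages - 1, stages)

d_count = nondecreasing_count(16)   # depths d_i <= 16, nondecreasing -> 3876
w_count = nondecreasing_count(128)  # widths, 128 choices, nondecreasing -> 11716640
total = d_count * w_count * 3 * 6   # shared b (3 choices) and g (6 choices)
print(d_count, w_count, f"{total:.2e}")
```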
4.3 The RegNet Design Space
Use $i$ and $j$ to index over stages and blocks respectively (i.e., $j$ indexes depth; the smallest depth unit is a block, not a single conv).
$w_j = 48 \cdot (j+1)$ for $0 \leqslant j \leqslant 20$ (solid black curve; note that the y-axis is logarithmic. This matters: otherwise you would wonder why the black line in the top-left plot of Figure 8 is clearly a curve when the equation is linear).
Linear parameterization for block widths:

$$u_j = w_0 + w_a \cdot j \qquad \text{for } 0 \leqslant j < d$$

- $j$: the depth index (the smallest depth unit is a block, not a single conv)
- $w_0$: initial width (number of channels)
- $w_a$: slope

A different depth $d$, initial width $w_0$, and slope $w_a$ generates a different block width $u_j$ for each block $j$.
To quantize, re-express the block widths as

$$u_j = w_0 \cdot w_m^{s_j}$$

Given $w_0$ and $w_a$, each $u_j$ is known, so together with $w_m$ one can solve for $s_j$. The quantized per-block widths are then

$$w_j = w_0 \cdot w_m^{\lfloor s_j \rceil}$$

where $\lfloor \cdot \rceil$ denotes rounding; rounding $s_j$ yields the block widths of the actual network.
This is why the figure reports $w_a$, $w_0$, and $w_m$ together; the rounding $\lfloor \cdot \rceil$ seems to be the core operation behind the quantized-linear constraint.
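Putting the two equations together, the width schedule can be generated as follows (a sketch modeled on this parameterization; the parameter values and the snap-to-multiples-of-8 step are illustrative, not the paper's exact code):

```python
import numpy as np

def regnet_widths(w_a, w_0, w_m, d, q=8):
    """Quantized linear block widths:
    u_j = w_0 + w_a * j, s_j solved from u_j = w_0 * w_m ** s_j,
    then w_j = w_0 * w_m ** round(s_j), snapped to multiples of q."""
    u = w_0 + w_a * np.arange(d)                 # continuous widths u_j
    s = np.round(np.log(u / w_0) / np.log(w_m))  # rounded exponents s_j
    w = w_0 * np.power(w_m, s)                   # quantized widths w_j
    return (np.round(w / q) * q).astype(int)     # widths divisible by q

ws = regnet_widths(w_a=36.0, w_0=24, w_m=2.5, d=13)  # illustrative values
```

The quantization collapses nearby $u_j$ into runs of identical widths, and each run of equal-width blocks becomes one stage.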
x-axis: block index $j$ (i.e., depth); y-axis: $w_j$ (number of channels).
RegNet = AnyNetXE + constraints between $w$ and $d$: $d < 64$; $w_0, w_a < 256$; $1.5 < w_m < 3$; $b$ and $g$ as before.
The further restriction $w_0 = w_a$ (which forces $u_j = w_a \cdot (j+1)$) is surprisingly aggressive, yet still works.
4.4 Design Space Generalization
- R block: same as the X block except without groups,
- V block: a basic block with only a single 3×3 conv,
- VR block: same as V block plus residual connections.
5 Experiments
5.1 Datasets
- ImageNet
- ImageNetv2
5.2 Analyzing the RegNetX Design Space
1)RegNet trends
optimal depth of ~20 blocks (60 layers)
use a bottleneck ratio $b$ of 1.0 (neither a bottleneck nor an inverted bottleneck)
the width multiplier $w_m$ of good models is ~2.5 (not identical to the popular recipe of doubling widths across stages)
$g$, $w_a$, $w_0$ increase with complexity
2)Complexity analysis
activations, which we define as the size of the output tensors of all conv layers
In the top-left plot, $w$ is the number of channels and $r^2$ is the resolution; note that $g$ is the group width, not the number of groups.
In the latter two plots of the top row, $r$ should denote the slope of the fit.
activations increase with the square-root of flops, parameters increase linearly, and runtime is best modeled using both a linear and a square-root term due to its dependence on both flops and activations.
The axis scales differ, which is why the linear relationships look curved.
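The runtime claim can be illustrated with a least-squares fit containing both a linear term (flops) and a square-root term (an activations proxy); the flop counts and coefficients below are synthetic, just to show the fitting procedure:

```python
import numpy as np

flops = np.array([0.2, 0.4, 0.8, 1.6, 3.2, 6.4]) * 1e9  # hypothetical models
runtime = 2e-9 * flops + 1e-4 * np.sqrt(flops)           # synthetic timings (s)

# design matrix with a linear and a square-root column
X = np.stack([flops, np.sqrt(flops)], axis=1)
coef, *_ = np.linalg.lstsq(X, runtime, rcond=None)       # recovers ~[2e-9, 1e-4]
```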
3)RegNetX constrained
RegNetX Unconstrained plus the following two constraints:
- set $b = 1$, $d \leq 40$, and $w_m \geq 2$
- limit parameters and activations, following Figure 12 (bottom) (the chosen ranges seem to land in the good region rather than the best)
This yields RegNetX Constrained.
4)Alternate design choices
The left plot shows that the inverted bottleneck degrades the EDF slightly, and depthwise conv performs even worse.
The middle plot shows that a fixed resolution of 224×224 is best.
5)SE
X + SE is denoted Y.
Figure 14 (right) shows that SE is quite effective.
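For reference, the Squeeze-and-Excitation op that turns an X block into a Y block can be sketched in numpy; the weights here are random placeholders just to show the data flow (real implementations use learned 1×1 convs):

```python
import numpy as np

def se_block(x, r=4, seed=0):
    """Squeeze-and-Excitation on an (N, C, H, W) feature map.
    Weights are random placeholders to illustrate the data flow."""
    n, c, h, w = x.shape
    rng = np.random.default_rng(seed)
    w1 = 0.1 * rng.normal(size=(c, c // r))  # squeeze FC: C -> C/r
    w2 = 0.1 * rng.normal(size=(c // r, c))  # excite FC: C/r -> C
    s = x.mean(axis=(2, 3))                  # global average pool -> (N, C)
    s = np.maximum(s @ w1, 0.0)              # ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))      # sigmoid gate in (0, 1)
    return x * s[:, :, None, None]           # per-channel rescaling

y = se_block(np.ones((2, 8, 4, 4)))
```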
5.3 Comparison to Existing Networks
First, some characteristics of RegNetX and RegNetY:
higher flop models have a large number of blocks in the third stage and a small number of blocks in the last stage (counting the stem, these are the 1/16 and 1/32 resolution stages; the fast-downsampling optimization in 【FD-MobileNet】《FD-MobileNet: Improved MobileNet with a Fast Downsampling Strategy》 follows the same pattern).
The group width $g$ increases with complexity, but depth $d$ saturates for large models.
1)State-of-the-Art Comparison: Mobile Regime
2)Standard Baselines Comparison: ResNe(X)t
Overall, RegNet is somewhat better than both ResNet and ResNeXt.
3)State-of-the-Art Comparison: Full Regime
At low flops, EfficientNet beats RegNet; once flops grow, RegNet takes over.
A major advantage of RegNet is speed, in both training and inference.
The grayed results in the last rows use additional training tricks, listed below.
EfficientNet is still impressively strong.
5.4 Appendix
1)Test Set Evaluation
Results are also verified on the ImageNetV2 test set; the trends are consistent with ImageNet.
2)Additional Ablations
- Fixed depth: roughly equivalent results
- Fewer stages: 4 stages beat 3 stages
- Inverted bottleneck: $b < 1$ degrades results further
- Swish vs. ReLU: Figure 20 (middle) shows that at low flops, with non-depthwise convs, Swish is slightly better than ReLU; Figure 20 (right) shows that if $g$ is restricted to 1 (depthwise conv), Swish performs much better than ReLU
3)Optimization Settings
6 Conclusion(own) / Future work
- widths and depths of good networks can be explained by a quantized linear function.
- the depth of the best models is stable across compute regimes (~20 blocks), and the best models do not use either a bottleneck or inverted bottleneck.
- We highlight the improvements for fixed activations, which is of high practical interest as the number of activations can strongly influence the runtime on accelerators such as GPUs.
- regime = flops
Excerpts from some good write-ups:
《Designing Network Design Spaces》的整体解读(一篇更比六篇强) ("A complete walkthrough of Designing Network Design Spaces: one post worth six")
Note: the image resolution change from 224 to 448 makes little difference in the paper.
The commenter sticking up for EfficientNet made me laugh. It is true that EfficientNet beats RegNet in the smaller compute regimes (being an AutoML product, its accuracy really is strong).