FLOPs Calculation in PyTorch

I have recently looked at many ways of computing FLOPs and tried several of them myself. The most convenient one I found is the thop library for PyTorch (code below):

import torch
from thop import profile

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = YourModel().to(device)                        # substitute your own model class here
inputs = torch.randn(1, 3, 512, 1024).to(device)      # another input size I used: (360, 640)
macs, params = profile(model, inputs=(inputs,))       # add verbose=False to silence per-layer logs
print('The number of MACs is %s' % (macs / 1e9))      # in G (1e9 operations)
print('The number of params is %s' % (params / 1e6))  # in M (1e6 parameters)
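
On a side note, thop also provides a clever_format helper that turns these raw counts into human-readable strings, so the manual /1e9 and /1e6 divisions above can be avoided; a minimal usage sketch:

from thop import clever_format

macs_str, params_str = clever_format([macs, params], "%.3f")
print(macs_str, params_str)   # prints e.g. "19.000G" instead of 19000000000.0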

It really is that simple to use. But that raises a question: is the macs value it returns actually MACs, or FLOPs? Let me state the conclusion I arrived at up front: the macs computed here is in effect the FLOPs count (the total number of floating-point operations; note this is an operation count, not FLOPS, operations per second), under the premise that the convolution bias is not counted and each multiply-accumulate is treated as a single operation. Here is the reasoning:

I computed ResNet18's FLOPs by hand, for an input of size 512×1024×3.
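
To make "computing by hand" concrete: take the first convolution of the ResNet18 listed below (7×7 kernel, 3 → 64 channels, stride 2). On a 512×1024 input its output is 256×512, so, ignoring the bias and counting one multiply-accumulate as a single operation, this layer alone costs:

256 * 512 * 64 * 3 * 7 * 7   # = 1,233,125,376, about 1.23 G operations

Summing the same kind of term over every convolution in the network gives the ≈ 20 G total quoted in (3) below.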

(1) The ResNet18 code:

import torch
import torch.nn as nn


def conv3x3(in_planes, out_planes, stride=1, dilation=1):
    """3x3 convolution with padding (bias-free, as in torchvision)."""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=dilation, dilation=dilation, bias=False)


def conv1x1(in_planes, out_planes, stride=1):
    """1x1 convolution (bias-free)."""
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, r=1, stride=1, downsample=None, norm_layer=nn.BatchNorm2d):
        super(BasicBlock, self).__init__()
        # Both self.conv1 and self.downsample layers downsample the input when stride != 1
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = norm_layer(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes, dilation=r)
        self.bn2 = norm_layer(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)
        return out


class ResNet18(nn.Module):

    def __init__(self, block=BasicBlock, layers=[2,2,2,2], zero_init_residual=False, norm_layer=nn.BatchNorm2d):
        super(ResNet18, self).__init__()
        self.inplanes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = norm_layer(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0], r=2, norm_layer=norm_layer)
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2, r=2, norm_layer=norm_layer)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2, r=2, norm_layer=norm_layer)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2, r=2, norm_layer=norm_layer)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

        # Zero-initialize the last BN in each residual branch,
        # so that the residual branch starts with zeros, and each residual block behaves like an identity.
        # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
        if zero_init_residual:
            for m in self.modules():
                # only BasicBlock is defined in this listing (ResNet18 uses no Bottleneck)
                if isinstance(m, BasicBlock):
                    nn.init.constant_(m.bn2.weight, 0)

    def _make_layer(self, block, planes, blocks, stride=1, r=1, norm_layer=nn.BatchNorm2d):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                conv1x1(self.inplanes, planes * block.expansion, stride),
                norm_layer(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, r, stride, downsample))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x_downsampling_8 = x       # feature map at stride 8 (h/8, 128 channels)
        x = self.layer3(x)
        x_downsampling_16 = x      # feature map at stride 16 (h/16, 256 channels)
        x = self.layer4(x)         # feature map at stride 32 (h/32, 512 channels)

        return x, x_downsampling_8, x_downsampling_16
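
As a quick sanity check (my own addition, not in the original post): instantiating the model and pushing a 512×1024 input through it should yield the stride-32, stride-8 and stride-16 feature maps that the comments in forward() describe:

model = ResNet18()
x32, x8, x16 = model(torch.randn(1, 3, 512, 1024))
print(x32.shape)   # torch.Size([1, 512, 16, 32])   -> h/32
print(x8.shape)    # torch.Size([1, 128, 64, 128])  -> h/8
print(x16.shape)   # torch.Size([1, 256, 32, 64])   -> h/16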

(2) The macs value computed with the profile function is 19.0 G (i.e. 19.0 × 10⁹ operations).

(3) Computing the FLOPs by hand gives ≈ 20 G.
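
To make the hand count reproducible, here is a minimal sketch of how it can be automated (this helper is my own illustration, not part of thop): it registers a forward hook on every Conv2d and accumulates H_out × W_out × C_out × (C_in / groups) × K_h × K_w, i.e. one multiply-accumulate per output element and kernel weight, with the bias ignored:

import torch
import torch.nn as nn

def count_conv_ops(model, input_size=(1, 3, 512, 1024)):
    # Sum H_out*W_out*C_out*(C_in/groups)*K_h*K_w over all Conv2d layers,
    # counting one multiply-accumulate as one operation and ignoring bias.
    total = 0
    hooks = []

    def hook(module, inputs, output):
        nonlocal total
        k_h, k_w = module.kernel_size
        h_out, w_out = output.shape[2:]
        total += (h_out * w_out * module.out_channels
                  * (module.in_channels // module.groups) * k_h * k_w)

    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(torch.randn(*input_size))
    for h in hooks:
        h.remove()
    return total

print(count_conv_ops(ResNet18()) / 1e9)   # should land close to the 19.0 G reported by profile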

So, as long as the convolution bias operations are not counted, the macs value returned by the profile function is effectively the FLOPs count.

(PS: this is just my personal understanding; criticism and corrections are welcome.)

