PyTorch model FLOPs and parameter count

1. FLOPs calculation:

```python
import re
import torchvision

def get_num_gen(gen):
    return sum(1 for x in gen)

def flops_layer(layer):
    """
    Estimate the number of FLOPs for a layer, given its string representation.
    Only the relevant numbers are extracted and used.

    Args:
        layer (str) : examples
            Linear (512 -> 1000)
            Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
            BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    """
    # pull the integers out of the repr (channel counts, kernel sizes, ...)
    params = re.findall(r'[^a-z](\d+)', layer)
    flops = 0  # layers we do not recognize contribute nothing

    if layer.find('Linear') >= 0:
        C1 = int(params[0])  # in_features
        C2 = int(params[1])  # out_features
        flops = C1 * C2

    elif layer.find('Conv2d') >= 0:
        C1 = int(params[0])  # in_channels
        C2 = int(params[1])  # out_channels
        K1 = int(params[2])  # kernel height
        K2 = int(params[3])  # kernel width

        # feature-map size, hardcoded here; using each layer's actual
        # output size would give a more accurate estimate
        H = 32
        W = 32
        flops = C1 * C2 * K1 * K2 * H * W

    return flops
def calculate_flops(gen):
    """
    Calculate the FLOPs of a model, given a generator over its child modules.
    Only the forward pass is counted.

    Example:
        >>> net = torchvision.models.resnet18()
        >>> calculate_flops(net.children())
    """
    flops = 0

    for child in gen:
        num_children = get_num_gen(child.children())

        # leaf node: estimate its FLOPs from the module repr
        if num_children == 0:
            flops += flops_layer(str(child))

        # container module: recurse into its children
        else:
            flops += calculate_flops(child.children())

    return flops

net = torchvision.models.resnet18()
flops = calculate_flops(net.children())
print(flops / 10**9, 'G')
```
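
As a quick sanity check of the `Conv2d` estimate above, the contribution of a single layer can be worked out by hand. The snippet below applies the formula `C1 * C2 * K1 * K2 * H * W` to the first convolution of resnet18, `Conv2d(3, 64, kernel_size=(7, 7), ...)` (the layer repr quoted in the docstring above), with the hardcoded 32x32 feature map:

```python
# Worked example of the Conv2d estimate: resnet18's first layer is
# Conv2d(3, 64, kernel_size=(7, 7), ...); with the hardcoded 32x32
# feature map the formula gives 3 * 64 * 7 * 7 * 32 * 32 FLOPs.
print(3 * 64 * 7 * 7 * 32 * 32)  # 9633792, i.e. about 0.0096 G
```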

2. Parameter count calculation:

```python
params = list(net.parameters())
k = 0
for i in params:
    l = 1
    print("Weight dimensions of this layer: " + str(list(i.size())))
    for j in i.size():
        l *= j
    print("Parameter count of this layer: " + str(l))
    k = k + l
print("Total parameter count: " + str(k))
```
In PyTorch, computing a model's parameter count and floating-point operations (FLOPs) helps us understand its complexity and resource consumption. The steps below show how to compute both.

### Counting model parameters

To count a model's parameters, use the `parameters()` method provided by `torch.nn.Module`. It returns all parameters of the model; calling `numel()` on each and summing gives the total parameter count.

```python
import torch.nn as nn

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Suppose we have a model
model = nn.Sequential(
    nn.Linear(10, 50),
    nn.ReLU(),
    nn.Linear(50, 1)
)

# Count the parameters
num_params = count_parameters(model)
print(f'The model has {num_params} trainable parameters')
```

### Counting FLOPs

Computing FLOPs is somewhat more involved, because each layer's computation must be derived from the model's architecture. The example below shows how to count the FLOPs of a model consisting of linear layers and a ReLU activation.

```python
import torch
import torch.nn as nn

def count_flops(model, input_size):
    flops = 0
    input_shape = (1, *input_size)
    device = next(model.parameters()).device
    x = torch.randn(input_shape).to(device)
    hooks = []

    def hook_fn(module, input, output):
        nonlocal flops  # accumulate into the enclosing counter
        if isinstance(module, nn.Linear):
            out_features, in_features = module.weight.shape
            flops_per_sample = in_features * out_features
            flops += flops_per_sample * x.size(0)
        elif isinstance(module, nn.Conv2d):
            # simple FLOP estimate for a convolutional layer
            out_h = output.size(2)
            out_w = output.size(3)
            flops_per_sample = (module.in_channels * module.out_channels
                                * module.kernel_size[0] * module.kernel_size[1]
                                * out_h * out_w)
            flops += flops_per_sample * x.size(0)

    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            hooks.append(m.register_forward_hook(hook_fn))

    with torch.no_grad():
        model(x)

    for hook in hooks:
        hook.remove()

    return flops

# Suppose we have a model
model = nn.Sequential(
    nn.Linear(10, 50),
    nn.ReLU(),
    nn.Linear(50, 1)
)

# Count the FLOPs
flops = count_flops(model, input_size=(10,))
print(f'The model requires {flops} FLOPs')
```

### Summary

- **Parameter count**: use `model.parameters()` together with `numel()`.
- **FLOP count**: derive each layer's computation manually and register hook functions with `register_forward_hook`.
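
Beyond hand-rolled counters, off-the-shelf profilers exist. Below is a minimal sketch using the third-party `thop` package (an assumption: it must be installed separately with `pip install thop`, and its output conventions can vary across versions; note it reports multiply-accumulate operations, MACs, rather than raw FLOPs):

```python
import torch
import torchvision
from thop import profile  # assumption: thop is installed (pip install thop)

net = torchvision.models.resnet18()
dummy = torch.randn(1, 3, 224, 224)  # one ImageNet-sized input
macs, params = profile(net, inputs=(dummy,))
print(macs / 10**9, 'GMACs,', params / 10**6, 'M params')
```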