After finishing model.py, you often want to know how many parameters the model has. The helper below reports both the total and the trainable parameter counts; it is a convenient first check when testing a new model.
import torch
# AlexNet is assumed here to come from torchvision; substitute your own model.py class as needed.
from torchvision.models import AlexNet

def get_parameter_number(model):
    total_num = sum(p.numel() for p in model.parameters())
    trainable_num = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return {'Total': total_num, 'Trainable': trainable_num}
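What `p.numel()` contributes per tensor is simply the product of its shape, so the counting logic can be sketched in pure Python. The parameter shapes below are for a hypothetical two-layer MLP (Linear(3, 4) followed by Linear(4, 2)), not taken from any real model:

```python
from math import prod

# Hypothetical shapes: each Linear layer contributes a weight matrix and a bias vector.
param_shapes = [(4, 3), (4,), (2, 4), (2,)]

# Mirrors sum(p.numel() for p in model.parameters())
total_num = sum(prod(shape) for shape in param_shapes)
print(total_num)  # 4*3 + 4 + 2*4 + 2 = 26
```

With real modules the same numbers come from `model.parameters()`; this sketch only makes explicit what `numel()` computes.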
if __name__=="__main__":
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = AlexNet().to(device)
# 打印模型
print(model)
# 查看nvidia-smi -l 2中显存占用
# 要是要输出Estimated Total Size(MB), 则你需要加上你输入图片相同的shape, dtype的torch.Tensor. 你需要注意的是你传入的torch.Tensor的类型应该和weight的类型一致.
# 这个时候在 pass 处断点, 查看 nvidia 显存大小
# 计算方法: 2 * 中间层feature map, 可见: `https://blog.csdn.net/csdnxiekai/article/details/110517751`
signal = torch.zeros([1, 3, 224, 224], dtype=torch.float32).to(device) # [batchsize, channel, height, width]
output = model(signal)
pass
# 计算参数个数
# 参考: `https://blog.csdn.net/qq_41979513/article/details/102369396`
params_dict = get_parameter_number(model)
# 计算参数参数量 torch.float32 的参数为例
size = params_dict["Trainable"] * 4 / (1024 * 1024)
print("the total number of params is {}, trainable params is {}.\nthe size of params is {} MB".format(params_dict["Total"], params_dict["Trainable"], size))
pass
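As a sanity check on the size formula, the parameter count of torchvision's AlexNet can be reproduced by hand from its layer shapes. The shapes below follow the standard AlexNet configuration (verify against `print(model)` on your side); this is a pure-Python sketch, independent of torch:

```python
from math import prod

# Weight shapes: (out_channels, in_channels, kH, kW) for convs,
# (out_features, in_features) for fully connected layers.
# Each layer also has a bias of length equal to its first dimension.
alexnet_weights = [
    (64, 3, 11, 11),      # features.0
    (192, 64, 5, 5),      # features.3
    (384, 192, 3, 3),     # features.6
    (256, 384, 3, 3),     # features.8
    (256, 256, 3, 3),     # features.10
    (4096, 256 * 6 * 6),  # classifier.1
    (4096, 4096),         # classifier.4
    (1000, 4096),         # classifier.6
]

total = sum(prod(w) + w[0] for w in alexnet_weights)  # weight + bias per layer
size_mb = total * 4 / (1024 * 1024)  # float32: 4 bytes per parameter
print(total)              # 61100840
print(round(size_mb, 2))  # 233.08
```

The total, 61,100,840, matches what `get_parameter_number(model)` returns for torchvision's AlexNet, and about 233 MB of float32 weights matches the number printed by the script above.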