Computing the FLOPs of nn.Linear

```python
import torch
import torch.nn as nn

m = nn.Linear(20, 30)
input = torch.randn(128, 3, 20)
output = m(input)
print(output.size())  # torch.Size([128, 3, 30])

# multiplications = number of output elements * input features per output element
flops = (torch.prod(torch.LongTensor(list(output.size()))) * input[0].size(1)).item()
print(list(output.size()))
print(flops)
```

Original · 2020-06-13 17:35:05 · 3066 reads · 2 comments
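The count above can be wrapped in a small helper (a sketch added here, not from the original post; the name `linear_flops` is mine): for `nn.Linear(in_features, out_features)` applied to any tensor whose last dimension is `in_features`, the multiplication count is the number of output elements times `in_features`.

```python
import torch
import torch.nn as nn

def linear_flops(layer: nn.Linear, inp: torch.Tensor) -> int:
    """Multiplications performed by a Linear forward pass on `inp`
    (accumulations add roughly the same number again)."""
    out = layer(inp)
    return out.numel() * layer.in_features

m = nn.Linear(20, 30)
x = torch.randn(128, 3, 20)
print(linear_flops(m, x))  # 128 * 3 * 30 * 20 = 230400
```

This generalizes the snippet above to arbitrary leading (batch) dimensions, since `nn.Linear` only acts on the last axis.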
A worked example of computing Conv2d FLOPs in PyTorch

```python
import torch

conv = torch.nn.Conv2d(1, 8, (2, 3))
input = torch.rand(1, 1, 224, 224)  # batch, channel, height, width (NCHW)
output = conv(input)
print(output.shape)  # torch.Size([1, 8, 223, 222])

bn = torch.nn.BatchNorm2d(8)
l = [conv, bn]
for module in l:
    class_name = str(module.__class__.__name__)
    print(class_name)  # 'Conv2d', then 'BatchNorm2d'
```

Original · 2020-06-13 16:14:06 · 1148 reads · 1 comment
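For the convolution itself, each output element costs `in_channels / groups * kernel_h * kernel_w` multiplications. A minimal sketch of that count (the helper `conv2d_flops` is my naming, assuming a plain non-dilated convolution):

```python
import torch
import torch.nn as nn

def conv2d_flops(conv: nn.Conv2d, inp: torch.Tensor) -> int:
    """Multiplications for one Conv2d forward pass
    (each output element reads one kernel-sized patch per input-channel group)."""
    out = conv(inp)
    n, c_out, h_out, w_out = out.shape
    kh, kw = conv.kernel_size
    return n * c_out * h_out * w_out * (conv.in_channels // conv.groups) * kh * kw

conv = nn.Conv2d(1, 8, (2, 3))
x = torch.rand(1, 1, 224, 224)
print(conv2d_flops(conv, x))  # 1 * 8 * 223 * 222 * 1 * 2 * 3 = 2376288
```

Reading the kernel size and channel counts off the module (rather than hard-coding them) is what the per-module loop above sets up with `class_name`: dispatch on the layer type, then apply the matching formula.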