Error:
Expected 4-dimensional input for 4-dimensional weight 64 3 11 11, but got 3-dimensional input of size [3, 224, 224] instead
Solution
The convolution layer expects a 4-D batched input of shape [N, C, H, W], but a single image of shape [C, H, W] was passed. Prepend a batch dimension of size 1:
For a NumPy array:
img = np.expand_dims(img, 0)
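A quick shape check (a minimal sketch with placeholder data, assuming the image is already in CHW layout):

import numpy as np

# placeholder image in [C, H, W] layout; real data would come from a loader
img = np.zeros((3, 224, 224), dtype=np.float32)
img = np.expand_dims(img, 0)  # prepend the batch axis
print(img.shape)  # (1, 3, 224, 224)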
For a PyTorch tensor:
img = torch.unsqueeze(img, dim=0).float()
Example:
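A minimal end-to-end sketch. AlexNet is used here because its first conv weight is exactly [64, 3, 11, 11], matching the error above; 'cat.jpg' is a hypothetical input file:

import torch
from torchvision import models, transforms
from PIL import Image

img = Image.open('cat.jpg').convert('RGB')  # hypothetical image path

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # produces a [3, 224, 224] tensor: no batch axis yet
])
x = preprocess(img)

model = models.alexnet(pretrained=True)  # first conv weight: [64, 3, 11, 11]
model.eval()

# Without this line the forward pass raises the 4-D/3-D mismatch above.
x = torch.unsqueeze(x, dim=0).float()  # now [1, 3, 224, 224]

with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([1, 1000])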
Supplement (adapted from https://blog.csdn.net/zhonglongshen/article/details/103478814):
import torch
from torchvision import models

VGG = models.vgg16(pretrained=True)
feature = torch.nn.Sequential(*list(VGG.children()))
print(feature)
print('=============')
print(VGG._modules.keys())  # see which top-level parts the model contains ('features', 'classifier', ...)
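As a follow-up sketch (with a random placeholder input), the same batch-dimension rule applies when running only the convolutional part of VGG:

import torch
from torchvision import models

VGG = models.vgg16(pretrained=True)
VGG.eval()

x = torch.rand(1, 3, 224, 224)  # placeholder batch; a real single image needs unsqueeze as above

with torch.no_grad():
    feat = VGG.features(x)  # output of the convolutional stage
print(feat.shape)  # torch.Size([1, 512, 7, 7])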