I'm working with ResNet, but I don't want the final fully connected layer. Instead I want the output to keep a (batch, channels, height, width) layout, like (2, 3, 224, 224). I found this link, which has a detailed answer:
https://discuss.pytorch.org/t/how-to-delete-layer-in-pretrained-model/17648/40
Here is my code:
import torch
import torch.nn as nn
import torchvision
model = torchvision.models.resnet18()
print(model) # Print the model to inspect its layers. Calling .summary() directly doesn't work — that's a Keras-style API, not plain PyTorch.
model = nn.Sequential(*list(model.children())[:-2]) # !! This truncates the network, keeping everything up to layer4.
# The alternative, as in the linked thread, is to define an Identity class and use it to replace
# a layer in the model — e.g. layer4, or fc (names found in the print output above). But that
# cannot remove the final flatten, which happens inside forward(), so the output would still be
# flattened; rather than reshaping it back, it's simpler to take the subset directly.
import urllib.request
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename) # urllib.URLopener was Python 2 only; the try/except in the hub example was just py2/py3 compatibility
from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
# move the input and model to GPU for speed if available
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')
# Note: no need for model.fc = nn.Sequential() here — the truncated nn.Sequential has no fc
# attribute, so that assignment would only register an unused submodule named "fc".
model.eval() # put BatchNorm layers in inference mode
with torch.no_grad():
    print(input_batch.shape) # torch.Size([1, 3, 224, 224])
    output = model(input_batch)
    print(output.shape) # torch.Size([1, 512, 7, 7]) — batch, channels, height, width
# Note: the ImageNet softmax step from the original hub example no longer applies here.
# The truncated model outputs a 512-channel feature map, not a tensor of 1000 class scores,
# so there is nothing to run softmax over.