PyTorch: Extracting the Output of Specific Intermediate Layers from a Pretrained Model

If the model is one you built yourself, you can simply return the feature map of the desired layer from the forward function.
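For example, a minimal sketch of this pattern (TinyNet and its layer names are made up for illustration):

import torch
from torch import nn

class TinyNet(nn.Module):
    # hypothetical model whose forward also returns an intermediate feature map
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, 10)

    def forward(self, x):
        feat = self.conv(x)   # the intermediate feature map we want to expose
        out = self.fc(self.pool(feat).flatten(1))
        return out, feat      # return it alongside the final output

out, feat = TinyNet()(torch.randn(1, 3, 32, 32))
print(out.shape, feat.shape)  # torch.Size([1, 10]) torch.Size([1, 16, 32, 32])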

The following introduces methods for obtaining the output of a specified layer from a pretrained model.

If all you want is the output just before the model's final fully connected layer, simply remove that layer:

import torchvision
import torch
from torch import nn

net = torchvision.models.resnet18(pretrained=False)
print("model ", net)
net.fc = nn.Sequential()  # an empty Sequential acts as an identity mapping
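
A quick sanity check: after this change the network outputs resnet18's 512-dimensional pooled feature (a sketch, assuming the same 224×224 input used throughout this post):

x = torch.randn(1, 3, 224, 224)
print(net(x).shape)  # torch.Size([1, 512])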

Similarly, for a vgg19 network, if you want the output of the first fully connected layer inside the classifier submodule, you can modify just that submodule.

import torchvision
import torch
from torch import nn

net = torchvision.models.vgg19_bn(pretrained=False).cuda()
# keep only the first fully connected layer; its output dimension is 4096
net.classifier = nn.Sequential(*list(net.classifier.children())[:-6])
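
Again a quick check; the truncated classifier should now yield a 4096-dimensional vector (input moved to the GPU to match the .cuda() call above):

x = torch.randn(1, 3, 224, 224).cuda()
print(net(x).shape)  # torch.Size([1, 4096])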

Next, some general-purpose methods:

Method 1:

For simple models, you can directly iterate over the child modules:

import torchvision
import torch

net = torchvision.models.resnet18(pretrained=False)
print("model ", net)
out = []
x = torch.randn(1, 3, 224, 224)
return_layer = "maxpool"
for name, module in net.named_children():
    print(name)
    # print(module)
    x = module(x)   # feed the running activation through each top-level child
    print(x.shape)
    if name == return_layer:
        out.append(x.data)
        break       # stop once the requested layer has been reached
print(out[0].shape)

The drawback of this method is that it can only capture the outputs of top-level child modules; for a model whose nn.Sequential() containers wrap many layers, it cannot reach a specific layer inside them.
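
A small demonstration of this limitation with the resnet18 net from above: named_children() yields only the top-level blocks, while the layers inside them appear only under named_modules():

print([name for name, _ in net.named_children()])
# ['conv1', 'bn1', 'relu', 'maxpool', 'layer1', 'layer2', 'layer3', 'layer4', 'avgpool', 'fc']
print('layer1.0.conv1' in dict(net.named_modules()))  # True, yet unreachable via named_children()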

Method 2:

Use the built-in helper provided by torchvision. Reference: Pytorch获取中间层输出的几种方法 - 知乎 (zhihu.com).

This method has the same limitation as Method 1: it cannot obtain the output of a specific layer inside a child module.

from collections import OrderedDict
 
import torch
from torch import nn
 
 
class IntermediateLayerGetter(nn.ModuleDict):
    """
    Module wrapper that returns intermediate layers from a model
    It has a strong assumption that the modules have been registered
    into the model in the same order as they are used.
    This means that one should **not** reuse the same nn.Module
    twice in the forward if you want this to work.
    Additionally, it is only able to query submodules that are directly
    assigned to the model. So if `model` is passed, `model.feature1` can
    be returned, but not `model.feature1.layer2`.
    Arguments:
        model (nn.Module): model on which we will extract the features
        return_layers (Dict[name, new_name]): a dict containing the names
            of the modules for which the activations will be returned as
            the key of the dict, and the value of the dict is the name
            of the returned activation (which the user can specify).
    """
    
    def __init__(self, model, return_layers):
        if not set(return_layers).issubset([name for name, _ in model.named_children()]):
            raise ValueError("return_layers are not present in model")
 
        orig_return_layers = return_layers
        return_layers = {k: v for k, v in return_layers.items()}
        layers = OrderedDict()
        for name, module in model.named_children():
            layers[name] = module
            if name in return_layers:
                del return_layers[name]
            if not return_layers:
                break
 
        super(IntermediateLayerGetter, self).__init__(layers)
        self.return_layers = orig_return_layers
 
    def forward(self, x):
        out = OrderedDict()
        for name, module in self.named_children():
            x = module(x)
            if name in self.return_layers:
                out_name = self.return_layers[name]
                out[out_name] = x
        return out

# example
import torch
import torchvision

m = torchvision.models.resnet18(pretrained=True)
# extract layer1 and layer3, giving them the names `feat1` and `feat2`
new_m = torchvision.models._utils.IntermediateLayerGetter(m, {'layer1': 'feat1', 'layer3': 'feat2'})
out = new_m(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])
# [('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]

Supplement:

The create_feature_extractor method builds a new module that returns intermediate nodes of a given model as a dictionary, with user-specified strings as keys and the requested outputs as values.

This method is more general than IntermediateLayerGetter: it is not limited to the outputs of the model's top-level child modules. For that reason, create_feature_extractor is the recommended approach.

# Feature extraction with resnet
import torch
import torchvision
from torchvision.models.feature_extraction import create_feature_extractor

model = torchvision.models.resnet18()
# extract layer1 and layer3, giving them the names `feat1` and `feat2`
model = create_feature_extractor(model, {'layer1': 'feat1', 'layer3': 'feat2'})
out = model(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])

Extracting a feature layer inside the features submodule of vgg16:

# vgg16
backbone = torchvision.models.vgg16_bn(pretrained=True)
# print(backbone)
backbone = create_feature_extractor(backbone, return_nodes={"features.42": "0"})  # "0" is the key in the returned dict
out = backbone(torch.rand(1, 3, 224, 224))
print(out["0"].shape)
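
If you are unsure which node names (such as "features.42" above) are available, torchvision also provides get_graph_node_names to list them:

from torchvision.models.feature_extraction import get_graph_node_names

train_nodes, eval_nodes = get_graph_node_names(torchvision.models.vgg16_bn())
print(train_nodes)  # includes 'features.42', 'features.43', 'avgpool', 'classifier.0', ...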

Method 3:

Use hook functions to capture the output of an arbitrary layer.

import torchvision
import torch
from torch import nn
from torchvision.models import resnet50, resnet18


resnet = resnet18()
print(resnet)

features_in_hook = []
features_out_hook = []

# forward hook: fea_in is a tuple of the layer's inputs, fea_out is its output
def hook(module, fea_in, fea_out):
    features_in_hook.append(fea_in[0].data)   # input of the hooked layer (first element of the input tuple)
    # record only the values from the forward pass
    features_out_hook.append(fea_out.data)    # output of the hooked layer
    return None

layer_name = 'avgpool'
for name, module in resnet.named_modules():
    print(name)
    if name == layer_name:
        module.register_forward_hook(hook=hook)

# test
x = torch.randn(1, 3, 224, 224)
resnet(x)
# print(features_in_hook)   # input of the hooked layer
print(features_out_hook[0].shape)  # output of the hooked layer: torch.Size([1, 512, 1, 1])
print(features_out_hook[0])

The advantage of Method 3:

By iterating over resnet.named_modules(), you can capture the input and output of any intermediate layer, even one nested deep inside a block (see the sketch after the comparison below).

Let us compare whether the specified layer's output obtained via Method 2 and via Method 3 is equal. The result is True, so the two methods produce the same result.

new_m = torchvision.models._utils.IntermediateLayerGetter(resnet, {'avgpool': "feat1"})
out = new_m(x)
print(out['feat1'].data)
# print([(k, v.shape) for k, v in out.items()])
print(torch.equal(features_out_hook[0], out['feat1'].data))    # True
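
As promised above, here is a sketch of what only hooks can do: attach to a layer nested inside a residual block, such as layer1.0.conv1 (reusing the hook function defined earlier; note that the avgpool hook registered above still fires as well):

features_in_hook.clear()
features_out_hook.clear()
for name, module in resnet.named_modules():
    if name == 'layer1.0.conv1':   # a layer buried inside the first residual block
        module.register_forward_hook(hook=hook)
resnet(x)
# layer1.0.conv1 runs before avgpool, so its output is recorded first
print(features_out_hook[0].shape)  # torch.Size([1, 64, 56, 56])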

Supplement: net._modules gives access to the layers inside a child module, but for complex models this quickly becomes unwieldy.

for name, module in resnet._modules['layer1']._modules.items():
    print(name)

Supplement: adding your own learnable module to a pretrained model

First, rebuild the complete pretrained architecture from the model's source code and add your own module on top, yielding a new model new_model. Then load the pretrained weights into new_model, freeze them, and train only the newly added module.
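
For concreteness, a minimal sketch of how new_model might be built (the two-layer head here is a made-up example; the point is that all backbone parameter names stay identical to the pretrained checkpoint, so the key filtering below works):

import torch
from torch import nn
from torchvision.models import resnet18

# hypothetical new model: resnet18 rebuilt from source, with fc replaced by a new head
new_model = resnet18(pretrained=False)
new_model.fc = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))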

from torchvision.models import resnet18

net = resnet18(pretrained=True)
pretrained_dict = net.state_dict()
model_dict = new_model.state_dict()
# drop keys of pretrained_dict that are absent from model_dict or whose shapes differ
pretrained_dict = {k: v for k, v in pretrained_dict.items()
                   if k in model_dict and v.shape == model_dict[k].shape}
# update the existing model_dict
model_dict.update(pretrained_dict)
# load the state_dict we actually need
new_model.load_state_dict(model_dict)

# freeze the pretrained weights; note named_parameters (not named_modules) yields
# the parameter names that match the state_dict keys
for name, param in new_model.named_parameters():
    if name in pretrained_dict.keys():
        param.requires_grad = False
    else:
        param.requires_grad = True
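
A quick check that only the newly added module remains trainable (with the hypothetical new_model sketched above):

print([name for name, p in new_model.named_parameters() if p.requires_grad])
# ['fc.0.weight', 'fc.0.bias', 'fc.2.weight', 'fc.2.bias']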

Freezing the weights of a specified submodule:

net = torchvision.models.vgg19_bn(pretrained=False)
for param in net.features.parameters():
    param.requires_grad = False

# define the optimizer over the trainable parameters only
params = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
                            momentum=0.9, weight_decay=0.0005)

Getting a submodule of the model and saving its weights:

import torchvision
import torch
from torch import nn
from torchvision.models import resnet50, resnet18

resnet = resnet18()
layer1 = resnet.get_submodule("layer1")
torch.save(layer1.state_dict(), './layer1.pth')
# load the saved weights back into the submodule
layer1.load_state_dict(torch.load("./layer1.pth"))
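
The saved weights can likewise be loaded into the matching submodule of a freshly built network:

new_resnet = resnet18()
new_resnet.get_submodule("layer1").load_state_dict(torch.load("./layer1.pth"))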
