Freezing the parameters of selected model layers

Purpose

In transfer learning or self-supervised learning, a model is usually pre-trained first, and then either its parameters are used to initialize the target-task model, or the pre-trained model is frozen outright so that its parameters are no longer updated during training.
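As a concrete illustration of the second case, here is a minimal sketch that freezes an entire pre-trained model and trains only a newly added head; the torchvision ResNet-18 backbone and the 10-class head are illustrative assumptions, not from the original text:

import torch
import torch.nn as nn
from torchvision import models

# Load a pre-trained backbone (illustrative choice).
backbone = models.resnet18(pretrained=True)

# Freeze every parameter of the pre-trained model.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classifier head; its fresh parameters are trainable by default.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head receives gradient updates.
optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3)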

Method

from collections.abc import Iterable


def set_freeze_by_names(model, layer_names, freeze=True):
    # Freeze (or unfreeze) the direct children of `model` whose names appear
    # in `layer_names`. A single name may be passed as a plain string.
    if not isinstance(layer_names, Iterable) or isinstance(layer_names, str):
        layer_names = [layer_names]
    for name, child in model.named_children():
        if name not in layer_names:
            continue
        for param in child.parameters():
            param.requires_grad = not freeze


def freeze_by_names(model, layer_names):
    set_freeze_by_names(model, layer_names, True)


def unfreeze_by_names(model, layer_names):
    set_freeze_by_names(model, layer_names, False)


def set_freeze_by_idxs(model, idxs, freeze=True):
    # Freeze (or unfreeze) the direct children of `model` selected by index.
    # Negative indices count from the end, as with normal Python indexing.
    if not isinstance(idxs, Iterable):
        idxs = [idxs]
    num_child = len(list(model.children()))
    idxs = tuple(map(lambda idx: num_child + idx if idx < 0 else idx, idxs))
    for idx, child in enumerate(model.children()):
        if idx not in idxs:
            continue
        for param in child.parameters():
            param.requires_grad = not freeze


def freeze_by_idxs(model, idxs):
    set_freeze_by_idxs(model, idxs, True)


def unfreeze_by_idxs(model, idxs):
    set_freeze_by_idxs(model, idxs, False)
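A quick usage sketch for the helpers above, on a toy nn.Sequential model (the model itself is made up purely to show the calling convention):

import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 20),   # child name '0'
    nn.ReLU(),           # child name '1'
    nn.Linear(20, 2),    # child name '2'
)

# Freeze the first linear layer by child name.
freeze_by_names(model, '0')
print([p.requires_grad for p in model[0].parameters()])   # [False, False]

# Freeze the last child by negative index, then unfreeze it again.
freeze_by_idxs(model, -1)
unfreeze_by_idxs(model, -1)
print([p.requires_grad for p in model[2].parameters()])   # [True, True]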

My usage

# Select the parameters to freeze: all_model_list holds the names of the
# backbone children that should stay frozen.
for name, child in self.backbone.named_children():
    if name in all_model_list:
        for param in child.parameters():
            param.requires_grad = False
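A step that usually goes together with this (my own addition, not part of the original snippet) is to hand only the still-trainable parameters to the optimizer, so the frozen backbone layers are skipped entirely:

import torch

# Keep only the parameters that still require gradients.
trainable_params = [p for p in self.backbone.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable_params, lr=1e-4)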

Approach

  1. Load the whole model.
  2. Iterate over its sub-modules to get each child module and its name (model.named_children()).
  3. Freeze the matching child's parameters by setting requires_grad = False on every param in child.parameters() (see the sketch below).
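A short end-to-end sketch of these three steps, with a parameter count to confirm that the freeze took effect (the two-block model and its names are hypothetical):

import torch.nn as nn

# 1. Build/load the whole model (a hypothetical backbone + head).
model = nn.Sequential()
model.add_module('backbone', nn.Linear(128, 64))
model.add_module('head', nn.Linear(64, 10))

# 2. Get each sub-module's name and child; 3. freeze the matching child.
for name, child in model.named_children():
    if name == 'backbone':
        for param in child.parameters():
            param.requires_grad = False

# Verify: count trainable vs. total parameters.
total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'trainable {trainable} / total {total}')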
