Weight Decay
The goal of L2 regularization is to drive the weights toward smaller values, which mitigates overfitting to some extent; this is why weight decay is also referred to as L2 regularization. Strictly speaking, the two coincide for plain SGD, while adaptive optimizers such as AdamW apply the decay in a decoupled form.
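A minimal sketch with toy tensors, assuming plain SGD, showing why the two names describe the same thing: the gradient of the penalty (lam/2)*||w||^2 is lam*w, so adding the penalty to the loss produces exactly the same step as shrinking the weights directly in the update rule. (AdamW, used below, instead applies the decay in decoupled form.)

import torch

w = torch.randn(3, requires_grad=True)
x = torch.randn(3)
lr, lam = 0.1, 0.01

# Route 1: add the L2 penalty (lam/2)*||w||^2 to the loss, then take an SGD step
loss = (w * x).sum() + 0.5 * lam * (w ** 2).sum()
g = torch.autograd.grad(loss, w)[0]
w1 = w.detach() - lr * g

# Route 2: apply weight decay directly in the update rule
g_plain = torch.autograd.grad((w * x).sum(), w)[0]
w2 = w.detach() - lr * (g_plain + lam * w.detach())

print(torch.allclose(w1, w2))  # True: identical updates under SGD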
Weight Decay in BERT
Not every parameter should be decayed: by common convention, bias terms and LayerNorm weights (matched by the name LayerNorm.weight) are excluded, since they are low-dimensional and penalizing them tends to hurt rather than help. The parameter grouping below implements this.
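As a concrete illustration of the substring matching used in the snippet below, here is a minimal sketch; the two parameter names are typical entries yielded by model.named_parameters() for a Hugging Face BERT model:

no_decay = ['bias', 'LayerNorm.weight']
for name in ['bert.encoder.layer.0.attention.self.query.weight',
             'bert.encoder.layer.0.attention.output.LayerNorm.weight']:
    print(name, '=> decay' if not any(nd in name for nd in no_decay) else '=> no decay')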
import torch
import torch.nn as nn
# transformers' bundled AdamW has been deprecated in favor of torch.optim.AdamW,
# which accepts the same per-group 'weight_decay' settings
from torch.optim import AdamW
from transformers import BertConfig, BertForSequenceClassification
# Use the GPU if available; the model is moved there via model.to(device)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
config = BertConfig.from_pretrained("bert-base-uncased", num_labels=2, hidden_dropout_prob=0.2)
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", config=config)
model.to(device)
# Parameters whose names contain these substrings get no weight decay
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
    {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
     'weight_decay': 1e-2},
    {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0},
]
# Mirrors the first dict in optimizer_grouped_parameters: these parameters are decayed
need_decay = []
for n, p in model.named_parameters():
    if not any(nd in n for nd in no_decay):
        need_decay.append(p)
# Mirrors the second dict in optimizer_grouped_parameters: these parameters are not decayed
not_decay = []
for n, p in model.named_parameters():
    if any(nd in n for nd in no_decay):
        not_decay.append(p)
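# Hypothetical sanity check (not part of the original snippet): the two lists
# should partition all of the model's parameters between them
assert len(need_decay) + len(not_decay) == len(list(model.parameters()))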
# AdamW implements decoupled weight decay: each group's 'weight_decay'
# is applied directly to the parameters, separately from the gradient step
optimizer = AdamW(optimizer_grouped_parameters, lr=1e-5)
criterion = nn.CrossEntropyLoss()
print("Done")