Reference: https://github.com/649453932/Bert-Chinese-Text-Classification-Pytorch
Model code study - CLS text classification - notes on the Bert-Chinese-Text-Classification-Pytorch code - training and testing process
baseDir: Bert-Chinese-Text-Classification-Pytorch/
Contents
def init_network(model, method='xavier', exclude='embedding', seed=123):
def train(config, model, train_iter, dev_iter, test_iter):
def evaluate(config, model, data_iter, test=False):
def test(config, model, test_iter):
Notes on ./train_eval.py
Module level
- Judging from the imports, metrics is imported from sklearn, perhaps for use during testing? -> It is indeed used during testing. With normalize=False it counts how many predictions are correct; without it, it returns the correct fraction relative to the total count
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0,0,0,2,1,3,4,5]
>>> y_true = [0,0,0,6,6,6,6,6]
>>> accuracy_score(y_true, y_pred)
0.375
>>> accuracy_score(y_true, y_pred, normalize=False)
3
- BertAdam is imported from pytorch_pretrained_bert.optimization: a BERT version of the Adam optimizer with fixed weight decay, warmup, and linear learning-rate decay.
# coding: UTF-8
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn import metrics
import time
from utils import get_time_dif
from pytorch_pretrained_bert.optimization import BertAdam
def init_network(model, method='xavier', exclude='embedding', seed=123):
- Where is this function called? It does not seem to be called anywhere -> apparently nowhere; ignored for now
- Given the parameters, how should seed=123 be understood? It does not seem to be used -> seed fields all aim at "reproducibility" of results. After discussion: on the same machine, using exactly the same seed turns the otherwise random parameter initialization into a fixed "random" draw, so results can be reproduced. However, we also concluded that using the same seed across different machines may not have the same effect.
# Weight initialization, xavier by default
def init_network(model, method='xavier', exclude='embedding', seed=123):
for name, w in model.named_parameters():
if exclude not in name:
if len(w.size()) < 2:
continue
if 'weight' in name:
if method == 'xavier':
nn.init.xavier_normal_(w)
elif method == 'kaiming':
nn.init.kaiming_normal_(w)
else:
nn.init.normal_(w)
elif 'bias' in name:
nn.init.constant_(w, 0)
else:
pass
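As a quick sanity check, the initialization loop above can be exercised on a small stand-in model (the two-layer network below is hypothetical, not from the repo):

```python
import torch
import torch.nn as nn

def init_network(model, method='xavier', exclude='embedding', seed=123):
    # same logic as above: skip any parameter whose name contains 'embedding',
    # and skip 1-D tensors (len(w.size()) < 2), which includes all bias vectors
    for name, w in model.named_parameters():
        if exclude not in name:
            if len(w.size()) < 2:
                continue
            if 'weight' in name:
                if method == 'xavier':
                    nn.init.xavier_normal_(w)
                elif method == 'kaiming':
                    nn.init.kaiming_normal_(w)
                else:
                    nn.init.normal_(w)
            elif 'bias' in name:
                nn.init.constant_(w, 0)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
init_network(model)
print(model[0].weight.std())  # weights re-drawn from a xavier-normal distribution
```

Note that because bias vectors are 1-D, the `len(w.size()) < 2` check skips them before the `'bias' in name` branch is ever reached, so `constant_(w, 0)` never actually fires for ordinary `nn.Linear` biases.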
def train(config, model, train_iter, dev_iter, test_iter):
- The parameters are config, the model, and iterators for the train, validation, and test sets
- Record the current time as the start time
- How to understand model.train()? Tracing the arguments back to run.py: model = x.Model(config).to(config.device), with model_name = args.model # bert, x = import_module('models.' + model_name), config = x.Config(dataset). So is model.train() the .train() method coming from the pretrained BERT? -> model.train() switches the model into training mode, as opposed to model.eval(), which switches it into evaluation mode. Because of dropout and similar layers, the model needs different behavior during training versus testing (e.g. during validation and testing we do not want repeated runs to give different results, so dropout must be turned off)
- All config values come from ./models/bert.py; to adjust hyperparameters, edit bert.py
- How to understand the optimizer settings below? -> warmup here means the learning rate first increases and then decreases; this may help quickly lock onto a promising region and then refine within it. Also note: after discussion, the learning rate during fine-tuning should not be too large, keep it somewhat small
- param_optimizer = list(model.named_parameters())
- no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
- optimizer_grouped_parameters = [{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]
- optimizer = BertAdam(optimizer_grouped_parameters,
lr=config.learning_rate,
warmup=0.05,
t_total=len(train_iter) * config.num_epochs)
- total_batch records how many batches have been processed
- Initialize the best loss on the validation set to inf: dev_best_loss = float('inf')
- last_improve is initialized to 0 and records the batch count at the last time the validation loss decreased (training proceeds batch by batch; once all the data has been through once, that is one epoch)
- flag = False records whether there has been no improvement for a long time; likely used for an early break
- model.train() is called once more. How to understand this, what does BERT's .train() do? -> switches to training mode, appears in tandem with .eval()
- ※ Start the loop over config.num_epochs epochs. The loop iterates directly with for ... in train_iter, so as suspected earlier, train_iter is an iterable object. trains is a tuple (x, seq_len, mask), which is treated as the input x to the model. In the post on the model, the forward function reads: context = x[0] # the input sentence; mask = x[2] # mask over the padding part, same size as the sentence, with padding marked 0, e.g. [1, 1, 1, 1, 0, 0];
_, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False); out = self.fc(pooled). So this is an unpacking step.
- model.zero_grad() is a standard idiom, but how to understand it? -> gradients in PyTorch accumulate, so from the per-batch point of view they need to be cleared for each batch
- loss = F.cross_entropy(outputs, labels), loss.backward(). Cross-entropy loss; loss.backward() is automatic gradient backpropagation, and since gradients are allowed to flow through the whole model, nothing is frozen. Why can outputs and labels be compared directly with cross-entropy? -> printing outputs and labels to inspect: from the printed results, for softmax/cross-entropy-style losses, the target is simply the index of the correct label!
outputs tensor([[ 0.3079, 0.1015, -0.5626, ..., -0.0208, 0.2517, 0.1984],
[-0.1730, 0.1862, -0.6964, ..., -0.2710, 0.4122, 0.3804],
[-0.2523, 0.0576, -0.1686, ..., -0.2864, 0.3397, 0.0802],
...,
[ 0.1749, -0.2050, -0.2825, ..., -0.5576, -0.0727, 0.1467],
[ 0.1107, -0.3328, -0.5910, ..., -0.5746, -0.1585, 0.1143],
[ 0.3721, -0.0540, -0.5997, ..., -0.2982, 0.0122, 0.4152]],
device='cuda:0', grad_fn=<AddmmBackward>)
outputs size torch.Size([128, 10])
labels tensor([7, 5, 8, 1, 9, 9, 0, 6, 7, 2, 9, 9, 2, 3, 9, 3, 7, 0, 5, 6, 1, 7, 6, 5,
1, 4, 0, 4, 0, 8, 9, 0, 9, 9, 0, 4, 4, 7, 1, 8, 3, 6, 9, 3, 1, 6, 7, 7,
5, 3, 6, 0, 7, 9, 2, 8, 5, 6, 7, 6, 6, 6, 7, 0, 0, 7, 2, 3, 6, 6, 3, 5,
5, 9, 4, 1, 0, 8, 5, 4, 7, 4, 2, 3, 1, 4, 3, 3, 7, 8, 3, 3, 1, 9, 5, 5,
1, 4, 5, 2, 7, 3, 3, 0, 6, 5, 8, 8, 4, 1, 8, 3, 0, 2, 8, 5, 6, 4, 0, 6,
4, 0, 3, 6, 3, 3, 3, 7], device='cuda:0')
labels len 128
- Analysis of the F.cross_entropy(outputs, labels) code: see https://blog.csdn.net/CuriousLiu/article/details/109995539
- How to understand optimizer.step()? All optimizers implement step(), which updates all parameters. Once the gradients have been computed, e.g. by backward(), this function can be called. Is this where the optimizer does its work? -> a standard idiom; it can be understood as applying the optimizer's parameter update
- Every 100 batches, report performance on the training and validation sets (with training vs. validation sets in mind, watch for overfitting)
- true = labels.data.cpu() and predic = torch.max(outputs.data, 1)[1].cpu(): these two lines need printing to inspect. They appear to access the .data attribute of the labels tensor, but why move it to the CPU, and how to understand torch.max here? Need to print labels, .data, etc. to check -> as shown below, this mainly extracts the labels to check whether the predictions match them
true tensor([3, 4, 1, 7, 5, 5, 9, 1, 8, 4, 3, 7, 5, 2, 1, 8, 1, 1, 8, 4, 4, 6, 7, 1,
9, 4, 2, 9, 4, 2, 2, 9, 8, 9, 1, 3, 9, 5, 9, 6, 7, 2, 9, 5, 9, 4, 5, 6,
8, 1, 2, 1, 4, 0, 5, 4, 9, 6, 5, 5, 2, 4, 5, 5, 7, 8, 6, 7, 7, 2, 9, 0,
4, 6, 7, 2, 9, 7, 9, 0, 2, 9, 9, 4, 9, 0, 0, 4, 1, 2, 5, 5, 7, 0, 5, 9,
5, 3, 4, 6, 8, 3, 5, 9, 3, 9, 4, 9, 5, 4, 6, 2, 3, 6, 7, 4, 6, 2, 2, 2,
0, 1, 6, 4, 4, 2, 2, 3])
——————————————————————————————————————————————————————————————————————————————————
predict tensor([8, 5, 5, 0, 5, 1, 5, 5, 8, 5, 0, 5, 8, 5, 5, 5, 9, 5, 5, 0, 0, 6, 5, 9,
4, 8, 5, 5, 5, 5, 0, 5, 5, 5, 5, 6, 5, 5, 5, 9, 5, 5, 5, 5, 5, 5, 5, 5,
5, 9, 5, 5, 5, 0, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 0, 5, 5, 5,
5, 0, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
0, 8, 5, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 5, 5, 5, 5, 5, 1, 5,
0, 5, 5, 5, 5, 8, 0, 5])
——————————————————————————————————————————————————————————————————————————————————
torch.max(outputs.data, 1) torch.return_types.max(
values=tensor([0.5065, 0.7543, 0.6870, 0.5797, 0.8060, 0.5807, 0.8916, 0.8384, 0.6661,
0.7025, 0.6533, 0.5792, 0.4674, 0.4923, 0.7330, 0.6329, 0.7567, 0.8452,
0.5539, 0.5508, 0.8430, 0.7644, 0.4222, 0.6187, 0.4145, 0.4590, 0.6177,
0.7669, 0.7348, 0.7471, 0.5506, 0.5542, 0.8766, 0.7319, 0.8065, 0.7228,
0.5451, 0.9202, 0.7277, 0.3017, 0.6730, 0.5296, 0.8899, 0.9897, 0.7398,
0.6049, 0.7202, 0.6861, 0.6422, 0.5075, 0.8285, 0.6734, 0.7960, 0.6078,
0.6625, 0.6545, 0.7238, 0.6220, 0.6018, 0.8207, 0.9552, 0.7145, 0.7219,
0.7507, 0.6705, 0.4326, 0.6819, 0.4687, 0.8995, 0.6956, 0.5216, 0.6844,
0.6044, 0.5092, 0.5973, 0.6014, 0.9122, 0.7713, 0.8200, 0.7941, 0.6144,
0.5310, 0.7001, 0.3465, 0.5593, 0.4223, 0.6370, 0.6482, 0.7080, 0.6428,
0.7696, 0.8263, 0.5839, 0.7708, 0.7660, 0.8303, 0.7790, 0.6033, 0.4704,
0.7534, 0.6832, 0.5292, 0.8298, 0.6661, 0.5930, 0.6637, 0.5390, 1.1338,
0.9344, 0.2917, 0.4034, 0.8946, 0.6636, 0.4957, 0.8308, 0.9687, 0.6173,
0.7422, 0.5396, 0.6783, 0.6139, 0.8782, 0.9697, 0.8204, 0.5765, 0.3932,
0.8845, 0.7806], device='cuda:0'),
indices=tensor([8, 5, 5, 0, 5, 1, 5, 5, 8, 5, 0, 5, 8, 5, 5, 5, 9, 5, 5, 0, 0, 6, 5, 9,
4, 8, 5, 5, 5, 5, 0, 5, 5, 5, 5, 6, 5, 5, 5, 9, 5, 5, 5, 5, 5, 5, 5, 5,
5, 9, 5, 5, 5, 0, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 0, 5, 5, 5,
5, 0, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
0, 8, 5, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 6, 5, 0, 5, 5, 5, 5, 5, 1, 5,
0, 5, 5, 5, 5, 8, 0, 5], device='cuda:0'))
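The prints above can be reproduced on a small scale: F.cross_entropy takes raw logits of shape [batch, num_classes] plus integer class indices (not one-hot vectors), and torch.max(..., 1) returns a (values, indices) pair whose indices are the predicted classes. A minimal sketch with made-up tensors:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)          # stand-in for outputs: [batch=4, num_classes=10]
labels = torch.tensor([7, 5, 8, 1])  # integer class indices, as in the dump above

loss = F.cross_entropy(logits, labels)  # applies log-softmax + NLL internally
values, pred = torch.max(logits, 1)     # row-wise max value and its column index
print(loss.item())                      # a positive scalar
print(pred)                             # predicted class per sample
```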
- metrics imported from sklearn is used for metric computation? metrics.accuracy_score(true, predict) also needs inspection, reference: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2
- Call the evaluate function to get acc and loss on the validation set; see evaluate below
- Why is model.train() called when total_batch % 100 == 0, and what does that call do? Because the model here is the pretrained BERT? -> evaluate switched the model into eval mode, so it now needs to be switched back into train mode
- Finally, test() is called to evaluate on the final test set; see the test function below
def train(config, model, train_iter, dev_iter, test_iter):
start_time = time.time()
model.train()
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]
# optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
optimizer = BertAdam(optimizer_grouped_parameters,
lr=config.learning_rate,
warmup=0.05,
t_total=len(train_iter) * config.num_epochs)
total_batch = 0  # record how many batches have been processed
dev_best_loss = float('inf')
last_improve = 0  # batch count at the last validation-loss improvement
flag = False  # whether there has been no improvement for a long time
model.train()
for epoch in range(config.num_epochs):
print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
for i, (trains, labels) in enumerate(train_iter):
outputs = model(trains)
model.zero_grad()
loss = F.cross_entropy(outputs, labels)
loss.backward()
optimizer.step()
if total_batch % 100 == 0:
# every so many batches, report performance on the training and validation sets
true = labels.data.cpu()
predic = torch.max(outputs.data, 1)[1].cpu()
train_acc = metrics.accuracy_score(true, predic)
dev_acc, dev_loss = evaluate(config, model, dev_iter)
if dev_loss < dev_best_loss:
dev_best_loss = dev_loss
torch.save(model.state_dict(), config.save_path)
improve = '*'
last_improve = total_batch
else:
improve = ''
time_dif = get_time_dif(start_time)
msg = 'Iter: {0:>6}, Train Loss: {1:>5.2}, Train Acc: {2:>6.2%}, Val Loss: {3:>5.2}, Val Acc: {4:>6.2%}, Time: {5} {6}'
print(msg.format(total_batch, loss.item(), train_acc, dev_loss, dev_acc, time_dif, improve))
model.train()
total_batch += 1
if total_batch - last_improve > config.require_improvement:
# if validation loss has not dropped for over 1000 batches, stop training
print("No optimization for a long time, auto-stopping...")
flag = True
break
if flag:
break
test(config, model, test_iter)
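The recurring model.train() / model.eval() question can be checked directly on a dropout layer (a toy example, not from the repo):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()          # training mode: dropout is active and stochastic
out_train = drop(x)   # each entry is either zeroed or scaled by 1/(1-p) = 2

drop.eval()           # evaluation mode: dropout becomes the identity
out_eval = drop(x)

print(out_train)      # entries are 0.0 or 2.0
print(out_eval)       # all ones, deterministic
```

This is exactly why train() calls model.train() again after each evaluate(): evaluate() leaves the model in eval mode.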
def evaluate(config, model, data_iter, test=False):
- evaluate can be used to validate on the training and validation sets; it is called from the train function
- How to understand model.eval()? The model here again seems to be the pretrained BERT, so what does .eval() mean? -> it switches the model into evaluation mode, the counterpart of model.train()
- Initialize two empty numpy arrays
- Why with torch.no_grad()? -> torch.no_grad() is a context manager; the code wrapped in it does not track gradients, so it looks like during evaluation the model's gradient bookkeeping (and hence parameter updates) needs to be suppressed
- test is a parameter passed into evaluate; it is False during training and True during testing
def evaluate(config, model, data_iter, test=False):
model.eval()
loss_total = 0
predict_all = np.array([], dtype=int)
labels_all = np.array([], dtype=int)
with torch.no_grad():
for texts, labels in data_iter:
outputs = model(texts)
loss = F.cross_entropy(outputs, labels)
loss_total += loss
labels = labels.data.cpu().numpy()
predic = torch.max(outputs.data, 1)[1].cpu().numpy()
labels_all = np.append(labels_all, labels)
predict_all = np.append(predict_all, predic)
acc = metrics.accuracy_score(labels_all, predict_all)
if test:
report = metrics.classification_report(labels_all, predict_all, target_names=config.class_list, digits=4)
confusion = metrics.confusion_matrix(labels_all, predict_all)
return acc, loss_total / len(data_iter), report, confusion
return acc, loss_total / len(data_iter)
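The effect of torch.no_grad() in evaluate can be observed on a tiny computation (illustrative only):

```python
import torch
import torch.nn as nn

layer = nn.Linear(3, 2)
x = torch.randn(1, 3)

y = layer(x)                   # outside no_grad: autograd tracks the graph
print(y.requires_grad)         # True

with torch.no_grad():
    y_eval = layer(x)          # inside no_grad: no graph is recorded
print(y_eval.requires_grad)    # False: saves memory, backward() is impossible here
```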
def test(config, model, test_iter):
- Test code: calls the evaluate function with test=True
- model.eval() is used here as well. How to understand it? -> same as in evaluate: switch the model into evaluation mode
def test(config, model, test_iter):
# test
model.load_state_dict(torch.load(config.save_path))
model.eval()
start_time = time.time()
test_acc, test_loss, test_report, test_confusion = evaluate(config, model, test_iter, test=True)
msg = 'Test Loss: {0:>5.2}, Test Acc: {1:>6.2%}'
print(msg.format(test_loss, test_acc))
print("Precision, Recall and F1-Score...")
print(test_report)
print("Confusion Matrix...")
print(test_confusion)
time_dif = get_time_dif(start_time)
print("Time usage:", time_dif)
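What test() prints can be previewed with sklearn directly; the labels and class names below are made up for illustration (the real names come from config.class_list):

```python
from sklearn import metrics

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

# per-class precision/recall/F1 plus averages, same call as in test()
print(metrics.classification_report(
    y_true, y_pred, target_names=['sports', 'finance', 'tech'], digits=4))
# rows are true classes, columns are predicted classes
print(metrics.confusion_matrix(y_true, y_pred))
```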
Notes on ./run.py
- Overall easy to follow: it is simply the entry-point script. But the seed fields in it still need further understanding.
# coding: UTF-8
import time
import torch
import numpy as np
from train_eval import train, init_network
from importlib import import_module
import argparse
from utils import build_dataset, build_iterator, get_time_dif
parser = argparse.ArgumentParser(description='Chinese Text Classification')
parser.add_argument('--model', type=str, required=True, help='choose a model: Bert, ERNIE')
args = parser.parse_args()
if __name__ == '__main__':
dataset = 'THUCNews'  # dataset
model_name = args.model # bert
x = import_module('models.' + model_name)
config = x.Config(dataset)
np.random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed_all(1)
torch.backends.cudnn.deterministic = True  # ensure identical results on every run
start_time = time.time()
print("Loading data...")
train_data, dev_data, test_data = build_dataset(config)
train_iter = build_iterator(train_data, config)
dev_iter = build_iterator(dev_data, config)
test_iter = build_iterator(test_data, config)
time_dif = get_time_dif(start_time)
print("Time usage:", time_dif)
# train
model = x.Model(config).to(config.device)
train(config, model, train_iter, dev_iter, test_iter)
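The seed discussion from earlier can be verified empirically: re-seeding the generator makes the subsequent random draws identical, which is what makes parameter initialization reproducible on one machine (a minimal sketch, CPU only):

```python
import torch

torch.manual_seed(1)
a = torch.randn(3)        # first draw after seeding

torch.manual_seed(1)      # reset the generator to the same state
b = torch.randn(3)        # identical draw

print(torch.equal(a, b))  # True: same seed, same machine -> same numbers
```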