KBQA Study Notes: NER Training and Evaluation

Contents

1. Prerequisites

2. Overall Model Training Flow

3. Model Evaluation Function

4. Printing Results and Saving the Model


1. Prerequisites

We have already prepared the training and validation data. The raw text was converted to ids and padded, then built into features that are stored in a class; after instantiation the fields are accessed as instance.input_ids, instance.token_type_ids, and so on, collected into lists, and converted to tensors. Finally, the four feature tensors were bundled with a TensorDataset, and this dataset is what will be used to train the model.
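
The packing step looks roughly like the sketch below (input_ids and token_type_ids are the fields named above; the attention_mask and label_ids field names, and the features list, are assumptions made for illustration):

import torch
from torch.utils.data import TensorDataset

# features is the list of feature objects produced by the preprocessing described above
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_ids for f in features], dtype=torch.long)

# bundle the four tensors so a DataLoader yields them together as one batch tuple
train_dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_label_ids)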

2. Overall Model Training Flow

Fetch batches through a DataLoader, choose an optimizer, and set up the learning-rate warmup as well as the number of gradient-accumulation steps.

Training: pack each batch into a dict of inputs and pass it to the model; when the number of accumulated steps reaches the configured value, step both the optimizer and the scheduler once and log the loss.

Evaluation is run at the specified steps. The code is as follows:

import logging
import torch
from torch.utils.data import DataLoader, RandomSampler
from tqdm import tqdm, trange
# AdamW and WarmupLinearSchedule are assumed to come from pytorch_transformers;
# set_seed and evaluate_and_save_model are helpers defined elsewhere in the project
from pytorch_transformers import AdamW, WarmupLinearSchedule

logger = logging.getLogger(__name__)


def trains(args,train_dataset,eval_dataset,model):

    train_sampler = RandomSampler(train_dataset)
    train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)

    t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs

    no_decay = ['bias', 'LayerNorm.weight','transitions']
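    # the parameter groups below exclude bias, LayerNorm weights and the CRF transition matrix from weight decay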
    optimizer_grouped_parameters = [
        {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
         'weight_decay': args.weight_decay},
        {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
    ]
    optimizer = AdamW(optimizer_grouped_parameters,lr=args.learning_rate,eps=args.adam_epsilon)

    scheduler = WarmupLinearSchedule(optimizer, warmup_steps=args.warmup_steps, t_total=t_total)
    logger.info("***** Running training *****")
    logger.info("  Num examples = %d", len(train_dataset))
    logger.info("  Num Epochs = %d", args.num_train_epochs)
    logger.info("  Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
    logger.info("  Total optimization steps = %d", t_total)

    global_step = 0
    tr_loss, logging_loss = 0.0, 0.0
    model.zero_grad()
    train_iterator = trange(int(args.num_train_epochs), desc="Epoch")
    set_seed(args)
    best_f1 = 0.
    for _ in train_iterator:
        epoch_iterator = tqdm(train_dataloader, desc="Iteration")
        for step,batch in enumerate(epoch_iterator):
            batch = tuple(t.to(args.device) for t in batch)
            inputs = {'input_ids':batch[0],
                      'attention_mask':batch[1],
                      'token_type_ids':batch[2],
                      'tags':batch[3],
                      'decode':True
            }
            outputs = model(**inputs)
            loss,pre_tag = outputs[0], outputs[1]
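            # pre_tag holds the decoded tag sequences; only the loss is used for the backward pass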

            if args.gradient_accumulation_steps > 1:
                loss = loss / args.gradient_accumulation_steps
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(),args.max_grad_norm)
            logging_loss += loss.item()
            tr_loss += loss.item()
            # update the weights only once every gradient_accumulation_steps mini-batches
            if (step + 1) % args.gradient_accumulation_steps == 0:
                optimizer.step()
                scheduler.step()
                model.zero_grad()
                global_step += 1
                logger.info("EPOCH = [%d/%d] global_step = %d   loss = %f",_+1,args.num_train_epochs,global_step,
                            logging_loss)
                logging_loss = 0.0

                # if (global_step < 100 and global_step % 10 == 0) or (global_step % 50 == 0):
                # evaluate every 100 global steps
                if global_step % 100 == 0:
                    best_f1 = evaluate_and_save_model(args,model,eval_dataset,_,global_step,best_f1)

    # evaluate once more after the last epoch finishes
    best_f1 = evaluate_and_save_model(args, model, eval_dataset,_,global_step, best_f1)
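
For reference, the hyperparameters that trains() reads from args can be collected with argparse roughly as below (the default values are illustrative assumptions, not the ones used in the original project):

import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--output_dir", type=str, default="./output")
parser.add_argument("--train_batch_size", type=int, default=32)
parser.add_argument("--eval_batch_size", type=int, default=64)
parser.add_argument("--num_train_epochs", type=int, default=3)
parser.add_argument("--gradient_accumulation_steps", type=int, default=1)
parser.add_argument("--learning_rate", type=float, default=5e-5)
parser.add_argument("--weight_decay", type=float, default=0.01)
parser.add_argument("--adam_epsilon", type=float, default=1e-8)
parser.add_argument("--warmup_steps", type=int, default=0)
parser.add_argument("--max_grad_norm", type=float, default=1.0)
parser.add_argument("--seed", type=int, default=42)
args = parser.parse_args()
args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# training is then started with:
# trains(args, train_dataset, eval_dataset, model)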

3. Model Evaluation Function

While evaluating, we also save the best model and its parameters.

Take batches from the validation set, switch the model to eval mode, and feed the data in to get the predictions.

Flatten and concatenate all the predictions, then pass them together with the gold labels to classification_report to obtain precision, recall, and f1.

Finally, switch the model back to training mode.

import os
import numpy as np
from sklearn.metrics import classification_report
from torch.utils.data import SequentialSampler


def evaluate(args, model, eval_dataset):

    eval_output_dirs = args.output_dir
    if not os.path.exists(eval_output_dirs):
        os.makedirs(eval_output_dirs)
    eval_sampler = SequentialSampler(eval_dataset)
    eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler,
                                 batch_size=args.eval_batch_size)

    logger.info("***** Running evaluation *****")
    logger.info("  Num examples = %d", len(eval_dataset))
    logger.info("  Batch size = %d", args.eval_batch_size)


    loss = []
    real_token_label = []
    pred_token_label = []
    for batch in tqdm(eval_dataloader, desc="Evaluating"):
        model.eval()
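        # eval mode disables dropout during prediction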
        batch = tuple(t.to(args.device) for t in batch)
        with torch.no_grad():
            inputs = {'input_ids':batch[0],
                      'attention_mask':batch[1],
                      'token_type_ids':batch[2],
                      'tags':batch[3],
                      'decode':True,
                      'reduction':'none'
            }
            outputs = model(**inputs)
            # temp_eval_loss shape: (batch_size)
            # temp_pred: list[list[int]], sequences of uneven length
            temp_eval_loss, temp_pred = outputs[0], outputs[1]

            loss.extend(temp_eval_loss.tolist())
            pred_token_label.extend(temp_pred)
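            # statistical_real_sentences presumably trims padding so the gold tags line up with the decoded predictions (see the sketch after this function)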
            real_token_label.extend(statistical_real_sentences(batch[3],batch[1],temp_pred))


    loss = np.array(loss).mean()
    real_token_label = np.array(flatten(real_token_label))
    pred_token_label = np.array(flatten(pred_token_label))
    assert real_token_label.shape == pred_token_label.shape
    ret = classification_report(y_true = real_token_label,y_pred = pred_token_label,output_dict = True)
    model.train()
    return ret
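
The helpers flatten and statistical_real_sentences are defined elsewhere in the project. Judging from how they are called above, they could look roughly like this sketch (an assumption, not the original implementation):

def flatten(nested):
    # collapse a list of tag sequences into a single flat list of tag ids
    return [tag for sentence in nested for tag in sentence]


def statistical_real_sentences(batch_tags, batch_mask, batch_pred):
    # for each sentence keep only the gold tags at non-padding positions,
    # truncated to the length of the decoded prediction
    real = []
    for tags, mask, pred in zip(batch_tags, batch_mask, batch_pred):
        real.append(tags[mask == 1].tolist()[:len(pred)])
    return real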

4. Printing Results and Saving the Model

The report contains three per-label entries because we defined three labels at the start, ["O", "B-LOC", "I-LOC"]; we only need the last two (the entity labels).
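
Because the tags are fed to classification_report as integer ids, the per-label keys in ret are the stringified ids rather than the label names; the structure is roughly:

# ret = {
#     '0': {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...},   # "O"
#     '1': {...},                                                                # "B-LOC"
#     '2': {...},                                                                # "I-LOC"
#     'macro avg': {...},
#     'weighted avg': {...},
#     ...
# }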

def evaluate_and_save_model(args,model,eval_dataset,epoch,global_step,best_f1):
    ret = evaluate(args, model, eval_dataset)

    precision_b = ret['1']['precision']
    recall_b = ret['1']['recall']
    f1_b = ret['1']['f1-score']
    support_b = ret['1']['support']

    precision_i = ret['2']['precision']
    recall_i = ret['2']['recall']
    f1_i = ret['2']['f1-score']
    support_i = ret['2']['support']

    # support-weighted average over B-LOC and I-LOC only; the "O" label is ignored
    weight_b = support_b / (support_b + support_i)
    weight_i = 1 - weight_b

    avg_precision = precision_b * weight_b + precision_i * weight_i
    avg_recall = recall_b * weight_b + recall_i * weight_i
    avg_f1 = f1_b * weight_b + f1_i * weight_i

    all_avg_precision = ret['macro avg']['precision']
    all_avg_recall = ret['macro avg']['recall']
    all_avg_f1 = ret['macro avg']['f1-score']

    logger.info("Evaluating EPOCH = [%d/%d] global_step = %d", epoch+1,args.num_train_epochs,global_step)
    logger.info("B-LOC precision = %f recall = %f  f1 = %f support = %d", precision_b, recall_b, f1_b,
                support_b)
    logger.info("I-LOC precision = %f recall = %f  f1 = %f support = %d", precision_i, recall_i, f1_i,
                support_i)

    logger.info("attention AVG:precision = %f recall = %f  f1 = %f ", avg_precision, avg_recall,
                avg_f1)
    logger.info("all AVG:precision = %f recall = %f  f1 = %f ", all_avg_precision, all_avg_recall,
                all_avg_f1)

    if avg_f1 > best_f1:
        best_f1 = avg_f1
        torch.save(model.state_dict(), os.path.join(args.output_dir, "best_ner.bin"))
        logger.info("save the best model %s, avg_f1 = %f", os.path.join(args.output_dir, "best_ner.bin"),
                    best_f1)
    # return the value so the caller can update its running best f1
    return best_f1