Using a Hugging Face pretrained model for a classification task

Model: chinese-roberta-wwm-ext (hfl/chinese-roberta-wwm-ext on the Hugging Face Hub)

I used a locally downloaded copy of chinese-roberta-wwm-ext. If you go the TensorFlow route, the pip-installed tensorflow and tensorflow-gpu versions must match, otherwise you get the following error:

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

That is, neither PyTorch nor TensorFlow >= 2.0 was found.
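Before loading anything, you can check which backends transformers actually detects; a quick sketch (is_torch_available and is_tf_available are helpers that recent transformers versions export):

import transformers
from transformers import is_torch_available, is_tf_available

print(transformers.__version__)
print("torch backend:", is_torch_available())  # must be True to load PyTorch models
print("tf backend:", is_tf_available())        # must be True to load TF models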

Calling the model then raised this error:

ImportError:
AutoModelForMaskedLM requires the PyTorch library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.

I couldn't find a fix, so I switched to PyTorch. Final environment:

  • python:3.8
  • torch:1.12.0+cu113
  • torchvision:0.13.0+cu113
  • cuda:11.3

Understanding pytorch_model.bin

pytorch_model.bin holds the trained weight parameters of a pretrained model; you can load it to continue training, or to run inference and evaluation.
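A quick way to see what the file actually contains, as a sketch (assumes the file sits in the local chinese-roberta-wwm-ext directory):

import torch

# load the raw state dict on the CPU and list a few weight tensors
state_dict = torch.load("chinese-roberta-wwm-ext/pytorch_model.bin", map_location="cpu")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))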


Training text classification with a BERT model: one write-up I followed loads the model with AutoModelForMaskedLM; when I ran that code the loss never changed, and I never found the cause.
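A likely factor is that the two auto classes attach different heads to the same encoder; a minimal sketch with the local model directory:

from transformers import AutoModelForMaskedLM, AutoModelForSequenceClassification

# MLM head: one score per vocabulary token at every position (for filling masks)
mlm = AutoModelForMaskedLM.from_pretrained("chinese-roberta-wwm-ext")
# classification head: num_labels logits per sequence (what we actually want here)
cls = AutoModelForSequenceClassification.from_pretrained("chinese-roberta-wwm-ext", num_labels=2)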

Fine-tuning chinese-roberta-wwm-ext

Training + validation

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import logging
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, RandomSampler, TensorDataset
import numpy as np
from tqdm import tqdm
import os
import csv


logging.set_verbosity_error()  # silence transformers warnings
os.environ["TOKENIZERS_PARALLELISM"] = "false"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
csv.field_size_limit(500 * 1024 * 1024)  # allow very long CSV fields


model_dir = "chinese-roberta-wwm-ext"  # local model directory
train_file = "chinese-roberta-wwm-ext/data/train.csv"
val_file = "chinese-roberta-wwm-ext/data/val.csv"
max_length = 128  # max tokens per sample
num_classes = 2
batch_size = 32
epoch = 2  # number of training epochs


# builds train/val DataLoaders from CSV files
class GenDataSet:
    def __init__(self, tokenizer, train_file, val_file, max_length=128, batch_size=32):
        self.train_file = train_file
        self.val_file = val_file
        self.max_length = max_length
        self.batch_size = batch_size
        self.tokenizer = tokenizer

    def gen_data(self, file):
        if not os.path.exists(file):
            raise Exception("Dataset not found")
        input_ids = []
        input_types = []
        input_masks = []
        labels = []
        with open(file, encoding='utf8') as f:
            data = csv.reader(f, delimiter=',')

            for index, item in enumerate(data):
                # each row is a single field whose last character is the label
                text = item[0][:-1]
                label = item[0][-1]
                tokens = self.tokenizer(text, padding="max_length", truncation=True, max_length=self.max_length)
                input_id, types, masks = tokens['input_ids'], tokens['token_type_ids'], tokens['attention_mask']
                input_ids.append(input_id)
                input_types.append(types)
                input_masks.append(masks)
                labels.append([label])

                if index % 1000 == 0:
                    print('Processed', index, 'rows')

        # move everything to the GPU up front (see error 3 below);
        # labels are strings from the CSV, so cast via float first (see error 2 below)
        data_gen = TensorDataset(torch.LongTensor(np.array(input_ids)).to(device),
                                 torch.LongTensor(np.array(input_types)).to(device),
                                 torch.LongTensor(np.array(input_masks)).to(device),
                                 torch.LongTensor(np.array(labels).astype(float)).to(device))

        sampler = RandomSampler(data_gen)

        return DataLoader(data_gen, sampler=sampler, batch_size=self.batch_size)

    def gen_train_data(self):
        return self.gen_data(self.train_file)

    def gen_val_data(self):
        return self.gen_data(self.val_file)


def val(model, data):
    model.eval()
    val_loss = 0.0
    acc = 0
    for (input_id, types, masks, y) in tqdm(data):
        with torch.no_grad():
            y_ = model(input_id, token_type_ids=types, attention_mask=masks)
            logits = y_['logits']
            # .item() so we accumulate a plain float, not a tensor
            val_loss += nn.functional.cross_entropy(logits, y.squeeze(-1)).item()
        pred = logits.max(-1, keepdim=True)[1]  # index of the largest logit
        acc += pred.eq(y.view_as(pred)).sum().item()
    val_loss /= len(data)
    print('val loss =', val_loss)
    return acc / len(data.dataset)


def main():

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir, num_labels=num_classes)

    dataset = GenDataSet(tokenizer, train_file, val_file, max_length, batch_size)

    train_data = dataset.gen_train_data()
    val_data = dataset.gen_val_data()

    model = model.to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    best_acc = 0.0

    for epoch_index in range(epoch):
        model.train()  # val() leaves the model in eval mode, so reset it every epoch
        batch_epoch = 0
        for (input_id, types, masks, y) in tqdm(train_data):

            # passing labels makes the model compute the cross-entropy loss itself
            outputs = model(input_id, token_type_ids=types, attention_mask=masks, labels=y)

            optimizer.zero_grad()

            loss = outputs.loss

            loss.backward()
            optimizer.step()
            batch_epoch += 1
            if batch_epoch % 10 == 0:
                print('Train Epoch:', epoch_index, ' , batch_epoch: ', batch_epoch, ' , loss = ', loss.item())

        acc = val(model, val_data)
        print('Train Epoch:', epoch_index, ' val acc = ', acc)

        if best_acc < acc:
            # keep the best checkpoint; the test script below loads from this directory
            model.save_pretrained("us")
            tokenizer.save_pretrained("us")
            best_acc = acc


if __name__ == '__main__':
    main()

Testing

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import csv
import random


model_dir = "us"
file = "chinese-roberta-wwm-ext/data/test.csv"
num_classes = 2
max_length = 128


label_dict = {
    0: 'H',
    1: 'C'
}


def load_data():

    with open(file, encoding='utf8') as f:
        data = list(csv.reader(f, delimiter=','))

    random.shuffle(data)

    sentences = []
    targets = []
    for item in data:
        # same row format as training: the last character of the field is the label
        sentences.append(item[0][:-1])
        targets.append(item[0][-1])
    return sentences, targets


def main():

    sentences, targets = load_data()
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir, num_labels=num_classes)
    model.eval()  # disable dropout for inference

    total = len(sentences)
    correct = 0

    for index, text in enumerate(sentences):
        encoded_input = tokenizer(text, padding="max_length", truncation=True, max_length=max_length)

        input_ids = torch.tensor([encoded_input['input_ids']])
        token_type_ids = torch.tensor([encoded_input['token_type_ids']])
        attention_mask = torch.tensor([encoded_input['attention_mask']])

        with torch.no_grad():  # no gradients needed at test time
            y_ = model(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)
        output = y_['logits'][0]
        pred = output.max(-1, keepdim=True)[1][0].item()
        label = int(targets[index])
        if pred == label:
            correct += 1
        print('Predicted: {}  Actual: {}'.format(label_dict[pred], label_dict[label]))

    print(correct / total)  # overall accuracy


if __name__ == '__main__':
    main()


Errors encountered:


1. Tokenizer error

TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]

Some posts suggest replacing AutoTokenizer.from_pretrained(MODEL_PATH) with BertTokenizer.from_pretrained(MODEL_PATH); after the change the error persisted.

The real problem was the data: some rows contained only a label and no text. Fixing those rows resolved the error.

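A defensive check, as a sketch, that skips such rows before tokenizing (same row format as the training script):

for index, item in enumerate(data):
    # need at least one text character plus the trailing label character
    if not item or len(item[0]) < 2:
        continue
    text = item[0][:-1]
    label = item[0][-1]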

2. numpy-to-tensor conversion error

TypeError: can't convert np.ndarray of type numpy.str_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

This appears when converting the numpy array to a tensor: the labels read from the CSV are strings, so the array's elements are of object/str dtype, which torch cannot convert. Cast the array to float (or any other tensor-supported dtype) first:

torch.LongTensor(np.array(labels))

becomes

torch.LongTensor(np.array(labels).astype(float))

Related post: Pytorch pitfall: TypeError: can't convert np.ndarray of type numpy.object_.
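A minimal repro of the error and the fix:

import numpy as np
import torch

labels = [["1"], ["0"]]  # labels read from a CSV are strings
# torch.LongTensor(np.array(labels))  # raises: can't convert np.ndarray of type numpy.str_
t = torch.LongTensor(np.array(labels).astype(float))  # cast first, then convert
print(t)  # tensor([[1], [0]])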


3. Model and data on different devices

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking arugment for argument index in method wrapper_index_select)

In my case the model was on the GPU while the data stayed on the CPU. Adding .to(device) when the data is converted to tensors fixes it:

        data_gen = TensorDataset(torch.LongTensor(np.array(input_ids)).to(device),
                                 torch.LongTensor(np.array(input_types)).to(device),
                                 torch.LongTensor(np.array(input_masks)).to(device),
                                 torch.LongTensor(np.array(labels).astype(float)).to(device))
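An alternative, as a sketch, is to keep the dataset on the CPU and move only the current batch inside the training loop, which uses far less GPU memory on large datasets:

for (input_id, types, masks, y) in tqdm(train_data):
    # move just this batch to the GPU
    input_id, types, masks, y = (t.to(device) for t in (input_id, types, masks, y))
    outputs = model(input_id, token_type_ids=types, attention_mask=masks, labels=y)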

4. CUDA and PyTorch version mismatch

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

The installed CUDA and PyTorch versions did not match: print(torch.version.cuda) reported 10.2, while nvcc -V reported 11.2. There is no PyTorch build for CUDA 11.2, and I had already installed CUDA 11.3, so I reinstalled torch and torchvision built for CUDA 11.3.
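A quick check of what your PyTorch build expects:

import torch

print(torch.__version__)         # e.g. 1.12.0+cu113
print(torch.version.cuda)        # CUDA version this PyTorch was built against
print(torch.cuda.is_available())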

Look up the matching torch and torchvision versions here
Downloading the whl files and installing from them is faster

Then open ~/.bashrc with vi and replace the 11.2 entries with 11.3.

