Sentiment Analysis Competition on the Qianyan (千言) Datasets with the SKEP Pretrained Model in PaddleNLP

Sentiment analysis competition score: 0.7943


In recent years, a large body of research has shown that pretrained models (PTMs) trained on large corpora learn general-purpose language representations, which benefits downstream NLP tasks and avoids training models from scratch. With growing compute, deeper architectures (notably the Transformer) and better training techniques have kept pushing PTMs from shallow to deep.

SKEP (Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis) is a sentiment-oriented pretraining algorithm proposed by Baidu Research and accepted at ACL 2020. SKEP enhances pretraining with sentiment knowledge: it mines sentiment knowledge automatically with unsupervised methods and then uses that knowledge to build pretraining objectives, so the model learns to understand sentiment semantics. SKEP surpassed the previous SOTA on 14 typical Chinese and English sentiment analysis tasks, and it provides a unified, strong sentiment representation for a wide range of sentiment analysis tasks.

Paper: https://arxiv.org/abs/2005.05635
The Baidu Research team further validated SKEP on three typical sentiment analysis tasks, sentence-level sentiment classification, aspect-level sentiment classification, and opinion role labeling, covering 14 Chinese and English datasets in total.

For detailed experimental results, see: https://github.com/baidu/Senta#skep

This project uses PaddlePaddle's high-level API to quickly build models and produce a submission for the sentiment analysis competition. For the underlying principles and analysis, see 『NLP打卡营』 Practical Lesson 5: Text Sentiment Analysis. The work is split into three parts: sentence-level sentiment analysis (NLPCC14-SC, ChnSentiCorp), target-level sentiment analysis (SE-ABSA16_PHNS, SE-ABSA16_CAME), and opinion extraction (COTE-BD, COTE-DP, COTE-MFW). For details of the datasets, see the competition link.

Using the project is straightforward: set data_name in the relevant section, tune batch_size, epochs, etc. for the best training result, and run all cells of that section to obtain predictions for the corresponding dataset. Once all datasets have been predicted, download the submission folder and submit it.

!pip install --upgrade paddlenlp -i https://pypi.org/simple 

1. Sentence-Level Sentiment Analysis

Sentence-level sentiment analysis takes a piece of text as input and predicts its sentiment polarity, usually positive (1) or negative (0).

Human language is rich in sentiment: it expresses emotions (such as sadness and joy), moods (such as fatigue and melancholy), preferences (such as like and dislike), personality traits, stances, and so on. Sentiment analysis is applied in scenarios such as product preference mining, purchase decision support, and public opinion analysis. Automatically analyzing these sentiment tendencies helps companies understand how consumers feel about their products and informs product improvements; it also helps companies gauge the attitudes of business partners to support better business decisions.

The sentiment analysis task most people are familiar with is classifying a piece of text, for example a three-way classification of sentiment polarity into positive, negative, and other:


Sentiment analysis task
  • Positive: positive emotions such as happiness, joy, surprise, and anticipation.
  • Negative: negative emotions such as sadness, grief, anger, and fear.
  • Other: any other type of sentiment.

In fact, this familiar task is sentence-level sentiment analysis.

Sentiment analysis can be further divided into sentence-level sentiment analysis, target-level sentiment analysis, and other subtasks.

1.0 Load the Model and Tokenizer

Calling paddlenlp.transformers.SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch') defines the model network; you only need to specify the model name and the number of text classification classes.
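As a minimal sketch (the same calls are made again in Section 1.3), loading the model and its tokenizer looks like this:

```python
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer

# Two-class (positive/negative) sentence-level classifier on top of SKEP
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')
```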

PaddleNLP supports not only the SKEP pretrained model but also BERT, RoBERTa, ELECTRA, and other pretrained models.
The table below summarizes the pretrained models currently supported by PaddleNLP. These models can be used for text classification, sequence labeling, question answering, and other tasks. In addition, 22 sets of pretrained parameter weights are provided, including weights for 11 Chinese language models.

| Model | Tokenizer | Supported Task | Model Name |
|---|---|---|---|
| BERT | BertTokenizer | BertModel, BertForQuestionAnswering, BertForSequenceClassification, BertForTokenClassification | bert-base-uncased, bert-large-uncased, bert-base-multilingual-uncased, bert-base-cased, bert-base-chinese, bert-base-multilingual-cased, bert-large-cased, bert-wwm-chinese, bert-wwm-ext-chinese |
| ERNIE | ErnieTokenizer, ErnieTinyTokenizer | ErnieModel, ErnieForQuestionAnswering, ErnieForSequenceClassification, ErnieForTokenClassification | ernie-1.0, ernie-tiny, ernie-2.0-en, ernie-2.0-large-en |
| RoBERTa | RobertaTokenizer | RobertaModel, RobertaForQuestionAnswering, RobertaForSequenceClassification, RobertaForTokenClassification | roberta-wwm-ext, roberta-wwm-ext-large, rbt3, rbtl3 |
| ELECTRA | ElectraTokenizer | ElectraModel, ElectraForSequenceClassification, ElectraForTokenClassification | electra-small, electra-base, electra-large, chinese-electra-small, chinese-electra-base |

Note: the Chinese pretrained models include bert-base-chinese, bert-wwm-chinese, bert-wwm-ext-chinese, ernie-1.0, ernie-tiny, roberta-wwm-ext, roberta-wwm-ext-large, rbt3, rbtl3, chinese-electra-base, chinese-electra-small, and others.

For more pretrained models, see: https://github.com/PaddlePaddle/PaddleNLP/blob/develop/docs/model_zoo/transformers.rst
For more examples of fine-tuning pretrained models on downstream tasks, see: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples

import paddlenlp
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer
# from paddlenlp.transformers import ErnieForSequenceClassification, ErnieTokenizer
# from paddlenlp.transformers import BertForSequenceClassification, BertTokenizer
print(paddlenlp.__version__)

1.1 Data Processing

Although some of these datasets are already built into PaddleNLP, for consistency all data here is processed from the uploaded datasets folder. For datasets that PaddleNLP already provides, calling the built-in API directly is strongly recommended, since it is very convenient.
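For reference, a minimal sketch of the built-in dataset API (assuming a PaddleNLP version that ships ChnSentiCorp; this is not used elsewhere in the project):

```python
from paddlenlp.datasets import load_dataset

# Load the built-in ChnSentiCorp splits directly from PaddleNLP
train_ds, dev_ds, test_ds = load_dataset('chnsenticorp', splits=['train', 'dev', 'test'])
print(train_ds[0])  # typically a dict with 'text' and 'label' fields
```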

# Unzip the data
!unzip -o datasets/ChnSentiCorp
!unzip -o datasets/NLPCC14-SC

Data format:

ChnSentiCorp:

train: 
label		text_a
0		房间太小。其他的都一般。。。。。。。。。
1		轻便,方便携带,性能也不错,能满足平时的工作需要,对出差人员来说非常不错

dev:
qid		label		text_a
0		1		這間酒店環境和服務態度亦算不錯,但房間空間太小~...

test:
qid		text_a
0		这个宾馆比较陈旧了,特价的房间也很一般。总体来说一般
...		...

NLPCC14-SC:

train:
label		text_a
1		请问这机不是有个遥控器的吗?
0		全是大道理啊

test:
qid		text_a
0		我终于找到同道中人啦~~~~从初中开始,我就...
...		...

As shown above, the two datasets can share the same reading logic; however, NLPCC14-SC has no dev split, so no dev set is defined for it.

# Build a dictionary of datasets
def open_func(file_path):
    # Read a TSV file, skip the header line, and keep only rows with at least two tab-separated fields
    with open(file_path, 'r', encoding='utf8') as f:
        return [line.strip() for line in f.readlines()[1:] if len(line.strip().split('\t')) >= 2]

data_dict = {'chnsenticorp': {'test': open_func('ChnSentiCorp/test.tsv'),
                              'dev': open_func('ChnSentiCorp/dev.tsv'),
                              'train': open_func('ChnSentiCorp/train.tsv')},
             'nlpcc14sc': {'test': open_func('NLPCC14-SC/test.tsv'),
                           'train': open_func('NLPCC14-SC/train.tsv')}}

1.2 Define the Data Loader

# Define the dataset
from paddle.io import Dataset, DataLoader
from paddlenlp.data import Pad, Stack, Tuple
import numpy as np
label_list = [0, 1]

# Note: token_type_ids play no role in this task, so they are not used here; the model fills them in itself.
class MyDataset(Dataset):
    def __init__(self, data, tokenizer, max_len=512, for_test=False):
        super().__init__()
        self._data = data
        self._tokenizer = tokenizer
        self._max_len = max_len
        self._for_test = for_test
    
    def __len__(self):
        return len(self._data)
    
    def __getitem__(self, idx):
        samples = self._data[idx].split('\t')
        label = samples[-2]
        text = samples[-1]
        label = int(label)
        text = self._tokenizer.encode(text, max_seq_len=self._max_len)['input_ids']
        if self._for_test:
            return np.array(text, dtype='int64')
        else:
            return np.array(text, dtype='int64'), np.array(label, dtype='int64')

def batchify_fn(for_test=False):
    if for_test:
        return lambda samples, fn=Pad(axis=0, pad_val=tokenizer.pad_token_id): np.row_stack([data for data in fn(samples)])
    else:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Stack()): [data for data in fn(samples)]

def get_data_loader(data, tokenizer, batch_size=32, max_len=512, for_test=False):
    dataset = MyDataset(data, tokenizer, max_len, for_test)
    shuffle = True if not for_test else False
    data_loader = DataLoader(dataset=dataset, batch_size=batch_size, collate_fn=batchify_fn(for_test), shuffle=shuffle)
    return data_loader

1.3 Build and Train the Model

The model itself is very simple: just call the corresponding sequence classification class. For convenience, the high-level paddle.Model API is used for training.

import paddle
from paddle.static import InputSpec

# Model and tokenizer
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')

# model = ErnieForSequenceClassification.from_pretrained('ernie-1.0', num_classes=2)
# tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')   

# model = BertForSequenceClassification.from_pretrained('bert-wwm-ext-chinese', num_classes=2)
# tokenizer = BertTokenizer.from_pretrained('bert-wwm-ext-chinese')
# Dataset selection: change this to switch datasets (chnsenticorp, nlpcc14sc)
# data_name = 'chnsenticorp' 
data_name = 'nlpcc14sc'

# Training hyperparameters
epochs = 5
learning_rate = 2e-5        # chnsenticorp 2e-5  /nlpcc14sc 2e-5
batch_size = 48    #chnsenticorp 64 / nlpcc14sc 128
max_len = 224     # 92 / 96

## Data loaders
train_dataloader = get_data_loader(data_dict[data_name]['train'], tokenizer, batch_size, max_len, for_test=False)
if data_name == 'chnsenticorp':
    dev_dataloader = get_data_loader(data_dict[data_name]['dev'], tokenizer, batch_size, max_len, for_test=False)
else:
    dev_dataloader = None

input = InputSpec((-1, -1), dtype='int64', name='input')
label = InputSpec((-1, 2), dtype='int64', name='label')
model = paddle.Model(model, [input], [label])

# Prepare the model
# chnsenticorp: L2 regularization 5e-4
# nlpcc14sc: L2 regularization 6e-4
optimizer = paddle.optimizer.Adam(learning_rate=learning_rate, parameters=model.parameters(),
            weight_decay=paddle.regularizer.L2Decay(5e-4))
model.prepare(optimizer, loss=paddle.nn.CrossEntropyLoss(), metrics=[paddle.metric.Accuracy()])
print(len(train_dataloader))

Train on chnsenticorp

# Start training on chnsenticorp
model.fit(train_dataloader, dev_dataloader, batch_size, epochs=12, save_freq=20,verbose=2, save_dir='./ckpt/chnsenticorp')

Train on nlpcc14sc

# Start training on nlpcc14sc
model.fit(train_dataloader, dev_dataloader, batch_size, epochs=8, save_freq=20,verbose=2, save_dir='./ckpt/nlpcc14sc')

1.4 Predict and Save

import os
# Change this to predict on the corresponding dataset (chnsenticorp, nlpcc14sc)
# data_name = 'chnsenticorp'
data_name = 'nlpcc14sc'

# Load the trained checkpoint
checkpoint_path = "./ckpt/" + data_name +  "/final"
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
# model = ErnieForSequenceClassification.from_pretrained('ernie-1.0', num_classes=2)
# model = BertForSequenceClassification.from_pretrained('bert-wwm-ext-chinese', num_classes=2)
input = InputSpec((-1, -1), dtype='int64', name='input')
model = paddle.Model(model, input)
model.load(checkpoint_path)

# Load the test set
test_dataloader = get_data_loader(data_dict[data_name]['test'], tokenizer, batch_size, max_len, for_test=True)
# Save the prediction results
save_dir = './submission'
save_file = {'chnsenticorp': 'ChnSentiCorp.tsv', 'nlpcc14sc': 'NLPCC14-SC.tsv'}
if not os.path.exists(save_dir):
    os.makedirs(save_dir)
predicts = []
for batch in test_dataloader:
    predict = model.predict_batch(batch)
    predicts += predict[0].argmax(axis=-1).tolist()

with open(os.path.join(save_dir,save_file[data_name]), 'w', encoding='utf8') as f:
    f.write("index\tprediction\n")
    for idx, sample in enumerate(data_dict[data_name]['test']):
        qid = sample.split('\t')[0]
        f.write(qid + '\t' + str(predicts[idx]) + '\n')
    f.close()

2. Target-Level Sentiment Analysis

Target-level sentiment analysis extends sentence-level polarity to polarity toward several specific aspects. It is still sequence classification at heart, but the same sequence must be classified multiple times, once per aspect. The approach here is to feed the aspect into the model as part of the input and predict the sentiment toward it.
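As a minimal sketch of this sentence-pair idea, the aspect (text_a) and the review (text_b) are encoded together, and the resulting token_type_ids mark which tokens belong to which segment; this is exactly what the data reader in Section 2.2 does internally. The example strings are taken from the SE-ABSA16 training sample shown in Section 2.1 (the review is truncated):

```python
from paddlenlp.transformers import SkepTokenizer

tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')

aspect = 'phone#design_features'                        # text_a: the aspect being judged
review = '今天有幸拿到了港版白色iPhone 5真机,试玩了一下'     # text_b: the review text (truncated)

# Sentence-pair encoding: aspect and review concatenated with special tokens
encoded = tokenizer.encode(aspect, review, max_seq_len=128)
print(encoded['input_ids'])
print(encoded['token_type_ids'])  # 0 for aspect tokens, 1 for review tokens
```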

2.0 Load the Model and Tokenizer

import paddlenlp
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer
# from paddlenlp.transformers import ErnieForSequenceClassification, ErnieTokenizer
# from paddlenlp.transformers import BertForSequenceClassification,BertTokenizer

2.1 Data Processing

# Unzip the data
!unzip -o datasets/SE-ABSA16_CAME
!unzip -o datasets/SE-ABSA16_PHNS
with open("SE-ABSA16_CAME/train.tsv", 'r',encoding="UTF-8") as f:
    lines = f.readlines()
    for line in lines[:5]:
        print(line)

Data format (both datasets share the same structure):

train:
label		text_a		text_b
1		phone#design_features		今天有幸拿到了港版白色iPhone 5真机,试玩了一下,说说感受吧:1. 真机尺寸宽度与4/4s保持一致没有变化...
0		software#operation_performance		苹果iPhone5新机到手 对比4S使用感受1,外观。一开始看发布会和网上照片,我和大多数人观点一样:变化不大,有点小失望。...

test:
qid		text_a		text_b
0		software#usability		刚刚入手8600,体会。刚刚从淘宝购买,1635元(包邮)。1、全新,...
...		...		...


# Build a dictionary of datasets
def open_func(file_path):
    # Read a TSV file, skip the header line, and keep only rows with at least two tab-separated fields
    with open(file_path, 'r', encoding='utf8') as f:
        return [line.strip() for line in f.readlines()[1:] if len(line.strip().split('\t')) >= 2]

data_dict = {'seabsa16phns': {'test': open_func('SE-ABSA16_PHNS/test.tsv'),
                              'train': open_func('SE-ABSA16_PHNS/train.tsv')},
             'seabsa16came': {'test': open_func('SE-ABSA16_CAME/test.tsv'),
                              'train': open_func('SE-ABSA16_CAME/train.tsv')}}

2.2 Define the Data Loader

The approach mirrors Section 1.2 and is mostly copied over. The difference is that there are now two text fields, and token_type_ids must be taken into account.

# Define the dataset
from paddle.io import Dataset, DataLoader
from paddlenlp.data import Pad, Stack, Tuple
import numpy as np
label_list = [0, 1]

# This time token_type_ids are taken into account
class MyDataset(Dataset):
    def __init__(self, data, tokenizer, max_len=512, for_test=False):
        super().__init__()
        self._data = data
        self._tokenizer = tokenizer
        self._max_len = max_len
        self._for_test = for_test
    
    def __len__(self):
        return len(self._data)
    
    def __getitem__(self, idx):
        samples = self._data[idx].split('\t')
        label = samples[-3]
        text_b = samples[-1]
        text_a = samples[-2]
        label = int(label)
        encoder_out = self._tokenizer.encode(text_a, text_b, max_seq_len=self._max_len)
        text = encoder_out['input_ids']
        token_type = encoder_out['token_type_ids']
        if self._for_test:
            return np.array(text, dtype='int64'), np.array(token_type, dtype='int64')
        else:
            return np.array(text, dtype='int64'), np.array(token_type, dtype='int64'), np.array(label, dtype='int64')

def batchify_fn(for_test=False):
    if for_test:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Pad(axis=0, pad_val=tokenizer.pad_token_type_id)): [data for data in fn(samples)]
    else:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),
                                        Stack()): [data for data in fn(samples)]


def get_data_loader(data, tokenizer, batch_size=32, max_len=512, for_test=False):
    dataset = MyDataset(data, tokenizer, max_len, for_test)
    shuffle = True if not for_test else False
    data_loader = DataLoader(dataset=dataset, batch_size=batch_size, collate_fn=batchify_fn(for_test), shuffle=shuffle)
    return data_loader

2.3 Build and Train the Model

Copy Section 1.3 over, update the dataset name, and add token_type_ids as an extra input.

import paddle
from paddle.static import InputSpec

# Model and tokenizer
# model = ErnieForSequenceClassification.from_pretrained('ernie-1.0', num_classes=2)
# tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')    # 0.5652

model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')

# model = BertForSequenceClassification.from_pretrained('bert-wwm-ext-chinese', num_classes=2)
# tokenizer = BertTokenizer.from_pretrained('bert-wwm-ext-chinese')

# Dataset selection: change this to switch datasets
# data_name = 'seabsa16phns'
data_name = 'seabsa16came'
## Training hyperparameters
epochs = 20
learning_rate = 1e-5   # seabsa16phns 2e-5 /seabsa16came  2e-5
batch_size = 48  #  seabsa16phns 36 36/seabsa16came 32 36
max_len = 204   # seabsa16phns 114  124/  seabsa16came 128 96

# Data loaders
train_dataloader = get_data_loader(data_dict[data_name]['train'], tokenizer, batch_size, max_len, for_test=False)

input = InputSpec((-1, -1), dtype='int64', name='input')
token_type = InputSpec((-1, -1), dtype='int64', name='token_type')
label = InputSpec((-1, 2), dtype='int64', name='label')
model = paddle.Model(model, [input, token_type], [label])

# Prepare the model
# Datasets: seabsa16phns, seabsa16came
# step_each_epoch = len(train_dataloader)
# lr = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=learning_rate,
#                                                   T_max=step_each_epoch * epochs)
# optimizer = paddle.optimizer.Adam(learning_rate=lr, parameters=model.parameters())

optimizer = paddle.optimizer.AdamW(weight_decay=0.01, learning_rate=learning_rate,parameters=model.parameters())
model.prepare(optimizer, loss=paddle.nn.CrossEntropyLoss(), metrics=[paddle.metric.Accuracy()])

Train on seabsa16phns

# Start training on seabsa16phns
model.fit(train_dataloader, batch_size=batch_size, epochs=epochs, save_freq=epochs, verbose=2,save_dir='./ckpt/seabsa16phns')

Train on seabsa16came

# Start training on seabsa16came
model.fit(train_dataloader, batch_size=batch_size, epochs=20, save_freq=20, verbose=2,save_dir='./ckpt/seabsa16came')

2.4 Predict and Save

# Load the trained checkpoint
checkpoint_path = "./ckpt/" + data_name +  "/final"
model = SkepForSequenceClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=2)
# model = ErnieForSequenceClassification.from_pretrained('ernie-1.0', num_classes=2)

input = InputSpec((-1, -1), dtype='int64', name='input')
token_type = InputSpec((-1, -1), dtype='int64', name='token_type')
model = paddle.Model(model, [input, token_type])
model.load(checkpoint_path)

# Load the test set
test_dataloader = get_data_loader(data_dict[data_name]['test'], tokenizer, batch_size, max_len, for_test=True)
# Predict and save
save_file = {'seabsa16phns': './submission/SE-ABSA16_PHNS.tsv', 'seabsa16came': './submission/SE-ABSA16_CAME.tsv'}
predicts = []
for batch in test_dataloader:
    predict = model.predict_batch(batch)
    predicts += predict[0].argmax(axis=-1).tolist()

with open(save_file[data_name], 'w', encoding='utf8') as f:
    f.write("index\tprediction\n")
    for idx, sample in enumerate(data_dict[data_name]['test']):
        qid = sample.split('\t')[0]
        f.write(qid + '\t' + str(predicts[idx]) + '\n')
    f.close()

3. Opinion Extraction

Information extraction aims to extract structured knowledge, such as entities, relations, and events, from unstructured natural language text.

3.0 Load the Model and Tokenizer

import paddlenlp
from paddlenlp.transformers import SkepForTokenClassification, SkepTokenizer
# from paddlenlp.transformers import ErnieForTokenClassification, ErnieTokenizer
# from paddlenlp.transformers import BertForTokenClassification, BertTokenizer

3.1 Data Processing

# Unzip the data
!unzip -o datasets/COTE-BD
!unzip -o datasets/COTE-DP
!unzip -o datasets/COTE-MFW
with open("COTE-DP/train.tsv", 'r',encoding="UTF-8") as f:
    lines = f.readlines()
    print('Number of lines:', len(lines))
    for line in lines[:3]:
        print(line)
with open("COTE-BD/train.tsv", 'r',encoding="UTF-8") as f:
    lines = f.readlines()
    print('Number of lines:', len(lines))
    for line in lines[:3]:
        print(line)
with open("COTE-MFW/train.tsv", 'r',encoding="UTF-8") as f:
    lines = f.readlines()
    print('Number of lines:', len(lines))
    for line in lines[:3]:
        print(line)

Data format (all three datasets share the same structure):

train:
label		text_a
鸟人		《鸟人》一书以鸟博士的遭遇作为主线,主要写了鸟博士从校园出来后的种种荒诞经历。
...		...
test:
qid		text_a
0		毕棚沟的风景早有所闻,尤其以秋季的风景最美,但是这次去晚了,红叶全掉完了,黄叶也看不到了,下了雪只...
...		...
# Build a dictionary of datasets
def open_func(file_path):
    # Read a TSV file, skip the header line, and keep only rows with at least two tab-separated fields
    with open(file_path, 'r', encoding='utf8') as f:
        return [line.strip() for line in f.readlines()[1:] if len(line.strip().split('\t')) >= 2]

data_dict = {'cotebd': {'test': open_func('COTE-BD/test.tsv'),
                        'train': open_func('COTE-BD/train.tsv')},
             'cotedp': {'test': open_func('COTE-DP/test.tsv'),
                        'train': open_func('COTE-DP/train.tsv')},
             'cotemfw': {'test': open_func('COTE-MFW/test.tsv'),
                        'train': open_func('COTE-MFW/train.tsv')}}

3.2 Define the Data Loader

The idea is similar, but note that this time it is token-level classification. In the data reader, the label is converted to BIO tags, so that every token has its own label.
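As a toy illustration of the BIO scheme (hand-written and character-level for simplicity; the actual reader below operates on tokenizer tokens and handles multiple occurrences via text.split(label)):

```python
# Mark the opinion target with B on its first character, I on the rest, and O elsewhere.
text = '《鸟人》一书以鸟博士的遭遇作为主线'   # truncated COTE-BD training sample shown above
label = '鸟人'                              # the opinion target to extract

bio = ['O'] * len(text)
start = text.find(label)
if start != -1:
    bio[start] = 'B'
    for i in range(start + 1, start + len(label)):
        bio[i] = 'I'

for ch, tag in zip(text, bio):
    print(ch, tag)
```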

# Define the dataset
from paddle.io import Dataset, DataLoader
from paddlenlp.data import Pad, Stack, Tuple
import numpy as np
label_list = {'B': 0, 'I': 1, 'O': 2}
index2label = {0: 'B', 1: 'I', 2: 'O'}

# Token-level dataset; token_type_ids are not needed here
class MyDataset(Dataset):
    def __init__(self, data, tokenizer, max_len=512, for_test=False):
        super().__init__()
        self._data = data
        self._tokenizer = tokenizer
        self._max_len = max_len
        self._for_test = for_test
    
    def __len__(self):
        return len(self._data)
    
    def __getitem__(self, idx):
        samples = self._data[idx].split('\t')
        label = samples[-2]
        text = samples[-1]
        if self._for_test:
            origin_enc = self._tokenizer.encode(text, max_seq_len=self._max_len)['input_ids']
            return np.array(origin_enc, dtype='int64')
        else:
            
            # Since not every character maps to exactly one token, use a simple trick: encode the label first,
            # then encode the text segments around the label, and finally concatenate everything.
            texts = text.split(label)
            label_enc = self._tokenizer.encode(label)['input_ids']
            cls_enc = label_enc[0]
            sep_enc = label_enc[-1]
            label_enc = label_enc[1:-1]
            
            # Merge
            origin_enc = []
            label_ids = []
            for index, text in enumerate(texts):
                text_enc = self._tokenizer.encode(text)['input_ids']
                text_enc = text_enc[1:-1]
                origin_enc += text_enc
                label_ids += [label_list['O']] * len(text_enc)
                if index != len(texts) - 1:
                    origin_enc += label_enc
                    label_ids += [label_list['B']] + [label_list['I']] * (len(label_enc) - 1)

            origin_enc = [cls_enc] + origin_enc + [sep_enc]
            label_ids = [label_list['O']] + label_ids + [label_list['O']]
            
            # Truncate
            if len(origin_enc) > self._max_len:
                origin_enc = origin_enc[:self._max_len-1] + origin_enc[-1:]
                label_ids = label_ids[:self._max_len-1] + label_ids[-1:]
            return np.array(origin_enc, dtype='int64'), np.array(label_ids, dtype='int64')


def batchify_fn(for_test=False):
    if for_test:
        return lambda samples, fn=Pad(axis=0, pad_val=tokenizer.pad_token_id): np.row_stack([data for data in fn(samples)])
    else:
        return lambda samples, fn=Tuple(Pad(axis=0, pad_val=tokenizer.pad_token_id),
                                        Pad(axis=0, pad_val=label_list['O'])): [data for data in fn(samples)]


def get_data_loader(data, tokenizer, batch_size=32, max_len=512, for_test=False):
    dataset = MyDataset(data, tokenizer, max_len, for_test)
    shuffle = True if not for_test else False
    data_loader = DataLoader(dataset=dataset, batch_size=batch_size, collate_fn=batchify_fn(for_test), shuffle=shuffle)
    return data_loader

3.3 Build and Train the Model

The difference from before is that the model is now a token classifier. Since Accuracy is no longer suitable for token classification here, Perplexity is used to roughly gauge prediction quality (the closer to 1, the better).

import paddle
from paddle.static import InputSpec
from paddlenlp.metrics import Perplexity

# Model and tokenizer
model = SkepForTokenClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=3)
tokenizer = SkepTokenizer.from_pretrained('skep_ernie_1.0_large_ch')

# model = ErnieForTokenClassification.from_pretrained('ernie-1.0', num_classes=3)
# tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')   

# model = BertForTokenClassification.from_pretrained('bert-wwm-ext-chinese', num_classes=3)
# tokenizer = BertTokenizer.from_pretrained('bert-wwm-ext-chinese')

# Dataset selection: change this to switch datasets
# data_name = 'cotedp'
# data_name = 'cotebd'
data_name = 'cotemfw'
# Training hyperparameters
epochs = 10        # cotedp    /  cotebd10 16/
learning_rate = 2e-5     #2e-5 / 2e-5 /4e-5
batch_size = 56  # cotedp 128 156 / cotebd 128 156 / cotemfw 156 196
max_len = 196      # 96 144 /  96  /  96

## Data loaders
train_dataloader = get_data_loader(data_dict[data_name]['train'], tokenizer, batch_size, max_len, for_test=False)

input = InputSpec((-1, -1), dtype='int64', name='input')
label = InputSpec((-1, -1, 3), dtype='int64', name='label')
model = paddle.Model(model, [input], [label])

# Prepare the model    # 1 2e-5
# step_each_epoch = len(train_dataloader)
# lr = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=learning_rate,
#                                                   T_max=step_each_epoch * epochs)
# optimizer = paddle.optimizer.Adam(learning_rate=lr, parameters=model.parameters()
# ,weight_decay=paddle.regularizer.L2Decay(3e-5))    # weight_decay=paddle.regularizer.L2Decay(3e-4)

# optimizer = paddle.optimizer.Adam(learning_rate=learning_rate, parameters=model.parameters(),weight_decay=paddle.regularizer.L2Decay(3e-4))

optimizer = paddle.optimizer.AdamW(learning_rate=learning_rate,parameters=model.parameters(),weight_decay=0.01)
model.prepare(optimizer, loss=paddle.nn.CrossEntropyLoss(), metrics=[Perplexity()]) 

Train on cotedp

# Start training on cotedp
model.fit(train_dataloader, batch_size=batch_size, epochs=epochs, save_freq=epochs, save_dir='./ckpt/cotedp')

Train on cotebd

# Start training on cotebd
model.fit(train_dataloader, batch_size=batch_size, epochs=epochs, save_freq=epochs, save_dir='./ckpt/cotebd')

Train on cotemfw

# Start training on cotemfw
model.fit(train_dataloader, batch_size=batch_size, epochs=epochs, save_freq=epochs, save_dir='./ckpt/cotemfw')

3.4 Predict and Save

# Dataset selection: change this to switch datasets (cotedp, cotebd, cotemfw)
# data_name = 'cotedp'
# data_name = 'cotebd'
data_name = 'cotemfw'
# Load the trained checkpoint
checkpoint_path = "./ckpt/" + data_name +  "/final"  # path to the saved checkpoint

model = SkepForTokenClassification.from_pretrained('skep_ernie_1.0_large_ch', num_classes=3)
# model = ErnieForTokenClassification.from_pretrained('ernie-1.0', num_classes=3)

input = InputSpec((-1, -1), dtype='int64', name='input')
model = paddle.Model(model, [input])
model.load(checkpoint_path)

# Load the test set
test_dataloader = get_data_loader(data_dict[data_name]['test'], tokenizer, batch_size, max_len, for_test=True)
# Save the results
save_file = {'cotebd': './submission/COTE_BD.tsv', 'cotedp': './submission/COTE_DP.tsv', 'cotemfw': './submission/COTE_MFW.tsv'}
predicts = []
input_ids = []
for batch in test_dataloader:
    predict = model.predict_batch(batch)
    predicts += predict[0].argmax(axis=-1).tolist()
    input_ids += batch.numpy().tolist()

# Find each position tagged B (index 0), then collect the consecutive positions tagged I (index 1) that follow it; together they form one extracted entity.
def find_entity(prediction, input_ids):
    entity = []
    entity_ids = []
    for index, idx in enumerate(prediction):
        if idx == label_list['B']:
            entity_ids = [input_ids[index]]
        elif idx == label_list['I']:
            if entity_ids:
                entity_ids.append(input_ids[index])
        elif idx == label_list['O']:
            if entity_ids:
                entity.append(''.join(tokenizer.convert_ids_to_tokens(entity_ids)))
                entity_ids = []
    return entity

with open(save_file[data_name], 'w', encoding='utf8') as f:
    f.write("index\tprediction\n")
    for idx, sample in enumerate(data_dict[data_name]['test']):
        qid = sample.split('\t')[0]
        entity = find_entity(predicts[idx], input_ids[idx])
        f.write(qid + '\t' + '\x01'.join(entity) + '\n')
    f.close()

Compress the prediction files into a zip archive and submit it to the Qianyan competition site.

NOTE: In the results folder, files such as NLPCC14-SC.tsv, SE-ABSA16_CAME.tsv, COTE_BD.tsv, COTE_MFW.tsv, and COTE_DP.tsv were added simply so that the submission would go through; their results still need improvement.


# Compress the prediction files into a zip archive for submission
!zip -r submission.zip submission

References & Courses

1) Baidu PaddlePaddle course: Natural Language Processing Based on Deep Learning
2) 『NLP直播课』 Day 5: The SKEP Pretrained Model for Sentiment Analysis
3) Sentiment Analysis Competition on the Qianyan Datasets with the SKEP Pretrained Model in PaddleNLP

