『Classic NLP Projects』03: Picking a New Year's Eve Dinner with Sentiment Analysis

Come pick a tasty New Year's Eve dinner: see how to build a custom dataset and implement sentiment analysis, a text classification task.
Sentiment analysis is a perennial task in natural language processing. Sentence-level sentiment analysis aims to determine the speaker's sentiment polarity, such as a clearly stated opinion on some topic or an expressed emotional state. It has wide applications, such as e-commerce review analysis and public opinion monitoring.


Environment
PaddlePaddle framework: the latest 2.0 release comes pre-installed on the AI Studio platform.

PaddleNLP is deeply compatible with framework 2.0 and is the best practice for NLP on PaddlePaddle 2.0.

A pre-release build is used here; the stable release is coming soon. AI Studio will install PaddleNLP by default later on; until then, install it with the command below.

Remember to give PaddleNLP a little Star⭐

Open source is hard work; we appreciate your support~

GitHub repo: https://github.com/PaddlePaddle/PaddleNLP

In [1]
# Install the latest paddlenlp
!pip install --upgrade paddlenlp -i https://pypi.org/simple
Requirement already satisfied: paddlenlp in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2.0.0rc18)
Requirement already satisfied: h5py in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (2.9.0)
Requirement already satisfied: jieba in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.42.1)
Requirement already satisfied: visualdl in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (2.1.1)
Requirement already satisfied: colorama in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.4.4)
Requirement already satisfied: colorlog in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (4.1.0)
Requirement already satisfied: seqeval in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.2.2)
Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.20.2)
Requirement already satisfied: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.15.0)
Requirement already satisfied: scikit-learn>=0.21.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from seqeval->paddlenlp) (0.24.1)
Requirement already satisfied: scipy>=0.19.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (1.6.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (2.1.0)
Requirement already satisfied: joblib>=0.11 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (0.14.1)
Requirement already satisfied: protobuf>=3.11.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (3.14.0)
Requirement already satisfied: shellcheck-py in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (0.7.1.1)
Requirement already satisfied: flake8>=3.7.9 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (3.8.2)
Requirement already satisfied: flask>=1.1.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (1.1.1)
Requirement already satisfied: pre-commit in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (1.21.0)
Requirement already satisfied: Pillow>=7.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (7.1.2)
Requirement already satisfied: bce-python-sdk in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (0.8.53)
Requirement already satisfied: Flask-Babel>=1.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (1.0.0)
Requirement already satisfied: requests in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (2.22.0)
Requirement already satisfied: mccabe<0.7.0,>=0.6.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp) (0.6.1)
Requirement already satisfied: pyflakes<2.3.0,>=2.2.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp) (2.2.0)
Requirement already satisfied: pycodestyle<2.7.0,>=2.6.0a1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp) (2.6.0)
Requirement already satisfied: importlib-metadata in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp) (0.23)
Requirement already satisfied: Werkzeug>=0.15 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp) (0.16.0)
Requirement already satisfied: itsdangerous>=0.24 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp) (1.1.0)
Requirement already satisfied: click>=5.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp) (7.0)
Requirement already satisfied: Jinja2>=2.10.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp) (2.10.1)
Requirement already satisfied: Babel>=2.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel>=1.0.0->visualdl->paddlenlp) (2.8.0)
Requirement already satisfied: pytz in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel>=1.0.0->visualdl->paddlenlp) (2019.3)
Requirement already satisfied: MarkupSafe>=0.23 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Jinja2>=2.10.1->flask>=1.1.1->visualdl->paddlenlp) (1.1.1)
Requirement already satisfied: pycryptodome>=3.8.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from bce-python-sdk->visualdl->paddlenlp) (3.9.9)
Requirement already satisfied: future>=0.6.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from bce-python-sdk->visualdl->paddlenlp) (0.18.0)
Requirement already satisfied: zipp>=0.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from importlib-metadata->flake8>=3.7.9->visualdl->paddlenlp) (0.6.0)
Requirement already satisfied: more-itertools in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from zipp>=0.5->importlib-metadata->flake8>=3.7.9->visualdl->paddlenlp) (7.2.0)
Requirement already satisfied: nodeenv>=0.11.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp) (1.3.4)
Requirement already satisfied: toml in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp) (0.10.0)
Requirement already satisfied: virtualenv>=15.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp) (16.7.9)
Requirement already satisfied: aspy.yaml in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp) (1.3.0)
Requirement already satisfied: cfgv>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp) (2.0.1)
Requirement already satisfied: identify>=1.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp) (1.4.10)
Requirement already satisfied: pyyaml in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp) (5.1.2)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp) (1.25.6)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp) (2019.9.11)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp) (2.8)
WARNING: You are using pip version 21.0.1; however, version 21.1 is available.
You should consider upgrading via the '/opt/conda/envs/python35-paddle120-env/bin/python -m pip install --upgrade pip' command.
Check the installed versions

In [2]
import paddle
import paddlenlp

print(paddle.__version__, paddlenlp.__version__)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:26: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  def convert_to_list(value, n, name, dtype=np.int):
2.0.2 2.0.0rc18
How do PaddleNLP and the Paddle framework relate?

The Paddle framework is the foundation, providing end-to-end APIs for deep learning tasks. PaddleNLP is built on top of the Paddle framework and is tailored to NLP tasks.
APIs in PaddleNLP for data processing, datasets, network building blocks, and so on will eventually be folded into the framework's paddle.text module.

In code, this shows up as inheritance such as class TSVDataset(paddle.io.Dataset).
The general workflow for completing a deep learning task with PaddlePaddle:

1. Dataset and data processing
   paddle.io.Dataset
   paddle.io.DataLoader
   paddlenlp.data

2. Network construction and configuration
   paddle.nn.Embedding
   paddlenlp.seq2vec
   paddle.nn.Linear
   paddle.tanh
   paddle.nn.CrossEntropyLoss, paddle.metric.Accuracy, paddle.optimizer
   model.prepare

3. Training and evaluation
   model.fit
   model.evaluate

4. Prediction
   model.predict

In [3]
import numpy as np
from functools import partial

import paddle.nn as nn
import paddle.nn.functional as F
import paddlenlp as ppnlp
from paddlenlp.data import Pad, Stack, Tuple
from paddlenlp.datasets import MapDataset

from utils import load_vocab, convert_example
Dataset and data processing
Custom datasets
A map-style dataset must subclass paddle.io.Dataset and implement two methods (a minimal sketch follows this list):

__getitem__: returns the sample at a given index; paddle.io.DataLoader uses it to fetch samples by index.

__len__: returns the number of samples in the dataset; paddle.io.BatchSampler needs it to generate index sequences.
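A minimal, self-contained sketch of the two methods above (toy in-memory samples, not this project's data):

import paddle

class ToyDataset(paddle.io.Dataset):
    """A map-style dataset over an in-memory list of (text, label) pairs."""
    def __init__(self, samples):
        self.samples = samples

    def __getitem__(self, idx):
        # paddle.io.DataLoader calls this to fetch one sample by index.
        return self.samples[idx]

    def __len__(self):
        # paddle.io.BatchSampler calls this to build index sequences.
        return len(self.samples)

ds = ToyDataset([('好吃', 1), ('难吃', 0)])
print(len(ds), ds[0])  # 2 ('好吃', 1)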

In [4]
from paddlenlp.datasets import load_dataset

def read(data_path):
    with open(data_path, 'r', encoding='utf-8') as f:
        for line in f:
            parts = line.strip('\n').split('\t')
            if len(parts) != 2:
                # Report and skip malformed lines instead of crashing on unpacking.
                print(len(parts), line)
                continue
            words, labels = parts
            yield {'tokens': words, 'labels': labels}

# data_path is forwarded to the read() function
train_ds = load_dataset(read, data_path='train.txt', lazy=False)
dev_ds = load_dataset(read, data_path='dev.txt', lazy=True)
test_ds = load_dataset(read, data_path='test.txt', lazy=True)
Let's see what the data looks like

In [5]
for i in range(10):
    print(train_ds[i])
{'tokens': '赢在心理,输在出品!杨枝太酸,三文鱼熟了,酥皮焗杏汁杂果可以换个名(九唔搭八)', 'labels': '0'}
{'tokens': '服务一般,客人多,服务员少,但食品很不错', 'labels': '1'}
{'tokens': '東坡肉竟然有好多毛,問佢地點解,佢地仲話係咁架\ue107\ue107\ue107\ue107\ue107\ue107\ue107冇天理,第一次食東坡肉有毛,波羅包就幾好食', 'labels': '0'}
{'tokens': '父亲节去的,人很多,口味还可以上菜快!但是结账的时候,算错了没有打折,我也忘记拿清单了。说好打8折的,收银员没有打,人太多一时自己也没有想起。不知道收银员忘记,还是故意那钱露入自己钱包。。', 'labels': '0'}
{'tokens': '吃野味,吃个新鲜,你当然一定要来广州吃鹿肉啦*价格便宜,量好足,', 'labels': '1'}
{'tokens': '味道几好服务都五错推荐鹅肝乳鸽飞鱼', 'labels': '1'}
{'tokens': '作为老字号,水准保持算是不错,龟岗分店可能是位置问题,人不算多,基本不用等位,自从抢了券,去过好几次了,每次都可以打85以上的评分,算是可以了~粉丝煲每次必点,哈哈,鱼也不错,还会来帮衬的,楼下还可以免费停车!', 'labels': '1'}
{'tokens': '边到正宗啊?味味都咸死人啦,粤菜讲求鲜甜,五知点解感多人话好吃。', 'labels': '0'}
{'tokens': '环境卫生差,出品垃圾,冇下次,不知所为', 'labels': '0'}
{'tokens': '和苑真是精致粤菜第一家,服务菜品都一流', 'labels': '1'}
Data processing
To turn the raw data into a format the model can read, this project processes it as follows:

First tokenize with jieba, then map each token to its id in the vocabulary.

Use the paddle.io.DataLoader interface to load data asynchronously with multiple worker threads.
This relies on PaddleNLP's data-processing APIs: PaddleNLP provides many common APIs for building efficient data pipelines for NLP tasks (a toy example follows the table).

API	Description
paddlenlp.data.Stack	Stacks N inputs with identical shapes into a batch; the output is the batch formed by stacking the inputs.
paddlenlp.data.Pad	Stacks N inputs into a batch, padding each input to the length of the longest one.
paddlenlp.data.Tuple	Wraps several batchify functions together, one per sample field.
More data-processing utilities: https://github.com/PaddlePaddle/PaddleNLP/blob/develop/docs/data.md
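To make the table concrete, here is a toy sketch of how these three compose (expected outputs in comments; not part of the project code):

from paddlenlp.data import Pad, Stack, Tuple

a, b = [1, 2, 3, 4], [5, 6, 7]
# Pad pads every sequence to the longest length in the batch with pad_val.
print(Pad(axis=0, pad_val=0)([a, b]))          # [[1 2 3 4] [5 6 7 0]]
# Stack stacks same-shape inputs into one batch array.
print(Stack(dtype='int64')([[1, 2], [3, 4]]))  # [[1 2] [3 4]]
# Tuple applies one batchify function per field of each (ids, label) sample.
batchify = Tuple(Pad(axis=0, pad_val=0), Stack(dtype='int64'))
ids, labels = batchify([(a, 0), (b, 1)])
print(ids, labels)                             # [[1 2 3 4] [5 6 7 0]] [0 1]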

In [6]
# Download the vocabulary file senta_word_dict.txt, used to build the token-to-id mapping.
!wget https://paddlenlp.bj.bcebos.com/data/senta_word_dict.txt

# Load the vocabulary
vocab = load_vocab('./senta_word_dict.txt')

for k, v in vocab.items():
    print(k, v)
    break
--2021-04-27 16:53:50--  https://paddlenlp.bj.bcebos.com/data/senta_word_dict.txt
Resolving paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)... 182.61.200.195, 182.61.200.229, 2409:8c00:6c21:10ad:0:ff:b00e:67d
Connecting to paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)|182.61.200.195|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14600150 (14M) [text/plain]
Saving to: ‘senta_word_dict.txt.9’

senta_word_dict.txt 100%[===================>]  13.92M  33.8MB/s    in 0.4s    

2021-04-27 16:53:50 (33.8 MB/s) - ‘senta_word_dict.txt.9’ saved [14600150/14600150]

[PAD] 0
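Before wiring up the DataLoader, here is a minimal sketch of the tokenize-then-look-up step described above (the project's real logic lives in convert_example from utils.py; this toy version only assumes the vocab just loaded):

import jieba

text = '味道几好,服务都五错'
tokens = jieba.lcut(text)
# Map each token to its vocabulary id, falling back to [UNK] for OOV tokens.
ids = [vocab.get(token, vocab.get('[UNK]', 1)) for token in tokens]
print(tokens, ids)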
Constructing the DataLoader
The create_dataloader function below builds the DataLoader objects needed for training and prediction.

paddle.io.DataLoader returns an iterator that yields data from dataset in the order specified by batch_sampler, loading it asynchronously.

batch_sampler: the DataLoader indexes into dataset and assembles mini-batches according to the mini-batch index lists that batch_sampler produces.

collate_fn: specifies how a list of samples is combined into a mini-batch. It must be a callable that implements the batching logic and returns each batch's data. Here we pass batchify_fn, which pads the generated data and also returns the actual sequence lengths.

In [7]
# Reads data and generates mini-batches.
def create_dataloader(dataset,
                      trans_function=None,
                      mode='train',
                      batch_size=1,
                      pad_token_id=0,
                      batchify_fn=None):
    # Apply the example-conversion function (tokenize + id lookup) if given.
    if trans_function:
        dataset = dataset.map(trans_function)

    # return_list: return each batch as a list of tensors
    # collate_fn: combines a list of samples into a mini-batch; here
    # batchify_fn pads the token ids and stacks lengths and labels.
    dataloader = paddle.io.DataLoader(
        dataset,
        return_list=True,
        batch_size=batch_size,
        collate_fn=batchify_fn)

    return dataloader

# functools.partial fixes some of a function's arguments (i.e. sets defaults) and returns a new function that is simpler to call.
trans_function = partial(
    convert_example,
    vocab=vocab,
    unk_token_id=vocab.get('[UNK]', 1),
    is_test=False)

# Batch the loaded samples so the model can compute over whole batches.
# Each sentence in a batch is padded to the longest text length within that batch.
batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=vocab['[PAD]']),  # input_ids
    Stack(dtype="int64"),  # seq len
    Stack(dtype="int64")  # label
): [data for data in fn(samples)]


train_loader = create_dataloader(
    train_ds,
    trans_function=trans_function,
    batch_size=128,
    mode='train',
    batchify_fn=batchify_fn)
dev_loader = create_dataloader(
    dev_ds,
    trans_function=trans_function,
    batch_size=128,
    mode='validation',
    batchify_fn=batchify_fn)
test_loader = create_dataloader(
    test_ds,
    trans_function=trans_function,
    batch_size=128,
    mode='test',
    batchify_fn=batchify_fn)

for i in train_loader:
    print(i)
    break
Building prefix dict from the default dictionary ...
2021-04-27 16:53:52,319 - DEBUG - Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
2021-04-27 16:53:52,322 - DEBUG - Loading model from cache /tmp/jieba.cache
Loading model cost 0.690 seconds.
2021-04-27 16:53:53,011 - DEBUG - Loading model cost 0.690 seconds.
Prefix dict has been built successfully.
2021-04-27 16:53:53,013 - DEBUG - Prefix dict has been built successfully.
[Tensor(shape=[128, 434], dtype=int64, place=CUDAPinnedPlace, stop_gradient=True,
       [[656582 , 666970 , 646434 , ..., 0      , 0      , 0      ],
        [724601 , 1250380, 1106339, ..., 0      , 0      , 0      ],
        [1      , 1232128, 389886 , ..., 0      , 0      , 0      ],
        ...,
        [653811 , 1225884, 952595 , ..., 0      , 0      , 0      ],
        [137984 , 261577 , 850865 , ..., 0      , 0      , 0      ],
        [115700 , 364716 , 509081 , ..., 0      , 0      , 0      ]]), Tensor(shape=[128], dtype=int64, place=CUDAPinnedPlace, stop_gradient=True,
       [27 , 13 , 40 , 60 , 22 , 11 , 67 , 20 , 12 , 11 , 10 , 85 , 11 , 15 , 261, 8  , 81 , 10 , 13 , 96 , 32 , 169, 20 , 16 , 25 , 29 , 26 , 107, 12 , 65 , 20 , 20 , 22 , 74 , 124, 207, 16 , 13 , 31 , 205, 28 , 12 , 46 , 68 , 434, 10 , 12 , 61 , 17 , 14 , 19 , 34 , 141, 44 , 135, 293, 39 , 10 , 22 , 41 , 14 , 16 , 185, 24 , 8  , 42 , 36 , 19 , 13 , 9  , 29 , 12 , 61 , 151, 11 , 11 , 20 , 16 , 24 , 104, 12 , 9  , 11 , 72 , 48 , 16 , 317, 57 , 91 , 11 , 278, 37 , 97 , 11 , 12 , 12 , 25 , 58 , 27 , 12 , 205, 23 , 22 , 17 , 10 , 107, 40 , 34 , 9  , 59 , 9  , 26 , 25 , 93 , 14 , 13 , 14 , 20 , 21 , 25 , 13 , 32 , 11 , 24 , 16 , 117, 43 , 11 ]), Tensor(shape=[128], dtype=int64, place=CUDAPinnedPlace, stop_gradient=True,
       [0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1])]
Building the model
Use LSTMEncoder to build a BiLSTM model for sentence modeling and obtain a vector representation of each sentence.

Then attach a linear layer to complete the binary classification task.

paddle.nn.Embedding builds the word-embedding layer
ppnlp.seq2vec.LSTMEncoder builds the sentence-modeling layer
paddle.nn.Linear builds the binary classifier


Figure 1: seq2vec diagram

Besides LSTM, seq2vec provides many other semantic representation methods; see the seq2vec documentation for details.
In [8]
class LSTMModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 lstm_hidden_size=198,
                 direction='forward',
                 lstm_layers=1,
                 dropout_rate=0,
                 pooling_type=None,
                 fc_hidden_size=96):
        super().__init__()

        # First map the input word ids to word embeddings via table lookup
        self.embedder = nn.Embedding(
            num_embeddings=vocab_size,
            embedding_dim=emb_dim,
            padding_idx=padding_idx)

        # Transform the word embeddings into the text's semantic representation space with LSTMEncoder
        self.lstm_encoder = ppnlp.seq2vec.LSTMEncoder(
            emb_dim,
            lstm_hidden_size,
            num_layers=lstm_layers,
            direction=direction,
            dropout=dropout_rate,
            pooling_type=pooling_type)

        # LSTMEncoder.get_output_dim() returns the hidden size of the text representation produced by the encoder
        self.fc = nn.Linear(self.lstm_encoder.get_output_dim(), fc_hidden_size)

        # The final classifier
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text, seq_len):
        # text shape: (batch_size, num_tokens)
        # print('input :', text.shape)
        
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        # print('after word-embeding:', embedded_text.shape)

        # Shape: (batch_size, num_tokens, num_directions*lstm_hidden_size)
        # num_directions = 2 if direction is 'bidirectional' else 1
        text_repr = self.lstm_encoder(embedded_text, sequence_length=seq_len)
        # print('after lstm:', text_repr.shape)


        # Shape: (batch_size, fc_hidden_size)
        fc_out = paddle.tanh(self.fc(text_repr))
        # print('after Linear classifier:', fc_out.shape)

        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        # print('output:', logits.shape)
        
        # probs: classification probabilities
        probs = F.softmax(logits, axis=-1)
        # print('output probability:', probs.shape)
        return probs

model = LSTMModel(
        len(vocab),
        2,
        direction='bidirectional',
        padding_idx=vocab['[PAD]'])
model = paddle.Model(model)
Model configuration and training
Model configuration
In [9]
optimizer = paddle.optimizer.Adam(
        parameters=model.parameters(), learning_rate=5e-5)

loss = paddle.nn.CrossEntropyLoss()
metric = paddle.metric.Accuracy()

model.prepare(optimizer, loss, metric)
In [10]
# Set the VisualDL log directory
log_dir = './visualdl'
callback = paddle.callbacks.VisualDL(log_dir=log_dir)
Model training
Loss, accuracy, and other metrics are printed during training. With the 10 epochs configured here, accuracy on the training set reaches about 97%.



In [11]
model.fit(train_loader, dev_loader, epochs=10, save_dir='./checkpoints', save_freq=5, callbacks=callback)
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/10
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
step  10/125 - loss: 0.7000 - acc: 0.4813 - 119ms/step
step  20/125 - loss: 0.6922 - acc: 0.4957 - 101ms/step
step  30/125 - loss: 0.6930 - acc: 0.4846 - 95ms/step
step  40/125 - loss: 0.6921 - acc: 0.5066 - 92ms/step
step  50/125 - loss: 0.6898 - acc: 0.5112 - 91ms/step
step  60/125 - loss: 0.6933 - acc: 0.5109 - 89ms/step
step  70/125 - loss: 0.6925 - acc: 0.5119 - 88ms/step
step  80/125 - loss: 0.6887 - acc: 0.5105 - 87ms/step
step  90/125 - loss: 0.6916 - acc: 0.5104 - 87ms/step
step 100/125 - loss: 0.6871 - acc: 0.5117 - 87ms/step
step 110/125 - loss: 0.6834 - acc: 0.5115 - 87ms/step
step 120/125 - loss: 0.6847 - acc: 0.5117 - 87ms/step
step 125/125 - loss: 0.6829 - acc: 0.5153 - 85ms/step
save checkpoint at /home/aistudio/checkpoints/0
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.6849 - acc: 0.6859 - 85ms/step
step  20 - loss: 0.6827 - acc: 0.6859 - 69ms/step
step  30 - loss: 0.6824 - acc: 0.6810 - 65ms/step
step  40 - loss: 0.6819 - acc: 0.6787 - 63ms/step
step  50 - loss: 0.6842 - acc: 0.6791 - 62ms/step
step  60 - loss: 0.6788 - acc: 0.6813 - 61ms/step
step  70 - loss: 0.6838 - acc: 0.6783 - 60ms/step
step  80 - loss: 0.6825 - acc: 0.6778 - 59ms/step
Epoch 2/10
step  10/125 - loss: 0.6819 - acc: 0.7281 - 106ms/step
step  20/125 - loss: 0.6781 - acc: 0.7441 - 95ms/step
step  30/125 - loss: 0.6711 - acc: 0.7495 - 91ms/step
step  40/125 - loss: 0.6615 - acc: 0.7406 - 89ms/step
step  50/125 - loss: 0.6463 - acc: 0.7331 - 88ms/step
step  60/125 - loss: 0.6270 - acc: 0.7367 - 87ms/step
step  70/125 - loss: 0.6003 - acc: 0.7490 - 89ms/step
step  80/125 - loss: 0.5345 - acc: 0.7583 - 89ms/step
step  90/125 - loss: 0.4386 - acc: 0.7715 - 89ms/step
step 100/125 - loss: 0.4193 - acc: 0.7818 - 89ms/step
step 110/125 - loss: 0.4506 - acc: 0.7901 - 89ms/step
step 120/125 - loss: 0.4275 - acc: 0.7994 - 89ms/step
step 125/125 - loss: 0.4129 - acc: 0.8025 - 87ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.4331 - acc: 0.8945 - 87ms/step
step  20 - loss: 0.4258 - acc: 0.8906 - 71ms/step
step  30 - loss: 0.4290 - acc: 0.8943 - 67ms/step
step  40 - loss: 0.3972 - acc: 0.8990 - 65ms/step
step  50 - loss: 0.4479 - acc: 0.8969 - 64ms/step
step  60 - loss: 0.4437 - acc: 0.8977 - 62ms/step
step  70 - loss: 0.4344 - acc: 0.8981 - 61ms/step
step  80 - loss: 0.4192 - acc: 0.8996 - 60ms/step
Epoch 3/10
step  10/125 - loss: 0.4658 - acc: 0.8930 - 108ms/step
step  20/125 - loss: 0.4429 - acc: 0.8996 - 97ms/step
step  30/125 - loss: 0.4438 - acc: 0.9021 - 94ms/step
step  40/125 - loss: 0.4116 - acc: 0.9066 - 91ms/step
step  50/125 - loss: 0.4034 - acc: 0.9097 - 90ms/step
step  60/125 - loss: 0.3932 - acc: 0.9145 - 89ms/step
step  70/125 - loss: 0.3669 - acc: 0.9170 - 88ms/step
step  80/125 - loss: 0.3966 - acc: 0.9169 - 88ms/step
step  90/125 - loss: 0.3711 - acc: 0.9185 - 89ms/step
step 100/125 - loss: 0.3672 - acc: 0.9186 - 90ms/step
step 110/125 - loss: 0.3988 - acc: 0.9193 - 91ms/step
step 120/125 - loss: 0.3968 - acc: 0.9207 - 91ms/step
step 125/125 - loss: 0.3951 - acc: 0.9209 - 90ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3975 - acc: 0.9227 - 86ms/step
step  20 - loss: 0.3890 - acc: 0.9250 - 70ms/step
step  30 - loss: 0.3910 - acc: 0.9281 - 67ms/step
step  40 - loss: 0.3658 - acc: 0.9309 - 65ms/step
step  50 - loss: 0.3995 - acc: 0.9298 - 63ms/step
step  60 - loss: 0.3896 - acc: 0.9316 - 62ms/step
step  70 - loss: 0.3923 - acc: 0.9304 - 61ms/step
step  80 - loss: 0.3855 - acc: 0.9307 - 60ms/step
Epoch 4/10
step  10/125 - loss: 0.4250 - acc: 0.9203 - 106ms/step
step  20/125 - loss: 0.4077 - acc: 0.9250 - 96ms/step
step  30/125 - loss: 0.3832 - acc: 0.9276 - 93ms/step
step  40/125 - loss: 0.3941 - acc: 0.9307 - 91ms/step
step  50/125 - loss: 0.3826 - acc: 0.9320 - 90ms/step
step  60/125 - loss: 0.3651 - acc: 0.9362 - 89ms/step
step  70/125 - loss: 0.3523 - acc: 0.9390 - 89ms/step
step  80/125 - loss: 0.3846 - acc: 0.9382 - 89ms/step
step  90/125 - loss: 0.3591 - acc: 0.9389 - 89ms/step
step 100/125 - loss: 0.3527 - acc: 0.9387 - 89ms/step
step 110/125 - loss: 0.3755 - acc: 0.9393 - 89ms/step
step 120/125 - loss: 0.3805 - acc: 0.9402 - 88ms/step
step 125/125 - loss: 0.3760 - acc: 0.9397 - 87ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3964 - acc: 0.9336 - 86ms/step
step  20 - loss: 0.3742 - acc: 0.9395 - 70ms/step
step  30 - loss: 0.3723 - acc: 0.9393 - 67ms/step
step  40 - loss: 0.3611 - acc: 0.9424 - 64ms/step
step  50 - loss: 0.3913 - acc: 0.9411 - 63ms/step
step  60 - loss: 0.3518 - acc: 0.9428 - 62ms/step
step  70 - loss: 0.3900 - acc: 0.9414 - 61ms/step
step  80 - loss: 0.3832 - acc: 0.9417 - 60ms/step
Epoch 5/10
step  10/125 - loss: 0.3921 - acc: 0.9336 - 107ms/step
step  20/125 - loss: 0.3867 - acc: 0.9402 - 97ms/step
step  30/125 - loss: 0.3601 - acc: 0.9432 - 93ms/step
step  40/125 - loss: 0.3820 - acc: 0.9453 - 90ms/step
step  50/125 - loss: 0.3696 - acc: 0.9456 - 90ms/step
step  60/125 - loss: 0.3486 - acc: 0.9490 - 89ms/step
step  70/125 - loss: 0.3423 - acc: 0.9511 - 88ms/step
step  80/125 - loss: 0.3612 - acc: 0.9509 - 88ms/step
step  90/125 - loss: 0.3564 - acc: 0.9506 - 88ms/step
step 100/125 - loss: 0.3436 - acc: 0.9503 - 88ms/step
step 110/125 - loss: 0.3599 - acc: 0.9509 - 88ms/step
step 120/125 - loss: 0.3669 - acc: 0.9516 - 87ms/step
step 125/125 - loss: 0.3675 - acc: 0.9512 - 86ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3898 - acc: 0.9453 - 86ms/step
step  20 - loss: 0.3730 - acc: 0.9477 - 72ms/step
step  30 - loss: 0.3753 - acc: 0.9458 - 68ms/step
step  40 - loss: 0.3541 - acc: 0.9490 - 65ms/step
step  50 - loss: 0.3900 - acc: 0.9475 - 63ms/step
step  60 - loss: 0.3464 - acc: 0.9487 - 62ms/step
step  70 - loss: 0.3904 - acc: 0.9475 - 61ms/step
step  80 - loss: 0.3722 - acc: 0.9476 - 60ms/step
Epoch 6/10
step  10/125 - loss: 0.3836 - acc: 0.9391 - 108ms/step
step  20/125 - loss: 0.3795 - acc: 0.9473 - 97ms/step
step  30/125 - loss: 0.3588 - acc: 0.9500 - 93ms/step
step  40/125 - loss: 0.3793 - acc: 0.9506 - 90ms/step
step  50/125 - loss: 0.3588 - acc: 0.9511 - 92ms/step
step  60/125 - loss: 0.3394 - acc: 0.9543 - 92ms/step
step  70/125 - loss: 0.3363 - acc: 0.9563 - 91ms/step
step  80/125 - loss: 0.3525 - acc: 0.9563 - 91ms/step
step  90/125 - loss: 0.3413 - acc: 0.9562 - 90ms/step
step 100/125 - loss: 0.3384 - acc: 0.9560 - 90ms/step
step 110/125 - loss: 0.3526 - acc: 0.9566 - 90ms/step
step 120/125 - loss: 0.3714 - acc: 0.9575 - 90ms/step
step 125/125 - loss: 0.3588 - acc: 0.9570 - 88ms/step
save checkpoint at /home/aistudio/checkpoints/5
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3872 - acc: 0.9508 - 86ms/step
step  20 - loss: 0.3607 - acc: 0.9531 - 70ms/step
step  30 - loss: 0.3630 - acc: 0.9505 - 66ms/step
step  40 - loss: 0.3497 - acc: 0.9533 - 64ms/step
step  50 - loss: 0.3802 - acc: 0.9528 - 63ms/step
step  60 - loss: 0.3435 - acc: 0.9534 - 62ms/step
step  70 - loss: 0.3707 - acc: 0.9523 - 61ms/step
step  80 - loss: 0.3673 - acc: 0.9525 - 59ms/step
Epoch 7/10
step  10/125 - loss: 0.3746 - acc: 0.9484 - 108ms/step
step  20/125 - loss: 0.3739 - acc: 0.9539 - 97ms/step
step  30/125 - loss: 0.3486 - acc: 0.9563 - 93ms/step
step  40/125 - loss: 0.3720 - acc: 0.9564 - 90ms/step
step  50/125 - loss: 0.3563 - acc: 0.9567 - 90ms/step
step  60/125 - loss: 0.3360 - acc: 0.9589 - 89ms/step
step  70/125 - loss: 0.3387 - acc: 0.9600 - 88ms/step
step  80/125 - loss: 0.3457 - acc: 0.9594 - 88ms/step
step  90/125 - loss: 0.3298 - acc: 0.9595 - 88ms/step
step 100/125 - loss: 0.3361 - acc: 0.9595 - 88ms/step
step 110/125 - loss: 0.3478 - acc: 0.9600 - 88ms/step
step 120/125 - loss: 0.3602 - acc: 0.9612 - 88ms/step
step 125/125 - loss: 0.3607 - acc: 0.9607 - 86ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3754 - acc: 0.9500 - 89ms/step
step  20 - loss: 0.3592 - acc: 0.9551 - 72ms/step
step  30 - loss: 0.3609 - acc: 0.9542 - 68ms/step
step  40 - loss: 0.3503 - acc: 0.9561 - 65ms/step
step  50 - loss: 0.3831 - acc: 0.9547 - 64ms/step
step  60 - loss: 0.3390 - acc: 0.9549 - 62ms/step
step  70 - loss: 0.3740 - acc: 0.9540 - 61ms/step
step  80 - loss: 0.3720 - acc: 0.9543 - 60ms/step
Epoch 8/10
step  10/125 - loss: 0.3694 - acc: 0.9523 - 105ms/step
step  20/125 - loss: 0.3669 - acc: 0.9582 - 95ms/step
step  30/125 - loss: 0.3429 - acc: 0.9602 - 91ms/step
step  40/125 - loss: 0.3698 - acc: 0.9605 - 89ms/step
step  50/125 - loss: 0.3502 - acc: 0.9611 - 88ms/step
step  60/125 - loss: 0.3374 - acc: 0.9628 - 87ms/step
step  70/125 - loss: 0.3312 - acc: 0.9643 - 86ms/step
step  80/125 - loss: 0.3453 - acc: 0.9639 - 86ms/step
step  90/125 - loss: 0.3283 - acc: 0.9636 - 86ms/step
step 100/125 - loss: 0.3336 - acc: 0.9635 - 86ms/step
step 110/125 - loss: 0.3439 - acc: 0.9641 - 86ms/step
step 120/125 - loss: 0.3661 - acc: 0.9650 - 86ms/step
step 125/125 - loss: 0.3515 - acc: 0.9646 - 84ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3727 - acc: 0.9570 - 86ms/step
step  20 - loss: 0.3530 - acc: 0.9594 - 70ms/step
step  30 - loss: 0.3584 - acc: 0.9583 - 65ms/step
step  40 - loss: 0.3464 - acc: 0.9602 - 63ms/step
step  50 - loss: 0.3773 - acc: 0.9589 - 62ms/step
step  60 - loss: 0.3367 - acc: 0.9590 - 60ms/step
step  70 - loss: 0.3668 - acc: 0.9580 - 60ms/step
step  80 - loss: 0.3629 - acc: 0.9580 - 58ms/step
Epoch 9/10
step  10/125 - loss: 0.3592 - acc: 0.9547 - 132ms/step
step  20/125 - loss: 0.3615 - acc: 0.9609 - 109ms/step
step  30/125 - loss: 0.3395 - acc: 0.9635 - 101ms/step
step  40/125 - loss: 0.3682 - acc: 0.9635 - 96ms/step
step  50/125 - loss: 0.3471 - acc: 0.9639 - 94ms/step
step  60/125 - loss: 0.3349 - acc: 0.9651 - 92ms/step
step  70/125 - loss: 0.3277 - acc: 0.9666 - 91ms/step
step  80/125 - loss: 0.3451 - acc: 0.9665 - 91ms/step
step  90/125 - loss: 0.3267 - acc: 0.9663 - 91ms/step
step 100/125 - loss: 0.3312 - acc: 0.9661 - 90ms/step
step 110/125 - loss: 0.3420 - acc: 0.9665 - 90ms/step
step 120/125 - loss: 0.3637 - acc: 0.9674 - 90ms/step
step 125/125 - loss: 0.3468 - acc: 0.9669 - 88ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3697 - acc: 0.9570 - 85ms/step
step  20 - loss: 0.3520 - acc: 0.9602 - 70ms/step
step  30 - loss: 0.3566 - acc: 0.9578 - 66ms/step
step  40 - loss: 0.3448 - acc: 0.9607 - 63ms/step
step  50 - loss: 0.3828 - acc: 0.9589 - 62ms/step
step  60 - loss: 0.3344 - acc: 0.9595 - 61ms/step
step  70 - loss: 0.3619 - acc: 0.9588 - 60ms/step
step  80 - loss: 0.3618 - acc: 0.9590 - 58ms/step
Epoch 10/10
step  10/125 - loss: 0.3575 - acc: 0.9586 - 110ms/step
step  20/125 - loss: 0.3568 - acc: 0.9645 - 97ms/step
step  30/125 - loss: 0.3379 - acc: 0.9659 - 93ms/step
step  40/125 - loss: 0.3679 - acc: 0.9654 - 91ms/step
step  50/125 - loss: 0.3738 - acc: 0.9645 - 90ms/step
step  60/125 - loss: 0.4039 - acc: 0.9525 - 89ms/step
step  70/125 - loss: 0.3904 - acc: 0.9396 - 88ms/step
step  80/125 - loss: 0.4073 - acc: 0.9311 - 88ms/step
step  90/125 - loss: 0.3616 - acc: 0.9302 - 88ms/step
step 100/125 - loss: 0.3367 - acc: 0.9303 - 89ms/step
step 110/125 - loss: 0.3530 - acc: 0.9316 - 89ms/step
step 120/125 - loss: 0.3690 - acc: 0.9343 - 89ms/step
step 125/125 - loss: 0.3408 - acc: 0.9346 - 87ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3913 - acc: 0.9414 - 88ms/step
step  20 - loss: 0.3812 - acc: 0.9445 - 71ms/step
step  30 - loss: 0.3584 - acc: 0.9464 - 66ms/step
step  40 - loss: 0.3487 - acc: 0.9480 - 63ms/step
step  50 - loss: 0.3882 - acc: 0.9473 - 62ms/step
step  60 - loss: 0.3747 - acc: 0.9478 - 61ms/step
step  70 - loss: 0.3802 - acc: 0.9467 - 60ms/step
step  80 - loss: 0.3563 - acc: 0.9471 - 58ms/step
save checkpoint at /home/aistudio/checkpoints/final
Launch VisualDL to visualize the training process
Steps:

1. Switch to the "Visualization" tab on the left of this page
2. Select 'visualdl' as the log directory
3. Click "Start VisualDL", then "Open VisualDL" to view the results: the live Accuracy and Loss curves.
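Outside AI Studio, the same logs can also be viewed by starting the VisualDL server from a terminal (assuming visualdl is installed in the environment; the port is arbitrary):

visualdl --logdir ./visualdl --port 8040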
In [12]
results = model.evaluate(dev_loader)
print("Finally test acc: %.5f" % results['acc'])
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step  10 - loss: 0.3913 - acc: 0.9414 - 90ms/step
step  20 - loss: 0.3812 - acc: 0.9445 - 74ms/step
step  30 - loss: 0.3584 - acc: 0.9464 - 72ms/step
step  40 - loss: 0.3487 - acc: 0.9480 - 71ms/step
step  50 - loss: 0.3882 - acc: 0.9473 - 70ms/step
step  60 - loss: 0.3747 - acc: 0.9478 - 70ms/step
step  70 - loss: 0.3802 - acc: 0.9467 - 69ms/step
step  80 - loss: 0.3563 - acc: 0.9471 - 67ms/step
Finally test acc: 0.94776
Prediction
In [13]
print(type(test_ds))
label_map = {0: 'negative', 1: 'positive'}
results = model.predict(test_loader, batch_size=128)[0]
predictions = []

for batch_probs in results:
    # Map predicted indices to class labels
    idx = np.argmax(batch_probs, axis=-1)
    idx = idx.tolist()
    labels = [label_map[i] for i in idx]
    predictions.extend(labels)
<class 'paddlenlp.datasets.dataset.IterDataset'>
Predict begin...
step   2 - 111ms/step
step   4 - 83ms/step 
step   6 - 92ms/step
step   8 - 92ms/step
step  10 - 85ms/step
step  12 - 80ms/step
step  14 - 77ms/step
step  16 - 75ms/step
step  18 - 72ms/step
step  20 - 71ms/step
step  22 - 70ms/step
step  24 - 69ms/step
step  26 - 68ms/step
step  28 - 67ms/step
step  30 - 66ms/step
step  32 - 66ms/step
step  34 - 65ms/step
step  36 - 65ms/step
step  38 - 63ms/step
step  40 - 61ms/step
step  41 - 60ms/step
Predict samples: 5248
In [15]
# Inspect the first few test samples and their predicted labels
for i in test_ds:
    print(i)
    break
    
for idx, data in enumerate(test_ds):
    if idx < 10:
        print(type(data))
        print('Data: {} \t Label: {}'.format(data[0], predictions[idx]))
([440695, 768730, 810400, 371677, 1106339, 995733, 237834, 891203, 1211275, 686770, 1106339, 440695, 117037, 830171, 475327, 1093154], array(16), array(0))
<class 'tuple'>
Data: [440695, 768730, 810400, 371677, 1106339, 995733, 237834, 891203, 1211275, 686770, 1106339, 440695, 117037, 830171, 475327, 1093154] 	 Label: negative
<class 'tuple'>
Data: [471791, 807117, 823116, 1, 891564, 1057229, 1, 1106326, 653811, 176187, 877695, 958129, 173188, 986608, 1106339, 781255, 830165, 213378, 1211275, 1057229, 36026, 1106326, 1106328, 1033199, 877695, 749968, 1106339, 147848, 830171, 489131, 958129, 1106339, 898857, 1131492, 930707, 173188, 399212, 1, 1222901, 508478, 823066, 1106326, 1106328, 651025, 781255, 252859, 860530, 681075, 1106339, 453143, 1, 650296, 173188, 681075, 173401, 830171, 997375, 819627, 283810, 1106326, 1106328] 	 Label: negative
<class 'tuple'>
Data: [451938, 696658, 940533, 1150076, 936106, 308649, 157793, 718272, 660347, 40882, 86562, 510099, 1106339, 1050713, 1188905, 711114, 69882, 686776, 961609, 1, 1106339, 178687, 1106328, 1057229, 511894, 1103825, 1106328, 606478, 703976, 823066, 1113416, 1152575, 1211275, 364716, 1092505, 877695, 1106339, 979319, 1106326, 1175853, 4783, 267842, 173188, 297759, 1106339, 786065, 1106339, 795996, 270281, 797371, 179568, 974589, 974589, 1, 1093512, 578978, 420194, 668761, 4783, 738354, 836929, 52172, 1106328, 713424, 830171, 35393, 898857, 472344, 1106339, 157793, 47104, 1191873, 718272, 389733, 738354, 952595, 1106339, 306239, 898857, 616980, 1106326, 1106328, 1106326, 1106328, 789340, 696658, 711103, 859353, 202379, 472344, 179582, 1106326, 26156, 1093873, 830171, 8885, 395173, 1106339, 592501, 202370, 1107855, 823066, 974589, 974589, 940533, 254014, 145550, 453139, 1193342, 940533, 1050739, 1106339, 1135123, 1, 602020, 179568] 	 Label: negative
<class 'tuple'>
Data: [1086178, 173188, 1188905, 650296, 389733, 1188905, 823116, 669226, 1106339, 669226, 604466, 836929, 52172, 1106339, 311074, 881196, 1044637, 690740, 1, 1106339, 766694, 1050713, 823066, 511894, 1, 1, 173188, 690740, 224268, 257857, 240658, 1106339, 874019, 1238094, 1057229, 220620, 173188, 1106339, 1238700, 758560, 1208182, 823066, 1106339, 975313, 1232128, 836929, 567563, 936106, 237795, 282921, 282921, 282921, 282921, 282921, 282921] 	 Label: negative
<class 'tuple'>
Data: [365921, 932416, 663024, 388662, 1106328, 1106328, 1106328, 1083318, 823066, 1216915, 819627, 453139, 1216915, 960087, 159950, 173188, 753452, 1106328] 	 Label: positive
<class 'tuple'>
Data: [384261, 305803, 173188, 1183540, 823116, 1106339, 847645, 389733, 563178, 1053035, 666970, 81913, 475331, 1203115, 991057, 1106339, 479740, 88185, 859353, 1111556, 347427, 283810, 169379, 769807, 305838, 322845, 218407, 955960, 1106339, 693836, 686770, 4782, 724601, 686770, 4782, 107509, 553338, 1, 1042186, 1110722, 544943, 4782, 881173, 1183540, 823116, 173188, 1173300, 4782, 881173, 1183540, 769807, 173188, 1173300, 4783] 	 Label: negative
<class 'tuple'>
Data: [376180, 639231, 261577, 656622, 894574, 1106328, 977151, 905376, 137984, 1057229, 250355, 173188, 629586, 876228, 1106339, 418471, 283810, 1211275, 799171, 1106339, 724601, 261577, 165869, 935402, 1106339, 860985, 261577, 1093154, 1106339, 820003, 543370, 894574, 1106328, 7215, 690740] 	 Label: positive
<class 'tuple'>
Data: [1066901, 1083318, 877695, 979812, 1173329, 173188, 514629, 1106321, 1083318, 859353, 1083318, 1096990, 1083318, 263393, 818554, 1110721, 475371, 1192261, 173188, 326984, 837335, 659913, 832658, 832658, 491743, 1106328, 1173329, 1106328, 1106328, 1106328, 1066901, 1083318, 877695, 979812, 1, 173188, 514629, 68233, 1106321, 473329, 1, 1, 1106339, 957006, 823153, 930660, 794154, 183395, 382678, 1, 105959, 173188, 726352, 1106339, 166181, 473329, 928384, 453139, 4777, 1053978, 4778, 1, 1106339, 1093157, 645827, 224268, 534222, 1, 1022416, 1, 1084561, 1, 1106339, 1, 668587, 457118, 1, 1204878, 651025, 1, 656594, 696404, 66941, 742636, 416380, 832658, 832658, 1, 1106328, 1, 1106328, 1106328, 876258, 1106328, 1106328, 1106328, 386946, 1, 353572, 400234, 173188, 709767, 1106321, 1066901, 1083318, 877695, 396639, 173188, 514629, 1106321, 453971, 1121397, 522245, 1211275, 1, 823066, 1106339, 684489, 802932, 1, 1106339, 767021, 69882, 823066, 768905, 1, 1152574, 1106322, 1, 686770, 1106328, 1106328, 1066901, 1083318, 877695, 396639, 173188, 514629, 1106321, 917345, 1, 4782, 917345, 726358, 1140069, 976571, 312864, 4782, 917345, 726358, 1, 4782, 917345, 726358, 1095073, 1, 173188, 560962, 1034586, 4782, 917345, 726358, 1, 173188, 1, 1087481, 853920, 938841, 961609, 743669, 107691, 1106322, 1, 244170, 1106328, 1106328, 1155475, 599615, 890242, 954040, 1, 242649, 173188, 298309, 1057229, 803636, 4782, 1, 173188, 298309, 1106339, 177893, 4782, 1, 173188, 1211275, 1084488, 602020, 1093154, 881196, 1, 1106328, 1106328, 242649, 173188, 298309, 1057229, 453148, 881196, 803636, 173188, 1071540, 1106328, 1106328, 1106328] 	 Label: negative
<class 'tuple'>
Data: [401981, 738738, 884754, 881187, 261577, 237795, 1106339, 497327, 690740, 297759, 261577, 1250380, 1106328, 810400, 429786, 686770, 1106328, 323771, 518954, 173188, 230599, 4782] 	 Label: negative
<class 'tuple'>
Data: [169379, 52794, 173188, 504130, 440821, 977875, 552526, 282092, 823066, 1106339, 867512, 163653, 768547, 436269, 25065, 4783, 178053, 1057229, 261577, 1250380, 1106339, 1121661, 663813, 898857, 254013, 40261, 173188, 650322, 179568, 4783, 860985, 237834, 693836, 806474, 1057229, 759049, 560962, 231110, 283810, 833337, 389733, 830171, 939335, 1106339, 312183, 284408, 453172, 1006966, 231110, 173188, 284408, 1106339, 830172, 389733, 261577, 1225280, 823066, 4783, 4783, 724601, 224268, 1057229, 23432, 318302, 4783] 	 Label: negative
Even with this basic model, we already reach fairly high accuracy.

Try a pretrained model for even better results! See "How to fine-tune a downstream task with a pretrained model"; a rough sketch follows.
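As a rough sketch of what that swap could look like with paddlenlp.transformers (the 'ernie-1.0' checkpoint name and the 2-class head are illustrative assumptions; the tokenization and data pipeline would change accordingly):

from paddlenlp.transformers import ErnieForSequenceClassification, ErnieTokenizer

# Load a pretrained ERNIE and attach a fresh binary-classification head.
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
model = ErnieForSequenceClassification.from_pretrained('ernie-1.0', num_classes=2)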

More PaddleNLP projects
See how to use PaddleNLP's built-in datasets: seq2vec-based sentiment analysis
How to fine-tune a downstream task with a pretrained model
Express waybill information extraction with a BiGRU-CRF model
Improving waybill information extraction with the pretrained model ERNIE
Automatic couplet generation with a Seq2Seq model
Intelligent poetry writing with the pretrained model ERNIE-GEN
Forecasting COVID-19 case counts with a TCN network
Reading comprehension with pretrained models