Task 4: Text Classification Based on Deep Learning (Part 1)

In the previous chapter we solved the text classification problem with traditional machine learning algorithms. Starting with this chapter, we will try deep learning methods.

Text Classification Based on Deep Learning

Unlike traditional machine learning, deep learning provides both feature extraction and classification. Starting with this chapter, we will learn how to use deep learning for text representation.

Learning Objectives

  • Learn the basic principles and usage of FastText
  • Learn how to tune hyperparameters with a validation set

Text Representation Methods, Part 2

Shortcomings of Existing Text Representation Methods

In the previous chapter, we introduced several text representation methods:

  • One-hot
  • Bag of Words
  • N-gram
  • TF-IDF

We also practiced these with sklearn, so you should have a preliminary feel for them. However, all of the above methods suffer from certain problems: the vectors they produce are very high-dimensional, so training takes a long time, and they merely count words without capturing any relationships between them.

Unlike these representations, deep learning can also be used for text representation, and it can map text into a low-dimensional space. Typical examples include FastText, Word2Vec, and BERT. In this chapter we introduce FastText; Word2Vec and BERT will be covered later.

FastText

FastText is a typical deep-learning word-vector representation method. It is very simple: an Embedding layer maps each word into a dense space, the embeddings of all words in a sentence are then averaged in that Embedding space, and the resulting vector is used for classification.

FastText is therefore a three-layer neural network: an input layer, a hidden layer, and an output layer.

[Figure: FastText model structure]

The figure below shows a FastText network structure implemented with Keras:

[Figure: FastText implemented in Keras]
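The screenshot itself is not reproduced here; the following is a minimal sketch of such a model in Keras. The vocabulary size, embedding dimension, sequence length, and class count are illustrative assumptions, not values from the original figure.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

VOCAB_SIZE = 2000    # assumed vocabulary size
EMBEDDING_DIM = 100  # assumed embedding dimension
MAX_WORDS = 500      # assumed (padded) input length
CLASS_NUM = 5        # assumed number of classes

model = Sequential([
    # Input layer: map each word id to a dense embedding vector
    Embedding(VOCAB_SIZE, EMBEDDING_DIM, input_length=MAX_WORDS),
    # Hidden layer: average the word embeddings over the sentence
    GlobalAveragePooling1D(),
    # Output layer: classify the averaged document vector
    Dense(CLASS_NUM, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()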

On text classification tasks, FastText outperforms TF-IDF:

  • FastText builds the document vector by summing the words' Embeddings, which groups similar sentences into the same class
  • The Embedding space FastText learns is relatively low-dimensional, so it can be trained quickly

If you want to study it more deeply, see the paper:

A. Joulin, E. Grave, P. Bojanowski, T. Mikolov. Bag of Tricks for Efficient Text Classification. https://arxiv.org/abs/1607.01759

Text Classification with FastText

FastText can be trained quickly on a CPU. The best practice is to use the officially open-sourced version:
https://github.com/facebookresearch/fastText/tree/master/python

  • Install via pip
pip install fasttext
  • Install from source
git clone https://github.com/facebookresearch/fastText.git
cd fastText
sudo pip install .

Either installation method works; if you are a beginner, prefer the pip installation.

  • Classification model
import pandas as pd
from sklearn.metrics import f1_score

# Convert the data to the format FastText expects: text<TAB>__label__<label>
train_df = pd.read_csv('../input/train_set.csv', sep='\t', nrows=15000)
train_df['label_ft'] = '__label__' + train_df['label'].astype(str)
# Hold out the last 5000 rows for validation; write the rest as the training file
train_df[['text','label_ft']].iloc[:-5000].to_csv('train.csv', index=False, header=False, sep='\t')

import fasttext
model = fasttext.train_supervised('train.csv', lr=1.0, wordNgrams=2, 
                                  verbose=2, minCount=1, epoch=25, loss="hs")

# Predict the held-out rows and strip the '__label__' prefix from each prediction
val_pred = [model.predict(x)[0][0].split('__')[-1] for x in train_df.iloc[-5000:]['text']]
print(f1_score(train_df['label'].values[-5000:].astype(str), val_pred, average='macro'))
# 0.82
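For reference, fasttext's predict returns a tuple of (labels, probabilities); the list comprehension above keeps the top label and strips its '__label__' prefix. A quick illustration (the sample text and printed labels are hypothetical):

labels, probs = model.predict('2967 6758 339 2021', k=2)  # hypothetical sample text
print(labels)  # e.g. ('__label__2', '__label__11'), the top-2 labels
print(probs)   # the corresponding probabilities, highest first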

With this relatively small dataset the score is 0.82. As the training set grows, FastText's accuracy keeps improving: with 50,000 training samples, the validation score reaches roughly 0.89-0.90.

How to Tune Hyperparameters with a Validation Set

When using TF-IDF and FastText, several model parameters must be chosen. These parameters affect the model's accuracy to some degree, so how should we pick them?

  • Read the documentation to understand roughly what each parameter means and which parameters increase model complexity
  • Check the model's accuracy on the validation set to determine whether the model is overfitting or underfitting (see the sketch after the figure below)

[Figure: train_val — training vs. validation performance]
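As a minimal sketch of that second check (the classifier and data arguments here are placeholders, not part of the original tutorial):

from sklearn.metrics import f1_score

def fit_report(model, X_train, y_train, X_val, y_val):
    # Fit once, then compare training and validation macro-F1:
    # a large train/val gap suggests overfitting, while two
    # similarly low scores suggest underfitting.
    model.fit(X_train, y_train)
    train_f1 = f1_score(y_train, model.predict(X_train), average='macro')
    val_f1 = f1_score(y_val, model.predict(X_val), average='macro')
    print('train F1: %.4f  val F1: %.4f' % (train_f1, val_f1))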

Here we use 10-fold cross-validation: each fold trains on 9/10 of the data and validates on the remaining 1/10. Note that each fold's split must keep the label distribution consistent with that of the full dataset.

# Group sample indices by label so the folds can be made label-stratified;
# all_labels holds the label of every sample and total = len(all_labels)
label2id = {}
for i in range(total):
    label = str(all_labels[i])
    if label not in label2id:
        label2id[label] = [i]
    else:
        label2id[label].append(i)

The 10-fold split gives us 10 subsets with consistent label distributions, indexed 0 through 9. Taking each subset in turn as the validation set and the rest as the training set yields 10 different splits of the full data. Without loss of generality, we use the last split for the remaining experiments: the subset with index 9 serves as the validation set and those with indices 0-8 as the training set, and we then tune the hyperparameters on the validation results to improve the model.
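A minimal sketch of completing that split from the label2id lists above (the assignment is round-robin, so each fold keeps roughly the overall label distribution):

fold_num = 10
folds = [[] for _ in range(fold_num)]
for label, indices in label2id.items():
    # Deal this label's sample indices round-robin into the 10 folds
    for pos, idx in enumerate(indices):
        folds[pos % fold_num].append(idx)

val_idx = folds[9]                             # fold 9: the validation set
train_idx = [i for f in folds[:9] for i in f]  # folds 0-8: the training set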

Chapter Summary

This chapter introduced the principles and basic usage of FastText and put them into practice, then showed how to split the dataset with 10-fold cross-validation.

Chapter Homework

  • Read the FastText documentation and try tuning the parameters to get a better score
  • Tune the hyperparameters based on the validation results to improve model performance

The notebook cells below work through these exercises.
!pip install --user fasttext
!pip install --user scikit-learn
import torch
import sklearn
import fasttext
import pandas as pd
from sklearn.metrics import f1_score
torch.rand(5,3)
tensor([[0.4709, 0.6568, 0.3004],
        [0.5398, 0.8707, 0.2022],
        [0.9003, 0.6427, 0.3421],
        [0.6209, 0.2710, 0.3932],
        [0.0468, 0.6451, 0.8993]])
torch.cuda.is_available()
False
# TF-IDF + RidgeClassifier baseline: train on the first 10000 rows, validate on the last 5000
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score
train_df = pd.read_csv('./Demo/DataSets/train_set.csv', sep='\t', nrows=15000)
tfidf = TfidfVectorizer(ngram_range=(1,3), max_features=3000)
train_test = tfidf.fit_transform(train_df['text'])

clf = RidgeClassifier()
clf.fit(train_test[:10000], train_df['label'].values[:10000])

val_pred = clf.predict(train_test[10000:])
print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))
0.8721598830546126
# Grid search over the TF-IDF n-gram range and vocabulary size,
# scoring macro-F1 on the held-out rows
for i in range(1,6):
    for j in [1000,2000,3000,4000,5000]:
        tfidf = TfidfVectorizer(ngram_range=(1,i), max_features=j)
        train_test = tfidf.fit_transform(train_df['text'])

        clf = RidgeClassifier()
        clf.fit(train_test[:10000], train_df['label'].values[:10000])

        val_pred = clf.predict(train_test[10000:])
        print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))
0.835944644945302
0.8607807562975801
0.858329649339088
0.8601916764212559
0.8605835796445032
0.8288900927279318
0.8584782097110735
0.8719465729628795
0.8794638920291346
0.886402018449437
0.8270776630718544
0.8603842642428617
0.8721598830546126
0.8753945850878357
0.8850817067811825
0.8257413191799515
0.8620590192116346
0.8738210287555335
0.8745264965071714
0.8849155025217313
0.8274187942167942
0.8627486150249901
0.8753002643279153
0.8751515660504635
0.8853177666563177
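Reading off the grid, the best validation score (0.8864) comes from ngram_range=(1,2) with max_features=5000, so those settings are fixed for the cells below.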
tfidf = TfidfVectorizer(ngram_range=(1,2), max_features=5000)
train_test = tfidf.fit_transform(train_df['text'])
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn import svm

# 10-fold cross-validated grid search over SVM hyperparameters
# (shuffle=True so that random_state actually takes effect)
mean=[]
std=[]
kfold=KFold(n_splits=10, shuffle=True, random_state=22)
C=[0.9,1.0]
gamma=[0.1,0.2]
kernel=['linear','rbf']
for i in C:
    for j in gamma:
        for k in kernel:
            # Note: gamma is ignored by the linear kernel
            result=cross_val_score(svm.SVC(kernel=k,C=i,gamma=j),
                                   train_test[:10000], train_df['label'].values[:10000],
                                   cv=kfold, scoring='accuracy')
            print(result.mean())
            mean.append(result.mean())
            print(result.std())
            std.append(result.std())
Frame=pd.DataFrame({'mean':mean,'std':std})
0.9072000000000001
0.005878775382679633
model=svm.SVC(kernel='linear',C=1.0,gamma=0.1)
model.fit(train_test[:10000], train_df['label'].values[:10000])
val_pred = model.predict(train_test[10000:])
print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))
0.8864261905222051
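With the same TF-IDF features, the tuned SVM (macro-F1 0.8864) edges out the Ridge baseline above (0.8722).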
# Convert to the format FastText expects
train_df['label_ft'] = '__label__' + train_df['label'].astype(str)
train_df[['text','label_ft']].iloc[:-5000].to_csv('train.csv', index=False, header=False, sep='\t')

import fasttext
model = fasttext.train_supervised('train.csv', lr=1.0, wordNgrams=2, 
                                  verbose=2, minCount=1, epoch=25, loss="hs")

val_pred = [model.predict(x)[0][0].split('__')[-1] for x in train_df.iloc[-5000:]['text']]
print(f1_score(train_df['label'].values[-5000:].astype(str), val_pred, average='macro'))
# 0.82
0.8308617580074691
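This reproduces the roughly 0.82 baseline quoted earlier (0.8308 on this run); the cells below then search over wordNgrams and the learning rate.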
# Grid search over fastText's wordNgrams and learning rate
for i in range(1,6):
    for j in [0.02,0.2,0.5,0.8,1.0]:
        model = fasttext.train_supervised('train.csv', lr=j, wordNgrams=i, 
                                  verbose=2, minCount=1, epoch=25, loss="hs")

        val_pred = [model.predict(x)[0][0].split('__')[-1] for x in train_df.iloc[-5000:]['text']]
        print(f1_score(train_df['label'].values[-5000:].astype(str), val_pred, average='macro'))
0.278056819712186
0.7739239121332525
0.785430861197419
0.7921402714743275
0.7899217471037233
0.13516783887356523
0.7630411643540896
0.8151372827202517
0.8224846573199776
0.820457631831283
0.07178635165344592
0.7329173971223006
0.8037045406088367
0.8135547664179137
0.823931413942575
0.6457710287941438
0.7908108214796642
0.8168156592482193
0.8197175843059298
0.024066902663277188
0.5753718837654132
0.7689673796987921
0.8035836766086517
0.815450097798096
values=[]
train_df1=train_df[:10000]
# Manual 10-fold CV over fastText hyperparameters: each inner pass holds out
# rows [k, k+1000) for validation and trains on the remaining 9000.
# Note: this positional split is not label-stratified, unlike the scheme
# recommended in the text above.
for i in range(2,6):
    for j in [0.2,0.5,0.8,1.0]:
        for k in range(0,10000,1000):
            train=pd.concat([train_df1[['text','label_ft']].iloc[:k],
                             train_df1[['text','label_ft']].iloc[k+1000:]])
            train.to_csv('train.csv', index=False, header=False, sep='\t')
            model = fasttext.train_supervised('train.csv', lr=j, wordNgrams=i, 
                                  verbose=2, minCount=1, epoch=25, loss="hs")
            values.extend([model.predict(x)[0][0].split('__')[-1] for x in train_df1.iloc[k:k+1000]['text']])
        print(f1_score(train_df1['label'].values.astype(str), values, average='macro'))
        values=[]
0.7540442074816985
0.8182706214401191
0.8243344129663084
0.8249359546168363
