Using fastText
Installing fastText
Installation link
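The original installation link is not reproduced here; one common way to install the Python bindings (assuming a standard pip environment) is:

```shell
pip install fasttext
```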
Task code:
import pandas as pd
from sklearn.metrics import f1_score
import fasttext

# Convert the data to the format fastText expects: text<TAB>__label__<id>
train_df = pd.read_csv('./data/train_set.csv', sep='\t', nrows=15000)
train_df['label_ft'] = '__label__' + train_df['label'].astype(str)

# Use all but the last 5000 rows for training
train_df[['text', 'label_ft']].iloc[:-5000].to_csv('train.csv', index=False, header=False, sep='\t')

model = fasttext.train_supervised('train.csv', lr=1.0, wordNgrams=2,
                                  verbose=2, minCount=1, epoch=25, loss="hs")

# Validate on the held-out last 5000 rows
val_pred = [model.predict(x)[0][0].split('__')[-1] for x in train_df.iloc[-5000:]['text']]
print(f1_score(train_df['label'].values[-5000:].astype(str), val_pred, average='macro'))
By default, __label__ is the prefix fastText uses to mark labels.
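For example, a minimal sketch of what one line of the training file looks like (the tab-separated layout matches the to_csv call above; the label id is hypothetical):

```python
label = 3                           # hypothetical class id
text = "sample tokenized text"
line = f"{text}\t__label__{label}"  # the text, a tab, then the prefixed label
print(line)
```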
Training a model
import fasttext

trainDataFile = 'train.txt'
classifier = fasttext.train_supervised(
    input=trainDataFile,
    label='__label__',
    dim=256,
    epoch=50,
    lr=1.0,
    lrUpdateRate=50,
    minCount=3,
    loss='softmax',
    wordNgrams=2,
    bucket=1000000)
classifier.save_model("Model.bin")
Predicting with fastText
testDataFile = 'test.txt'
classifier = fasttext.load_model('Model.bin')
result = classifier.test(testDataFile)
print('Number of test examples:', result[0])
print('Precision@1 on the test set:', result[1])
print('Recall@1 on the test set:', result[2])
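Since test() reports only precision and recall at k=1, an F1 score can be derived from those two values with a small helper (this helper is not part of the fastText API; the numbers below are illustrative):

```python
def f1(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# e.g. with a hypothetical precision of 0.9 and recall of 0.8:
print(round(f1(0.9, 0.8), 4))  # 0.8471
```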
Bag of tricks for efficient text classification
1) Hierarchical softmax: when there are many classes, fastText does not compute the plain softmax with cross-entropy; it uses hierarchical softmax instead, which greatly speeds up both training and prediction.
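A rough sketch of why this helps (the class count below is illustrative, not from the text): a flat softmax scores every class for every example, while hierarchical softmax only makes a chain of binary decisions down a tree of depth about log2(k):

```python
import math

num_labels = 1000                           # hypothetical number of classes
flat_cost = num_labels                      # flat softmax: one score per class
hs_cost = math.ceil(math.log2(num_labels))  # tree depth: binary decisions per example
print(flat_cost, hs_cost)                   # 1000 vs 10
```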
2) n-grams: fastText represents a word with character-level n-grams. For the word "apple", assuming n = 3, its trigrams are "<ap", "app", "ppl", "ple", "le>",
where "<" marks the beginning of the word and ">" marks the end. We can then represent "apple" with these trigrams, and, going further, represent the word vector of "apple" as the sum of the vectors of these 5 trigrams.
This brings two benefits:
- Word vectors for rare words are better, because their n-grams are shared with other words.
- Vectors can still be built for words outside the training vocabulary, by summing their character-level n-gram vectors.
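The trigram construction described above can be sketched in a few lines (a simplified illustration: real fastText additionally hashes the n-grams into buckets and keeps the whole word itself as a feature):

```python
def char_ngrams(word, n=3):
    # fastText wraps the word in boundary markers before slicing
    w = f"<{word}>"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

print(char_ngrams("apple"))  # ['<ap', 'app', 'ppl', 'ple', 'le>']
```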
Parameter reference
The following arguments are mandatory:
-input training file path
-output output file path
The following arguments are optional:
-verbose verbosity level [2]
The following arguments for the dictionary are optional:
-minCount minimal number of word occurrences [1]
-minCountLabel minimal number of label occurrences [0]
-wordNgrams max length of word ngram [1]
-bucket number of buckets [2000000]
-minn min length of char ngram [0]
-maxn max length of char ngram [0]
-t sampling threshold [0.0001]
-label labels prefix [__label__]
The following arguments for training are optional:
-lr learning rate [0.1]
-lrUpdateRate change the rate of updates for the learning rate [100]
-dim size of word vectors [100]
-ws size of the context window [5]
-epoch number of epochs [5]
-neg number of negatives sampled [5]
-loss loss function {ns, hs, softmax} [softmax]
-thread number of threads [12]
-pretrainedVectors pretrained word vectors for supervised learning []
-saveOutput whether output params should be saved [0]
The following arguments for quantization are optional:
-cutoff number of words and ngrams to retain [0]
-retrain finetune embeddings if a cutoff is applied [0]
-qnorm quantizing the norm separately [0]
-qout quantizing the classifier [0]
-dsub size of each sub-vector [2]