Baseline
import pandas as pd
from sklearn.metrics import f1_score
import fasttext

# Convert to the format fastText expects: text plus a '__label__<class>' column
train_df = pd.read_csv('./data/train_set.csv', sep='\t', nrows=15000)
train_df['label_ft'] = '__label__' + train_df['label'].astype(str)

# First 10,000 rows as the training set, last 5,000 as the validation set
train_df[['text', 'label_ft']].iloc[:-5000].to_csv('train.csv', index=None, header=None, sep='\t')
train_df[['text', 'label_ft']].iloc[-5000:].to_csv('test.csv', index=None, header=None, sep='\t')

model = fasttext.train_supervised('train.csv', lr=1.0, wordNgrams=2,
                                  verbose=2, minCount=1, epoch=25, loss="hs")
# Automatic hyperparameter search -- too slow here
# model = fasttext.train_supervised('train.csv', autotuneValidationFile='test.csv')
# predict() returns (labels, probabilities); split the label on '__'
# and take the last piece to recover the original class id
val_pred = [model.predict(x)[0][0].split('__')[-1] for x in train_df.iloc[-5000:]['text']]
print(f1_score(train_df['label'].values[-5000:].astype(str), val_pred, average='macro'))
f1_score: 0.8249394268327849

With this relatively small dataset the score is 0.82. As the training set grows, FastText's accuracy keeps improving: with 50,000 training samples the validation score reaches roughly 0.89-0.90.
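A note on the metric: average='macro' computes F1 per class and then takes the unweighted mean, so rare classes count as much as common ones. A tiny illustration with toy labels (not from the dataset):

```python
from sklearn.metrics import f1_score

# Three classes; class '0' and '1' each have one mistake, class '2' is perfect
y_true = ['0', '0', '1', '2']
y_pred = ['0', '1', '1', '2']

# Per-class F1: '0' -> 0.667, '1' -> 0.667, '2' -> 1.0; macro = their mean
macro = f1_score(y_true, y_pred, average='macro')
print(round(macro, 4))  # → 0.7778
```

With an imbalanced label distribution like this competition's, macro F1 penalizes a model that ignores the small classes, which is why it is the evaluation metric here.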
10-fold cross-validation

Using sklearn's KFold:
from sklearn.model_selection import KFold

kf = KFold(n_splits=10)
kf.get_n_splits(train_df)
for train_index, test_index in kf.split(train_df):
    print("TRAIN:", train_index, "TEST:", test_index)
Example output with 150 samples:
TRAIN: [ 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68
69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86
87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104
105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122
123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140
141 142 143 144 145 146 147 148 149] TEST: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14]
TRAIN: [ 0 1 2 3 4 5 6
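The indices above are then used to train and score one model per fold, averaging the 10 validation scores. A minimal sketch of that loop; sklearn's DummyClassifier and the toy X/y arrays are placeholders (fastText trains from a file, so in practice each fold's rows would first be written out with to_csv and passed to fasttext.train_supervised):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score
from sklearn.dummy import DummyClassifier  # stand-in for the fastText model

# Toy data: 150 samples, 3 classes (placeholders for the real text features)
X = np.arange(150).reshape(-1, 1)
y = np.array([i % 3 for i in range(150)])

kf = KFold(n_splits=10)
scores = []
for train_index, test_index in kf.split(X):
    # With fastText: write train_df.iloc[train_index] to a fold-specific
    # file, train on it, then predict on the rows in test_index
    clf = DummyClassifier(strategy='most_frequent')
    clf.fit(X[train_index], y[train_index])
    pred = clf.predict(X[test_index])
    scores.append(f1_score(y[test_index], pred, average='macro'))

print(f'mean macro F1 over {len(scores)} folds: {np.mean(scores):.4f}')
```

The mean of the fold scores gives a more stable estimate of generalization than the single 10,000/5,000 split used above, at the cost of training 10 models.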