Text Representation Methods
1. One-hot
Each character/word is assigned an index in the vocabulary and represented as a vector whose length equals the vocabulary size, with a 1 at that index and 0 everywhere else. For example, with an 11-word vocabulary:
我 (I):    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
爱 (love): [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
…
海 (sea):  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
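The mapping above can be sketched in a few lines of plain Python; the three-word vocabulary here is an illustrative assumption, not the document's actual data:

```python
# A minimal one-hot sketch: map each word to an index, then to a sparse 0/1 vector
words = ['我', '爱', '海']  # illustrative vocabulary
word2idx = {w: i for i, w in enumerate(words)}

def one_hot(word):
    vec = [0] * len(words)
    vec[word2idx[word]] = 1
    return vec

print(one_hot('我'))  # [1, 0, 0]
print(one_hot('海'))  # [0, 0, 1]
```

Note that one-hot vectors grow with the vocabulary and carry no notion of similarity between words, which motivates the denser representations below.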
2.Bag of Words
Each character/word in a document is represented by its occurrence count.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    'This is the first document.',
    'This document is the second document.',
    'And this is the third one.',
    'Is this the first document?',
]
vectorizer = CountVectorizer()
vectorizer.fit_transform(corpus).toarray()
# array([[0, 1, 1, 1, 0, 0, 1, 0, 1],
#        [0, 2, 0, 1, 0, 1, 1, 0, 1],
#        [1, 0, 0, 1, 1, 0, 1, 1, 1],
#        [0, 1, 1, 1, 0, 0, 1, 0, 1]], dtype=int64)
3. N-gram
N-gram is similar to Count Vectors, except that adjacent words are also combined into new tokens, which are then counted alongside the single words.
4. TF-IDF
A TF-IDF score has two parts: the term frequency (TF) and the inverse document frequency (IDF).
The inverse document frequency is the logarithm of the total number of documents in the corpus divided by the number of documents containing the term.
TF(t) = (number of times term t appears in the current document) / (total number of terms in the current document)
IDF(t) = log_e(total number of documents / number of documents containing term t)
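These two formulas can be computed directly; the tokenized two-document corpus below is an illustrative assumption, and note that sklearn's TfidfVectorizer uses a smoothed variant of IDF rather than this raw definition:

```python
import math

# Illustrative pre-tokenized corpus
docs = [
    ['this', 'is', 'the', 'first', 'document'],
    ['this', 'document', 'is', 'the', 'second', 'document'],
]

def tf(term, doc):
    # term frequency within one document
    return doc.count(term) / len(doc)

def idf(term, docs):
    # log of (total documents / documents containing the term)
    n_containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / n_containing)

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

# 'second' occurs in only one of the two documents, so its IDF is log(2) > 0
print(tf_idf('second', docs[1], docs))
# 'this' occurs in every document, so its IDF (and hence TF-IDF) is 0
print(tf_idf('this', docs[0], docs))  # 0.0
```

The zero score for 'this' is the point of IDF: words that appear everywhere carry no discriminative information.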
Text Classification with Machine Learning
Count Vectors + RidgeClassifier
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score

# Read the first 15,000 rows of the tab-separated training set
train_df = pd.read_csv('train_set.csv', sep='\t', nrows=15000)

vectorizer = CountVectorizer(max_features=3000)
train_test = vectorizer.fit_transform(train_df['text'])

# Train on the first 10,000 samples, validate on the remaining 5,000
clf = RidgeClassifier()
clf.fit(train_test[:10000], train_df['label'].values[:10000])

val_pred = clf.predict(train_test[10000:])
print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))
# 0.65441877581244
# Increasing max_features to 5000 gives a small improvement
vectorizer = CountVectorizer(max_features=5000)
train_test = vectorizer.fit_transform(train_df['text'])

clf = RidgeClassifier()
clf.fit(train_test[:10000], train_df['label'].values[:10000])

val_pred = clf.predict(train_test[10000:])
print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))
# 0.6548047305167468
TF-IDF + RidgeClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# TF-IDF features over unigrams through trigrams
tfidf = TfidfVectorizer(ngram_range=(1, 3), max_features=3000)
train_test = tfidf.fit_transform(train_df['text'])

clf = RidgeClassifier()
clf.fit(train_test[:10000], train_df['label'].values[:10000])

val_pred = clf.predict(train_test[10000:])
print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))
# 0.8719372173702