Term Frequency (TF)
Term frequency is the number of times a word appears in a sentence divided by the total number of words in that sentence, i.e., how frequently the word occurs in the sentence. Compared with a raw occurrence count, term frequency gives a more objective measure of how much a word contributes to the meaning of a sentence: the higher the term frequency, the greater the contribution. L1-normalizing each row of the bag-of-words matrix yields the term frequencies.
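A minimal hand-rolled sketch of the idea (plain Python, no sklearn; the sentence here is illustrative): dividing each word's count by the sentence's total word count is exactly what L1-normalizing a row of the bag-of-words matrix does.
# Hand-rolled term frequency for a single sentence (illustrative).
sentence = 'the black dog is in the black room'
words = sentence.split()
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1
# Divide each count by the total word count of the sentence.
tf = {w: c / len(words) for w, c in counts.items()}
print(tf)  # 'black': 2/8 = 0.25, 'dog': 1/8 = 0.125, ...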
Example: normalizing the bag-of-words matrix
import nltk.tokenize as tk
import sklearn.feature_extraction.text as ft
import sklearn.preprocessing as sp
doc = 'The brown dog is running. ' \
      'The black dog is in the black room. ' \
      'Running in the room is forbidden.'
print(doc)
print('-' * 15)
# Split the document into sentences (requires the NLTK 'punkt'
# tokenizer models: nltk.download('punkt')).
sentences = tk.sent_tokenize(doc)
print(sentences)
print('-' * 15)
# Bag-of-words matrix: one row per sentence, one column per word.
cv = ft.CountVectorizer()
bow = cv.fit_transform(sentences).toarray()
print(bow)
print('-' * 15)
# get_feature_names() was removed in scikit-learn 1.2;
# use get_feature_names_out() instead.
words = cv.get_feature_names_out()
print(words)
print('-' * 15)
# L1-normalize each row so it sums to 1, turning counts into term frequencies.
tf = sp.normalize(bow, norm='l1')
print(tf)
Document Frequency (DF)
The number of document samples that contain a given word divided by the total number of document samples. A word that appears in most documents (a high DF) is common and therefore carries little discriminative information about any one document.
Inverse Document Frequency (IDF)
The total number of samples divided by the number of samples that contain a given word. In practice the logarithm of this ratio is usually taken, so that the weight of rare words grows gently rather than explosively.
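A minimal sketch of both definitions (plain Python; the three sentences reuse the case above, and the logarithmic IDF here is the textbook form rather than scikit-learn's smoothed variant):
import math
sentences = ['the brown dog is running',
             'the black dog is in the black room',
             'running in the room is forbidden']
docs = [set(s.split()) for s in sentences]
n = len(docs)
for word in ('dog', 'forbidden'):
    containing = sum(word in d for d in docs)
    df = containing / n               # document frequency
    idf = math.log(n / containing)    # textbook inverse document frequency
    print(word, 'DF =', round(df, 3), 'IDF =', round(idf, 3))
Here 'dog' appears in two of the three sentences, so its IDF is low; 'forbidden' appears in only one, so its IDF is high.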
Term Frequency-Inverse Document Frequency (TF-IDF)
Each element of the term-frequency matrix is multiplied by the corresponding word's inverse document frequency. The larger the resulting value, the greater that word's contribution to the sample's meaning; a learning model can then be built according to each word's contribution.
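The multiplication itself can be sketched with NumPy (the numbers below are illustrative, not computed from the case; note that scikit-learn's TfidfTransformer additionally smooths the IDF and L2-normalizes each row, so its exact values will differ):
import numpy as np
tf = np.array([[0.2, 0.0, 0.2],       # term-frequency matrix, rows are samples
               [0.0, 0.25, 0.125]])   # and columns are words (illustrative)
idf = np.array([1.1, 0.4, 0.4])       # one IDF value per word (illustrative)
tfidf = tf * idf                      # scale every column by its word's IDF
print(tfidf)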
Relevant APIs for obtaining the term frequency-inverse document frequency (TF-IDF) matrix:
# Build the bag-of-words model
cv = ft.CountVectorizer()
bow = cv.fit_transform(sentences).toarray()
# Get a TF-IDF transformer and compute the TF-IDF matrix
tt = ft.TfidfTransformer()
tfidf = tt.fit_transform(bow).toarray()
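As a side note, scikit-learn also provides ft.TfidfVectorizer, which folds both steps into one object; a minimal sketch assuming the same sentences list as above:
# One-step alternative: tokenization, counting and IDF weighting combined.
tv = ft.TfidfVectorizer()
tfidf = tv.fit_transform(sentences).toarray()
Because TfidfVectorizer accepts raw text directly, no separate CountVectorizer is needed.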
Example: obtaining the TF-IDF matrix:
import nltk.tokenize as tk
import sklearn.feature_extraction.text as ft
doc = 'The brown dog is running. ' \
      'The black dog is in the black room. ' \
      'Running in the room is forbidden.'
print(doc)
print('-' * 15)
sentences = tk.sent_tokenize(doc)
print(sentences)
print('-' * 15)
# Bag-of-words matrix.
cv = ft.CountVectorizer()
bow = cv.fit_transform(sentences).toarray()
print(bow)
print('-' * 15)
# get_feature_names() was removed in scikit-learn 1.2;
# use get_feature_names_out() instead.
words = cv.get_feature_names_out()
print(words)
print('-' * 15)
# Weight the counts by IDF to obtain the TF-IDF matrix.
tt = ft.TfidfTransformer()
tfidf = tt.fit_transform(bow).toarray()
print(tfidf)
print('-' * 15)
Text Classification (Topic Identification)
Train a topic-identification model on a given text dataset, then test the model's accuracy on a custom test set.
Example:
import sklearn.datasets as sd
import sklearn.feature_extraction.text as ft
import sklearn.naive_bayes as nb
train = sd.load_files('../data/20news', encoding='latin1',
                      shuffle=True, random_state=7)
# The folder names under 20news are the topic category names of the
# files they contain.
# train.data holds the string content of each file.
# train.target holds the index of each file's parent directory
# (i.e., its topic category).
train_data = train.data
train_y = train.target
categories = train.target_names
# Vectorize the training texts: word counts first, then TF-IDF weights.
cv = ft.CountVectorizer()
train_bow = cv.fit_transform(train_data)
tt = ft.TfidfTransformer()
train_x = tt.fit_transform(train_bow)
# Multinomial naive Bayes suits discrete features such as word counts.
model = nb.MultinomialNB()
model.fit(train_x, train_y)
test_data = [
    'The curveballs of right handed pitchers tend to curve to the left',
    'Caesar cipher is an ancient form of encryption',
    'This two-wheeler is really good on slippery roads']
# Reuse the fitted vectorizers (transform, not fit_transform) on test data.
test_bow = cv.transform(test_data)
test_x = tt.transform(test_bow)
pred_test_y = model.predict(test_x)
for sentence, index in zip(test_data, pred_test_y):
    print(sentence, '->', categories[index])
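To quantify accuracy instead of eyeballing three predictions, one option is cross-validation on the training matrix; a sketch assuming model, train_x and train_y from the example above are still in scope:
import sklearn.model_selection as ms
# 5-fold cross-validated accuracy of the naive Bayes model.
scores = ms.cross_val_score(model, train_x, train_y,
                            cv=5, scoring='accuracy')
print('mean accuracy:', scores.mean())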