Text Classification with scikit-learn

I couldn't find a unified benchmark in the text-mining papers, so I had to run the experiments myself. If any passing reader knows of a current benchmark on 20newsgroups or another good public dataset (ideally with results for all classes; using all features or a subset is fine), please leave a comment. Many thanks!

The post walks through the following steps:

1. Load the dataset
2. Extract features
3. Classify: Naive Bayes, KNN, SVM
4. Cluster

Note: the scikit-learn site has a reference example, but it looks a bit messy and contains a bug. In this post we go through it block by block.

Environment: Python 2.7 + scikit-learn

1. Load the dataset

Download the dataset from 20news-19997.tar.gz, extract it into the scikit_learn_data folder, and load it; see the code comments for details.

#first extract the 20 newsgroups dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups
#all categories
#newsgroup_train = fetch_20newsgroups(subset = 'train')
#a subset of the categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset = 'train', categories = categories)

Check that the dataset loaded correctly:

#print the category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))

Output:

['comp.graphics',
 'comp.os.ms-windows.misc',
 'comp.sys.ibm.pc.hardware',
 'comp.sys.mac.hardware',
 'comp.windows.x']

2. Extract features

The newsgroup_train we just loaded is a collection of raw documents. We need to extract features from them (term frequencies and the like), which is what fit_transform does:

#newsgroup_train.data holds the raw documents; we need to extract
#TF-IDF vectors in order to model the text data
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer
#vectorizer = TfidfVectorizer(sublinear_tf = True,
#                             max_df = 0.5,
#                             stop_words = 'english')
#However, a TF-IDF feature extractor gives the training and test sets
#different numbers of features (their documents have different vocabularies),
#so we use HashingVectorizer instead
vectorizer = HashingVectorizer(stop_words = 'english', non_negative = True,
                               n_features = 100)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
#fea_train is a feature matrix of shape [n_samples, n_features]
print 'Size of fea_train:' + repr(fea_train.shape)
#with all categories: 11314 documents, 130107 features
print 'The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100)

Output:

Size of fea_train:(2936, 100)
The average feature sparsity is 51.183%

Because we kept only 100 features (100 hashed dimensions), the matrix is not all that sparse; in fact about half of its entries are non-zero. With TfidfVectorizer you actually get tens of thousands of feature dimensions; on the full sample I counted more than 130,000, which yields a genuinely sparse matrix.
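If you do want the richer TF-IDF features, the usual fix for the train/test mismatch described above is to fit the vectorizer on the training set only and then call transform (not fit_transform) on the test set, so both share one vocabulary. A minimal sketch under that assumption (tfidf_vectorizer and the *_tfidf names are mine, not from the original code):

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

newsgroups_test = fetch_20newsgroups(subset = 'test', categories = categories)
tfidf_vectorizer = TfidfVectorizer(sublinear_tf = True, max_df = 0.5,
                                   stop_words = 'english')
#fit on the training documents only: this fixes the vocabulary
fea_train_tfidf = tfidf_vectorizer.fit_transform(newsgroup_train.data)
#reuse the same vocabulary for the test documents
fea_test_tfidf = tfidf_vectorizer.transform(newsgroups_test.data)
print 'Train: ' + repr(fea_train_tfidf.shape)
print 'Test:  ' + repr(fea_test_tfidf.shape)
#both matrices now have the same number of columns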

3. Classification

3.1 Multinomial Naive Bayes Classifier

See the code and comments:

######################################################
#Multinomial Naive Bayes Classifier
print '*************************\nNaive Bayes\n*************************'
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
newsgroups_test = fetch_20newsgroups(subset = 'test', categories = categories)
#HashingVectorizer is stateless, so fit_transform on the test set is safe here;
#with a TfidfVectorizer you would call transform instead
fea_test = vectorizer.fit_transform(newsgroups_test.data)
#create the Multinomial Naive Bayes classifier
clf = MultinomialNB(alpha = 0.01)
clf.fit(fea_train, newsgroup_train.target)
pred = clf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
#notice that f1_score is not equal to 2*precision*recall/(precision+recall),
#because m_precision and m_recall are already averaged over classes, while
#metrics.f1_score() computes a weighted average, i.e. it takes the size of
#each class into account

Note the last few comment lines above: why is f1 ≠ 2*precision*recall/(precision+recall)?

The helper function calculate_result computes precision, recall, and f1:

def calculate_result(actual, pred):
    m_precision = metrics.precision_score(actual, pred)
    m_recall = metrics.recall_score(actual, pred)
    print 'predict info:'
    print 'precision:{0:.3f}'.format(m_precision)
    print 'recall:{0:.3f}'.format(m_recall)
    print 'f1-score:{0:.3f}'.format(metrics.f1_score(actual, pred))
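To see concretely why the averaged scores break the harmonic-mean identity, here is a toy example with made-up labels (not from the 20newsgroups run). With average = 'weighted', each class's score is weighted by its size before averaging:

from sklearn import metrics
#made-up labels: class 0 has 4 samples, class 1 has 2
actual = [0, 0, 0, 0, 1, 1]
pred   = [0, 0, 1, 1, 1, 1]
p  = metrics.precision_score(actual, pred, average = 'weighted')  #0.833
r  = metrics.recall_score(actual, pred, average = 'weighted')     #0.667
f1 = metrics.f1_score(actual, pred, average = 'weighted')         #0.667
print 'harmonic mean of p and r: {0:.3f}'.format(2 * p * r / (p + r))  #0.741 != f1

The weighted f1 averages the per-class f1 scores, which is not the same as taking the harmonic mean of the already-averaged precision and recall.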

3.2 KNN:

######################################################
#KNN Classifier
from sklearn.neighbors import KNeighborsClassifier
print '*************************\nKNN\n*************************'
knnclf = KNeighborsClassifier()  #defaults to k = 5
knnclf.fit(fea_train, newsgroup_train.target)
pred = knnclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
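The main knob here is the number of neighbors, exposed as n_neighbors. A quick sketch for sweeping a few values (these lines are not from the original post, and the scores will depend on the features used):

for k in [1, 5, 10, 20]:
    knnclf = KNeighborsClassifier(n_neighbors = k)
    knnclf.fit(fea_train, newsgroup_train.target)
    print 'k = %d' % k
    calculate_result(newsgroups_test.target, knnclf.predict(fea_test))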

3.3 SVM:

######################################################
#SVM Classifier
from sklearn.svm import SVC
print '*************************\nSVM\n*************************'
svclf = SVC(kernel = 'linear')  #the default kernel is 'rbf'
svclf.fit(fea_train, newsgroup_train.target)
pred = svclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)

Output:

*************************
Naive Bayes
*************************
predict info:
precision:0.448
recall:0.448
f1-score:0.447
*************************
KNN
*************************
predict info:
precision:0.415
recall:0.405
f1-score:0.406
*************************
SVM
*************************
predict info:
precision:0.440
recall:0.438
f1-score:0.438
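With a linear kernel on sparse, high-dimensional text, sklearn's LinearSVC (liblinear-based) is usually much faster than SVC(kernel = 'linear'). A hedged sketch; the scores may differ slightly from SVC since the underlying optimization is different:

from sklearn.svm import LinearSVC
svclf2 = LinearSVC()  #liblinear solver; typically much faster on sparse text
svclf2.fit(fea_train, newsgroup_train.target)
calculate_result(newsgroups_test.target, svclf2.predict(fea_test))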

4. Clustering

######################################################
#KMeans Cluster
from sklearn.cluster import KMeans
print '*************************\nKMeans\n*************************'
km = KMeans(n_clusters = 5)
km.fit(fea_test)
calculate_result(newsgroups_test.target, km.labels_)

Output:

*************************
KMeans
*************************
predict info:
precision:0.177
recall:0.176
f1-score:0.171
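One caveat: KMeans assigns arbitrary cluster IDs, so precision/recall against the true labels understate the clustering quality unless the IDs happen to line up with the classes. Permutation-invariant scores such as the adjusted Rand index or normalized mutual information are the usual choice; a minimal sketch (these two metrics calls are standard sklearn, but this evaluation is my addition, not part of the original post):

from sklearn import metrics
#both scores are invariant to how the cluster IDs are numbered
print 'Adjusted Rand Index: {0:.3f}'.format(
    metrics.adjusted_rand_score(newsgroups_test.target, km.labels_))
print 'Normalized Mutual Info: {0:.3f}'.format(
    metrics.normalized_mutual_info_score(newsgroups_test.target, km.labels_))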

Download the full code for this post: here

More Python learning material will continue to be posted; please follow this blog and my Sina Weibo, Rachel Zhang.

Posted by abcjennifer on 2014-4-13 20:53:15 (original link)
