Python's scikit-learn package provides an API for computing TF-IDF; these are my notes from looking into it.
1 Install the scikit-learn package
sudo pip install scikit-learn
2 Chinese word segmentation uses jieba; install the jieba package
sudo pip install jieba
3 Using jieba for word segmentation is straightforward; refer to the jieba documentation. The key lines are as follows (just a quick trial here, not tuning for quality):

import jieba.posseg as pseg
words = pseg.cut("对这句话进行分词")
for key in words:
    print(key.word, key.flag)

Output:
对 p
这 r
句 q
话 n
进行 v
分词 n
4 Computing TF-IDF term weights with scikit-learn relies mainly on two classes, CountVectorizer and TfidfTransformer; see the scikit-learn documentation for details.
A simple code example follows:
# coding:utf-8
__author__ = "liuxuejiang"
import jieba
import jieba.posseg as pseg
import os
import sys
from sklearn import feature_extraction
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer