What is TF-IDF
TF-IDF (term frequency–inverse document frequency). When working with text, how do we turn words into vectors a model can process? TF-IDF is one solution to this problem. A word's importance is proportional to how often it appears in the document (TF) and inversely proportional to how often it appears across the corpus (IDF).
TF
TF: term frequency. TF(w) = (number of times word w appears in the document) / (total number of words in the document)
IDF
IDF: inverse document frequency. Some words appear frequently in a document yet carry little information, such as is, of, that; these words also appear very frequently across the corpus, so we can exploit that to lower their weight. IDF(w) = ln((total number of documents in the corpus) / (number of documents containing word w))
TF-IDF
Multiplying the two quantities above gives the combined score: TF-IDF = TF * IDF
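As a sanity check on the formulas above, here is a minimal hand-rolled sketch (using the natural log, as in the IDF definition; note that sklearn's TfidfVectorizer applies smoothing and normalization, so its numbers will differ):

```python
import math

def tf(w, doc):
    # TF(w) = occurrences of w in the document / total words in the document
    words = doc.split()
    return words.count(w) / len(words)

def idf(w, docs):
    # IDF(w) = ln(total documents / documents containing w)
    containing = sum(1 for d in docs if w in d.split())
    return math.log(len(docs) / containing)

docs = ['this is a sample', 'this is another example example']
print(tf('example', docs[1]) * idf('example', docs))  # TF = 2/5, IDF = ln(2/1)
```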
Numeric input
corpus = [
'This is the first document.',
'This is the second second document.',
'And the third one.',
'Is this the first document?',
]
x1 = ['1,2,2','4,5,6']
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
tfidf2 = TfidfVectorizer(analyzer='char')  # other options: stop_words=[...]
print(tfidf2)
result = tfidf2.fit_transform(x1)
word = tfidf2.get_feature_names_out()  # get_feature_names() in sklearn < 1.0
print(word)
print(result.toarray())
To compute TF-IDF over single characters, pass the parameter analyzer='char', e.g. vectorizer = CountVectorizer(analyzer='char'); the input strings then need no whitespace separation, since CountVectorizer will automatically split on individual characters and count their frequencies.
To support TF-IDF over both single characters and multi-character words at the same time, customize the token regex:
vectorizer = CountVectorizer(analyzer='word',token_pattern=u"(?u)\\b\\w+\\b")
Handling competition input formats
import pandas as pd
x = {'a':[1,1,1,1,2,2,2,2],'b':[4,5,6,7,8,9,10,11]}
x = pd.DataFrame(x)
x['b'] = x['b'].astype('str')
x
def get_tfidf(word):
    # join the grouped values into one comma-separated string
    # (avoid naming the separator `str`, which shadows the builtin)
    return ",".join(word)
x1 = x.groupby('a')['b'].apply(list).reset_index()  # one row per group key
x2 = dict(zip(x1['a'], x1['b']))                    # group key -> list of values
x1['c'] = x1['b'].apply(get_tfidf)                  # comma-joined string per group
x1
t1 = list(x1['c'])
print(t1)
tfidf2 = TfidfVectorizer(analyzer='word',token_pattern=u"(?u)\\b\\w+\\b")
print(tfidf2)
result = tfidf2.fit_transform(t1)
word = tfidf2.get_feature_names_out()  # get_feature_names() in sklearn < 1.0
print(word)
print(result.toarray())