In [1]: from sklearn.feature_extraction.text import TfidfVectorizer
In [2]: cv = TfidfVectorizer(binary=False, decode_error='ignore', stop_words='english')
In [3]: vec = cv.fit_transform(['hello my friend', 'i love cooking and singing', 'i am studying machine learning'])
In [4]: res = vec.toarray()
In [5]: res
Out[5]:
array([[0.        , 0.70710678, 0.70710678, 0.        , 0.        ,
        0.        , 0.        , 0.        ],
       [0.57735027, 0.        , 0.        , 0.        , 0.57735027,
        0.        , 0.57735027, 0.        ],
       [0.        , 0.        , 0.        , 0.57735027, 0.        ,
        0.57735027, 0.        , 0.57735027]])
As the output shows, each row of the matrix corresponds to a sentence i and each column to a word j; the element a_ij is the tf-idf value of word j in sentence i. This is somewhat similar to one-hot encoding, but the matrix values are not one-hot codes, which can only indicate whether word j appears in sentence i (or how many times it appears). Instead they are tf-idf scores, which carry more information.