Distance between documents 0 and 1: [[ 2.]]
Distance between documents 0 and 2: [[ 2.44948974]]
Distance between documents 1 and 2: [[ 2.44948974]]
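These pairwise Euclidean distances can be reproduced with a minimal sketch along the following lines (an assumption, since the code that produced this output belongs to the preceding section); using stop_words='english' yields exactly the values above:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import euclidean_distances

corpus = ['UNC played Duke in basketball', 'Duke lost the basketball game', 'I ate a sandwich']
# Stop-word filtering is assumed here because it reproduces the distances above.
vectorizer = CountVectorizer(stop_words='english')
counts = vectorizer.fit_transform(corpus).todense()
for i, j in [(0, 1), (0, 2), (1, 2)]:
    print('Distance between documents %d and %d: %s' % (i, j, euclidean_distances(counts[i], counts[j])))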
## Stop-word filtering
The CountVectorizer class can filter stop words through its stop_words parameter; passing 'english' uses the built-in list of common English stop words.
from sklearn.feature_extraction.text import CountVectorizer
corpus = ['UNC played Duke in basketball', 'Duke lost the basketball game', 'I ate a sandwich']
vectorizer = CountVectorizer(stop_words='english')
print vectorizer.fit_transform(corpus).todense()
print vectorizer.vocabulary_
Output:
[[0 1 1 0 0 1 0 1]
[0 1 1 1 1 0 0 0]
[1 0 0 0 0 0 1 0]]
{u'duke': 2, u'basketball': 1, u'lost': 4, u'played': 5, u'game': 3, u'sandwich': 6, u'unc': 7, u'ate': 0}
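Besides the built-in English list, stop_words also accepts an explicit list of words; a minimal sketch (the word list here is only illustrative, not from the original):
from sklearn.feature_extraction.text import CountVectorizer

corpus = ['UNC played Duke in basketball', 'Duke lost the basketball game', 'I ate a sandwich']
# A custom stop-word list instead of the built-in 'english' list.
vectorizer = CountVectorizer(stop_words=['the', 'a', 'an', 'in', 'i'])
print vectorizer.fit_transform(corpus).todense()
print vectorizer.vocabulary_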
## Stemming and lemmatization
from sklearn.feature_extraction.text import CountVectorizer
corpus = ['He ate the sandwiches', 'Every sandwich was eaten by him']
vectorizer = CountVectorizer(binary=True, stop_words='english')
print vectorizer.fit_transform(corpus).todense()
print vectorizer.vocabulary_
Output:
[[1 0 0 1]
[0 1 1 0]]
{u'sandwich': 2, u'ate': 0, u'sandwiches': 3, u'eaten': 1}
Although 'ate'/'eaten' and 'sandwich'/'sandwiches' carry nearly the same meanings, the vectorizer above encodes them as four unrelated features; stemming and lemmatization reduce such inflected forms to a common base form.
### Let's analyze the lemmatization of the word gathering:
corpus = ['I am gathering ingredients for the sandwich.', 'There were many wizards at the gathering.']
import nltk
nltk.download()
from nltk import word_tokenize, pos_tag
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
wordnet_tags = ['n', 'v']
corpus = ['He ate the sandwiches', 'Every sandwich was eaten by him']
stemmer = PorterStemmer()
print('Stemmed:', [[stemmer.stem(token) for token in word_tokenize(document)] for document in corpus])
Output:
('Stemmed:', [[u'He', u'ate', u'the', u'sandwich'], [u'Everi', u'sandwich', u'wa', u'eaten', u'by', u'him']])
def lemmatize(token, tag):
    if tag[0].lower() in ['n', 'v']:
        return lemmatizer.lemmatize(token, tag[0].lower())
    return token

lemmatizer = WordNetLemmatizer()
tagged_corpus = [pos_tag(word_tokenize(document)) for document in corpus]
print('Lemmatized:', [[lemmatize(token, tag) for token, tag in document] for document in tagged_corpus])
Output:
('Lemmatized:', [['He', u'eat', 'the', u'sandwich'], ['Every', 'sandwich', u'be', u'eat', 'by', 'him']])
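To see why the part-of-speech tag matters, here is a minimal sketch that lemmatizes 'gathering' from the first corpus above, once as a verb and once as a noun:
from nltk.stem.wordnet import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
# As a verb, 'gathering' reduces to its base form; as a noun it is already a lemma.
print lemmatizer.lemmatize('gathering', 'v')  # gather
print lemmatizer.lemmatize('gathering', 'n')  # gathering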
## Extending bag-of-words with TF-IDF weights
from sklearn.feature_extraction.text import CountVectorizer
corpus = ['The dog ate a sandwich, the wizard transfigured a sandwich, and I ate a sandwich']
vectorizer = CountVectorizer(stop_words='english')
print vectorizer.fit_transform(corpus).todense()
print vectorizer.vocabulary_
Output:
[[2 1 3 1 1]]
{u'sandwich': 2, u'wizard': 4, u'dog': 1, u'transfigured': 3, u'ate': 0}
### TF-IDF
TfidfVectorizer combines the counting done by CountVectorizer with TF-IDF weighting:
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ['The dog ate a sandwich and I ate a sandwich', 'The wizard transfigured a sandwich']
vectorizer = TfidfVectorizer(stop_words='english')
print vectorizer.fit_transform(corpus).todense()
print vectorizer.vocabulary_
Output:
[[ 0.75458397 0.37729199 0.53689271 0. 0. ]
[ 0. 0. 0.44943642 0.6316672 0.6316672 ]]
{u'sandwich': 2, u'wizard': 4, u'dog': 1, u'transfigured': 3, u'ate': 0}
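The first row can be traced back by hand, assuming TfidfVectorizer's default settings (smoothed inverse document frequency and L2 normalization); a minimal sketch:
import numpy as np

# Term frequencies in the first document after stop-word removal:
# 'ate' appears twice, 'dog' once, 'sandwich' twice (vocabulary order: ate, dog, sandwich, transfigured, wizard).
tf = np.array([2.0, 1.0, 2.0, 0.0, 0.0])
# Document frequencies across the two documents.
df = np.array([1.0, 1.0, 2.0, 1.0, 1.0])
n_docs = 2
# Smoothed inverse document frequency, matching the vectorizer's defaults.
idf = np.log((1.0 + n_docs) / (1.0 + df)) + 1.0
tfidf = tf * idf
# L2 normalization reproduces the first row printed above.
print tfidf / np.linalg.norm(tfidf)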
## Feature vectorizing with the hashing trick
from sklearn.feature_extraction.text import HashingVectorizer
corpus = ['the', 'ate', 'bacon', 'cat']
vectorizer = HashingVectorizer(n_features=6)
print(vectorizer.transform(corpus).todense())
Output:
[[-1. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. -1. 0.]
[ 0. 1. 0. 0. 0. 0.]]
n_features is set to 6 only for this demonstration. Note also that some of the term frequencies are negative. Because hash collisions can occur, HashingVectorizer uses a signed hash function: the sign of a token's hash value becomes the sign of its count in the feature vector. For example, if the token cats appears twice in a document and is hashed to -3, the fourth element of the document's feature vector is decremented by two; if the token dogs also appears twice and is hashed to 3, the fourth element is incremented by two. The two colliding tokens then cancel each other out rather than compounding the error.
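The sign mechanism can be illustrated with a toy sketch (the hash values below are the hypothetical ones from the cats/dogs example in the text, not sklearn's actual hash function):
# Toy illustration of the signed hashing trick described above: the bucket is
# abs(hash) mod n_features, and the sign of the hash decides whether that
# bucket is incremented or decremented, so colliding tokens tend to cancel out.
def toy_signed_hash_vector(tokens, hash_fn, n_features=6):
    vector = [0] * n_features
    for token in tokens:
        h = hash_fn(token)
        vector[abs(h) % n_features] += 1 if h >= 0 else -1
    return vector

# Hypothetical hash values mirroring the cats/dogs example.
fake_hash = {'cats': -3, 'dogs': 3}.get
print toy_signed_hash_vector(['cats', 'cats', 'dogs', 'dogs'], fake_hash)  # [0, 0, 0, 0, 0, 0]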
## Feature extraction from images
### Extracting features from pixel intensities
scikit-learn's digits dataset contains more than 1,700 grayscale images of the handwritten digits 0 through 9. Each image is 8x8 pixels, and each pixel takes an intensity value from 0 to 16, where 0 is white and 16 is black, as shown below:
%matplotlib inline
from sklearn import datasets
import matplotlib.pyplot as plt

digits = datasets.load_digits()
print 'Digit:', digits.target[0]
print digits.images[0]
plt.imshow(digits.images[0], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
Output:
Digit: 0
[[ 0. 0. 5. 13. 9. 1. 0. 0.]
[ 0. 0. 13. 15. 10. 15. 5. 0.]
[ 0. 3. 15. 2. 0. 11. 8. 0.]
[ 0. 4. 12. 0. 0. 8. 8. 0.]
[ 0. 5. 8. 0. 0. 9. 8. 0.]
[ 0. 4. 11. 0. 1. 12. 7. 0.]
[ 0. 2. 14. 5. 10. 12. 0. 0.]
[ 0. 0. 6. 13. 10. 0. 0. 0.]]
digits = datasets.load_digits()
print('Feature vector:\n', digits.images[0].reshape(-1, 64))
Output:
('Feature vector:\n', array([[ 0., 0., 5., 13., 9., 1., 0., 0., 0., 0., 13.,
15., 10., 15., 5., 0., 0., 3., 15., 2., 0., 11.,
8., 0., 0., 4., 12., 0., 0., 8., 8., 0., 0.,
5., 8., 0., 0., 9., 8., 0., 0., 4., 11., 0.,
1., 12., 7., 0., 0., 2., 14., 5., 10., 12., 0.,
0., 0., 0., 6., 13., 10., 0., 0., 0.]]))
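The reshaped 8x8 image is the same 64-dimensional vector that scikit-learn already exposes as digits.data; a quick sanity-check sketch:
import numpy as np
from sklearn import datasets

digits = datasets.load_digits()
# The flattened 8x8 image should equal the precomputed 64-d feature vector.
print np.array_equal(digits.images[0].reshape(-1, 64)[0], digits.data[0])  # True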
### Extracting points of interest as features
%matplotlib inline
import numpy as np
from skimage.feature import corner_harris, corner_peaks
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
import skimage.io as io
from skimage.exposure import equalize_hist

def show_corners(corners, image):
    # Plot the image with the detected corners overlaid as red circles.
    fig = plt.figure()
    plt.gray()
    plt.imshow(image)
    y_corner, x_corner = zip(*corners)
    plt.plot(x_corner, y_corner, 'or')
    plt.xlim(0, image.shape[1])
    plt.ylim(image.shape[0], 0)
    fig.set_size_inches(np.array(fig.get_size_inches()) * 1.5)
    plt.show()
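show_corners is not called above; a hypothetical usage sketch (the image file name is a placeholder, not from the original) would load an image, convert it to a histogram-equalized grayscale array, detect Harris corners, and plot them:
# Placeholder path: replace with any local image file.
mandrill = io.imread('mandrill.png')
mandrill = equalize_hist(rgb2gray(mandrill))
# Harris corner response, reduced to local peaks at least 2 pixels apart.
corners = corner_peaks(corner_harris(mandrill), min_distance=2)
show_corners(corners, mandrill)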