Yes, the Chapter 6 tutorial is meant to give students foundational knowledge; from there, students should build on it by exploring what is and is not available in NLTK. So let's tackle the problems one at a time.
First, getting the 'pos'/'neg' documents through the directory is most likely the right thing to do, since that is how the corpus is organized.
from nltk.corpus import movie_reviews as mr
from collections import defaultdict
documents = defaultdict(list)
for i in mr.fileids():
    documents[i.split('/')[0]].append(i)  # key on the 'pos'/'neg' directory part
print(documents['pos'][:10])  # first ten pos reviews
print(documents['neg'][:10])  # first ten neg reviews
[out]:
['pos/cv000_29590.txt', 'pos/cv001_18431.txt', 'pos/cv002_15918.txt', 'pos/cv003_11664.txt', 'pos/cv004_11636.txt', 'pos/cv005_29443.txt', 'pos/cv006_15448.txt', 'pos/cv007_4968.txt', 'pos/cv008_29435.txt', 'pos/cv009_29592.txt']
['neg/cv000_29416.txt', 'neg/cv001_19502.txt', 'neg/cv002_17424.txt', 'neg/cv003_12683.txt', 'neg/cv004_12641.txt', 'neg/cv005_29357.txt', 'neg/cv006_17022.txt', 'neg/cv007_4992.txt', 'neg/cv008_29326.txt', 'neg/cv009_29417.txt']
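The grouping above relies only on the pos/... and neg/... naming convention, so the logic can be sanity-checked on plain strings without loading the corpus at all (the fileids below are made-up stand-ins):

```python
from collections import defaultdict

# Hypothetical fileids mimicking the movie_reviews layout.
fileids = ['pos/cv000_29590.txt', 'neg/cv000_29416.txt', 'pos/cv001_18431.txt']

documents = defaultdict(list)
for fid in fileids:
    documents[fid.split('/')[0]].append(fid)  # key on the directory part
```

As an aside, NLTK's categorized corpus readers also accept mr.fileids(categories='pos') directly, which avoids the string parsing entirely.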
Alternatively, I like a list of tuples where the first element is the list of words in the .txt file and the second is the category, removing stopwords and punctuation at the same time:
from nltk.corpus import movie_reviews as mr
import string
from nltk.corpus import stopwords
stop = stopwords.words('english')
documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
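The same filter can be exercised on a toy token list; here the stopword list is a tiny stand-in for stopwords.words('english'), so the NLTK data does not have to be present:

```python
import string

stop = ['the', 'a', 'and', 'of']  # stand-in for stopwords.words('english')
tokens = ['The', 'plot', ',', 'of', 'the', 'film', 'was', 'thin', '.']

# Same comprehension as in the answer: drop stopwords and punctuation,
# comparing lowercased forms.
filtered = [w for w in tokens
            if w.lower() not in stop and w.lower() not in string.punctuation]
```

Note that 'was' survives here only because the stand-in list is tiny; the real English stopword list would drop it as well.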
Next is the error with FreqDist(for w in movie_reviews.words() ...). There is nothing wrong with your code, but you should try to use namespaces (see http://en.wikipedia.org/wiki/Namespace#Use_in_common_languages). Code as follows:
from nltk.corpus import movie_reviews as mr
from nltk.probability import FreqDist
from nltk.corpus import stopwords
import string
stop = stopwords.words('english')
all_words = FreqDist(w.lower() for w in mr.words() if w.lower() not in stop and w.lower() not in string.punctuation)
print(all_words)
[out]:
Since the code above prints the FreqDist correctly, the error suggests that you do not have the files in your nltk_data/ directory.
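For what it's worth, FreqDist behaves like a collections.Counter over tokens, so the counting-and-filtering step can be checked even without NLTK installed; a minimal sketch with Counter as a stand-in:

```python
import string
from collections import Counter

stop = ['the', 'a']  # stand-in for stopwords.words('english')
words = ['The', 'movie', ',', 'the', 'Movie', 'rocks', '!']

# Count lowercased tokens, skipping stopwords and punctuation,
# mirroring the FreqDist expression above.
all_words = Counter(w.lower() for w in words
                    if w.lower() not in stop and w.lower() not in string.punctuation)
```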
The fact that you got fic/11.txt suggests that you are using an older version of NLTK or of the NLTK corpora. Normally the fileids in movie_reviews start with either pos/ or neg/, then a slash, then the filename, and end with .txt, e.g. pos/cv001_18431.txt.
So I think you should perhaps re-download the files with:
$ python
>>> import nltk
>>> nltk.download()
Then make sure that the movie reviews corpus is properly downloaded under the corpora tab:
[screenshot: NLTK downloader with the movie_reviews corpus selected]
Back to the code: going through all the words in the movie reviews corpus seems redundant if you have already filtered all the words in your documents, so I would rather do this to extract the full feature set:
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, _ in word_features.most_common(100)]  # top 100 words by frequency; in Python 3, .keys() is not sliceable
featuresets = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
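The dict comprehension {i:(i in tokens) for i in word_features} builds one boolean bag-of-words feature per candidate word; on a toy example (the word list and documents below are made up):

```python
word_features = ['bad', 'good', 'plot']  # pretend these are the top words
documents = [(['good', 'plot', 'twist'], 'pos'),
             (['bad', 'acting'], 'neg')]

# One (feature-dict, tag) pair per document; each feature records
# whether the candidate word occurs in that document's tokens.
featuresets = [({w: (w in tokens) for w in word_features}, tag)
               for tokens, tag in documents]
```

For long reviews, converting tokens to a set first makes each membership test O(1) instead of scanning the list.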
Next, splitting train/test by featuresets is okay, but I think it is better to split by documents, so instead of this:
featuresets = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
I would recommend this instead:
numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]
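The split arithmetic itself is easy to check: with 2000 documents (the size of movie_reviews), numtrain comes out to 1800, leaving 200 for testing. A sketch on dummy data:

```python
documents = [(['tok'], 'pos')] * 2000  # dummy documents, same count as movie_reviews

numtrain = int(len(documents) * 90 / 100)  # 90% of the documents for training

train_docs = documents[:numtrain]
test_docs = documents[numtrain:]
```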
Then feed the data into the classifier, and voilà! So here is the code without the comments and walkthrough:
import string
from itertools import chain
from nltk.corpus import movie_reviews as mr
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier as nbc
import nltk
stop = stopwords.words('english')
documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, _ in word_features.most_common(100)]
numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]
classifier = nbc.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(5)
[out]:
0.655
Most Informative Features
bad = True neg : pos = 2.0 : 1.0
script = True neg : pos = 1.5 : 1.0
world = True pos : neg = 1.5 : 1.0
nothing = True neg : pos = 1.5 : 1.0
bad = False pos : neg = 1.5 : 1.0
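One caveat not covered above: mr.fileids() comes back grouped by category (all neg, then all pos), so a plain slice split leaves the held-out test set dominated by one class. Shuffling the documents once, with a fixed seed for reproducibility, before slicing gives a fairer split; a sketch on dummy data:

```python
import random

# Dummy documents grouped by class, like the movie_reviews fileids.
documents = [(['w'], 'neg')] * 10 + [(['w'], 'pos')] * 10

random.seed(0)            # fixed seed so the split is reproducible
random.shuffle(documents) # mix the classes before slicing

numtrain = int(len(documents) * 90 / 100)
test_tags = [tag for _, tag in documents[numtrain:]]
```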