Extracting the most frequent words from a corpus with Python

Maybe this is a stupid question, but I have a problem extracting the ten most frequent words from a corpus with Python. This is what I've got so far. (By the way, I work with NLTK for reading a corpus with two subcategories, each containing 10 .txt files.)

```python
import re
import string
from nltk.corpus import stopwords

stoplist = stopwords.words('dutch')

from collections import defaultdict
from operator import itemgetter

def toptenwords(mycorpus):
    words = mycorpus.words()
    no_capitals = set([word.lower() for word in words])
    filtered = [word for word in no_capitals if word not in stoplist]
    no_punct = [s.translate(None, string.punctuation) for s in filtered]
    wordcounter = {}
    for word in no_punct:
        if word in wordcounter:
            wordcounter[word] += 1
        else:
            wordcounter[word] = 1
    sorting = sorted(wordcounter.iteritems(), key=itemgetter, reverse=True)
    return sorting
```

If I run this function on my corpus, it gives me a list of all words with '1' behind each of them. It returns a dictionary, but all my values are 1. And I know that, for example, the word 'baby' occurs five or six times in my corpus... and still it gives 'baby: 1'. So it doesn't work the way I want it to...

Can someone help me?

Solution

If you're using NLTK anyway, try the FreqDist(samples) function to first generate a frequency distribution from the given sample, then call its most_common(n) method to find the n most common words, sorted by descending frequency. Make sure to build the distribution from your corpus words, not from the stop list. Something like:

```python
from nltk.probability import FreqDist

fdist = FreqDist(word.lower() for word in mycorpus.words())
top_ten = fdist.most_common(10)
```

As an aside, the reason every count in your own function is 1 is the set(...) around the lowercased words: a set removes duplicates, so each word survives only once before you start counting.
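If you prefer to stay with the standard library, the same cleanup can be done with collections.Counter, keeping the tokens in a list (not a set) so duplicates survive into the counts. A minimal sketch, with a made-up token list and stop list standing in for mycorpus.words() and stopwords.words('dutch') (both are assumptions for illustration):

```python
import string
from collections import Counter

# Hypothetical stand-ins for mycorpus.words() and the Dutch stop word list
words = ['De', 'baby', 'lacht', ',', 'de', 'baby', 'huilt', '.', 'baby']
stoplist = ['de', 'en', 'het']

def top_n_words(words, stoplist, n=10):
    # Lowercase and strip surrounding punctuation, keeping a LIST so that
    # duplicate tokens keep their counts (a set would collapse them to 1)
    cleaned = [w.lower().strip(string.punctuation) for w in words]
    # Drop empty strings (pure-punctuation tokens) and stop words
    cleaned = [w for w in cleaned if w and w not in stoplist]
    # Counter.most_common(n) returns (word, count) pairs, highest count first
    return Counter(cleaned).most_common(n)

print(top_n_words(words, stoplist))
# → [('baby', 3), ('lacht', 1), ('huilt', 1)]
```

This mirrors the steps in the original function but counts before deduplicating, which is exactly the ordering the set(...) version gets wrong.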
