Counting Word and Bigram Frequencies in English with nltk

I. Introduction to NLTK

  1. What is NLTK
    NLTK, short for Natural Language Toolkit, is the most popular toolkit for natural language processing in Python. It is free, open source, written in Python, and bundles a large number of libraries and datasets for all kinds of NLP tasks.
  2. History and current status
    NLTK was originally developed in the computer science department of the University of Pennsylvania by Steven Bird, Ewan Klein, and Edward Loper. It has since become one of the most widely used NLP toolkits. Development began in 2001, and the project has gone through many releases since then, accumulating a large body of material from linguistics and computational linguistics along with supporting corpora, texts, and language models.

II. Setting Up the NLTK Environment

1. Installing the library

nltk can be installed quickly from the terminal with pip:

pip install nltk
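
To confirm the installation, print the installed version from the terminal:

python -c "import nltk; print(nltk.__version__)"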

2. Importing

Once installed, import the nltk module before using it:

import nltk

The first time NLTK runs, it needs to download its resource packages; use the commands below on the first run. Note: once downloaded, the resources are stored locally, so other scripts that need the same packages do not have to download them again:

# Example: real code may need different resource packages
# Make sure the required NLTK resources are downloaded
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')

If a resource has already been downloaded, NLTK simply reports that it is up to date, and the download lines can optionally be commented out in later runs.
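
Optionally, you can probe for a resource and download it only when it is missing. This is a minimal sketch; ensure_nltk_resource is a helper name chosen here, and the first argument is the resource's path within the NLTK data directory:

import nltk

def ensure_nltk_resource(path, name):
    # nltk.data.find() raises LookupError when the resource is not installed
    try:
        nltk.data.find(path)
    except LookupError:
        nltk.download(name)

ensure_nltk_resource('tokenizers/punkt', 'punkt')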

III. Counting Word and Bigram Frequencies in English with NLTK

1. File formats and directory layout

The input to the script is a plain-text document, and the output is an Excel (.xlsx) file. Matching the paths used in the code below, the project directory is laid out as:

datas/
    text          (input text file)
stopwords         (stopword list, one entry per line)
output/           (generated Excel files)

2. Loading stopwords

The code can use a custom stopword list (a common English stopword list is attached at the end of this article):

stop_words_path = "stopwords"

def read_stopwords_file(file_path):
    stop_words = []
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            word = line.strip()  # strip the newline and surrounding whitespace
            if word:  # skip blank lines
                stop_words.append(word)
    return stop_words

stop_words = read_stopwords_file(stop_words_path)
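
Since every token is tested for membership against this list, converting it to a set turns each lookup from a linear scan into a constant-time check; an optional one-line tweak:

stop_words = set(read_stopwords_file(stop_words_path))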

3. Complete code

The script below counts the frequencies of individual English words and of two-word (bigram) phrases:

import os
import nltk
from nltk.tokenize import word_tokenize
from nltk.util import bigrams
import openpyxl

file_name = "text"
file_path = "datas/" + file_name
stop_words_path = "stopwords"

def read_stopwords_file(file_path):
    stop_words = []
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            word = line.strip()  # strip the newline and surrounding whitespace
            if word:  # skip blank lines
                stop_words.append(word)
    return stop_words

stop_words = read_stopwords_file(stop_words_path)

# Read the contents of the input text file
def read_text_file(file_path):
    with open(file_path, 'r', encoding='utf-8') as file:
        return file.read()

# Preprocess the text: tokenize, lowercase, drop punctuation, remove stopwords
def preprocess_text(text):
    tokens = word_tokenize(text)
    tokens = [token.lower() for token in tokens if token.isalnum()]
    tokens = [token for token in tokens if token not in stop_words]
    return tokens

# Count word frequencies
def count_word_frequency(tokens):
    word_freq = {}
    for token in tokens:
        if token in word_freq:
            word_freq[token] += 1
        else:
            word_freq[token] = 1
    return word_freq

# Count bigram frequencies
def count_bigram_frequency(tokens):
    bigram_freq = {}
    for bg in bigrams(tokens):
        bg_str = " ".join(bg)
        if bg_str in bigram_freq:
            bigram_freq[bg_str] += 1
        else:
            bigram_freq[bg_str] = 1
    return bigram_freq


text = read_text_file(file_path)
processed_tokens = preprocess_text(text)

word_frequency = count_word_frequency(processed_tokens)
bigram_frequency = count_bigram_frequency(processed_tokens)

# Create a new Excel workbook and use its default worksheet
wb = openpyxl.Workbook()
sheet = wb.active

# Header row for the frequency table
sheet['A1'] = "word"
sheet['B1'] = "count"

# Write the word frequencies, sorted by count in descending order
row_index = 2
for word, freq in sorted(word_frequency.items(), key=lambda x: x[1], reverse=True):
    sheet.cell(row=row_index, column=1, value=word)
    sheet.cell(row=row_index, column=2, value=freq)
    row_index += 1

# Write the bigram frequencies directly below the word rows
for bigram, freq in sorted(bigram_frequency.items(), key=lambda x: x[1], reverse=True):
    sheet.cell(row=row_index, column=1, value=bigram)
    sheet.cell(row=row_index, column=2, value=freq)
    row_index += 1

# Make sure the output directory exists, then save the Excel file
os.makedirs("output", exist_ok=True)
wb.save("output/" + file_name + "_count.xlsx")
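
As an aside, the two hand-written counting helpers do the same job as collections.Counter from the standard library; a minimal equivalent sketch of the counting step:

from collections import Counter
from nltk.util import bigrams

# Equivalent to count_word_frequency / count_bigram_frequency above
word_frequency = Counter(processed_tokens)
bigram_frequency = Counter(" ".join(bg) for bg in bigrams(processed_tokens))

# most_common() returns (item, count) pairs sorted by descending count
print(word_frequency.most_common(10))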

IV. Appendix

Copy the list below into the stopwords file:

'd
'll
'm
're
's
't
've
ZT
ZZ
a
a's
able
about
above
abst
accordance
according
accordingly
across
act
actually
added
adj
adopted
affected
affecting
affects
after
afterwards
again
against
ah
ain't
all
allow
allows
almost
alone
along
already
also
although
always
am
among
amongst
an
and
announce
another
any
anybody
anyhow
anymore
anyone
anything
anyway
anyways
anywhere
apart
apparently
appear
appreciate
appropriate
approximately
are
area
areas
aren
aren't
arent
arise
around
as
aside
ask
asked
asking
asks
associated
at
auth
available
away
awfully
b
back
backed
backing
backs
be
became
because
become
becomes
becoming
been
before
beforehand
began
begin
beginning
beginnings
begins
behind
being
beings
believe
below
beside
besides
best
better
between
beyond
big
biol
both
brief
briefly
but
by
c
c'mon
c's
ca
came
can
can't
cannot
cant
case
cases
cause
causes
certain
certainly
changes
clear
clearly
co
com
come
comes
concerning
consequently
consider
considering
contain
containing
contains
corresponding
could
couldn't
couldnt
course
currently
d
date
definitely
describe
described
despite
did
didn't
differ
different
differently
discuss
do
does
doesn't
doing
don't
done
down
downed
downing
downs
downwards
due
during
e
each
early
ed
edu
effect
eg
eight
eighty
either
else
elsewhere
end
ended
ending
ends
enough
entirely
especially
et
et-al
etc
even
evenly
ever
every
everybody
everyone
everything
everywhere
ex
exactly
example
except
f
face
faces
fact
facts
far
felt
few
ff
fifth
find
finds
first
five
fix
followed
following
follows
for
former
formerly
forth
found
four
from
full
fully
further
furthered
furthering
furthermore
furthers
g
gave
general
generally
get
gets
getting
give
given
gives
giving
go
goes
going
gone
good
goods
got
gotten
great
greater
greatest
greetings
group
grouped
grouping
groups
h
had
hadn't
happens
hardly
has
hasn't
have
haven't
having
he
he's
hed
hello
help
hence
her
here
here's
hereafter
hereby
herein
heres
hereupon
hers
herself
hes
hi
hid
high
higher
highest
him
himself
his
hither
home
hopefully
how
howbeit
however
hundred
i
i'd
i'll
i'm
i've
id
ie
if
ignored
im
immediate
immediately
importance
important
in
inasmuch
inc
include
indeed
index
indicate
indicated
indicates
information
inner
insofar
instead
interest
interested
interesting
interests
into
invention
inward
is
isn't
it
it'd
it'll
it's
itd
its
itself
j
just
k
keep
keeps
kept
keys
kg
kind
km
knew
know
known
knows
l
large
largely
last
lately
later
latest
latter
latterly
least
less
lest
let
let's
lets
like
liked
likely
line
little
long
longer
longest
look
looking
looks
ltd
m
made
mainly
make
makes
making
man
many
may
maybe
me
mean
means
meantime
meanwhile
member
members
men
merely
mg
might
million
miss
ml
more
moreover
most
mostly
mr
mrs
much
mug
must
my
myself
n
n't
na
name
namely
nay
nd
near
nearly
necessarily
necessary
need
needed
needing
needs
neither
never
nevertheless
new
newer
newest
next
nine
ninety
no
nobody
non
none
nonetheless
noone
nor
normally
nos
not
noted
nothing
novel
now
nowhere
number
numbers
o
obtain
obtained
obviously
of
off
often
oh
ok
okay
old
older
oldest
omitted
on
once
one
ones
only
onto
open
opened
opening
opens
or
ord
order
ordered
ordering
orders
other
others
otherwise
ought
our
ours
ourselves
out
outside
over
overall
owing
own
p
page
pages
part
parted
particular
particularly
parting
parts
past
per
perhaps
place
placed
places
please
plus
point
pointed
pointing
points
poorly
possible
possibly
potentially
pp
predominantly
present
presented
presenting
presents
presumably
previously
primarily
probably
problem
problems
promptly
proud
provides
put
puts
q
que
quickly
quite
qv
r
ran
rather
rd
re
readily
really
reasonably
recent
recently
ref
refs
regarding
regardless
regards
related
relatively
research
respectively
resulted
resulting
results
right
room
rooms
run
s
said
same
saw
say
saying
says
sec
second
secondly
seconds
section
see
seeing
seem
seemed
seeming
seems
seen
sees
self
selves
sensible
sent
serious
seriously
seven
several
shall
she
she'll
shed
shes
should
shouldn't
show
showed
showing
shown
showns
shows
side
sides
significant
significantly
similar
similarly
since
six
slightly
small
smaller
smallest
so
some
somebody
somehow
someone
somethan
something
sometime
sometimes
somewhat
somewhere
soon
sorry
specifically
specified
specify
specifying
state
states
still
stop
strongly
sub
substantially
successfully
such
sufficiently
suggest
sup
sure
t
t's
take
taken
taking
tell
tends
th
than
thank
thanks
thanx
that
that'll
that's
that've
thats
the
their
theirs
them
themselves
then
thence
there
there'll
there's
there've
thereafter
thereby
thered
therefore
therein
thereof
therere
theres
thereto
thereupon
these
they
they'd
they'll
they're
they've
theyd
theyre
thing
things
think
thinks
third
this
thorough
thoroughly
those
thou
though
thoughh
thought
thoughts
thousand
three
throug
through
throughout
thru
thus
til
tip
to
today
together
too
took
toward
towards
tried
tries
truly
try
trying
ts
turn
turned
turning
turns
twice
two
u
un
under
unfortunately
unless
unlike
unlikely
until
unto
up
upon
ups
us
use
used
useful
usefully
usefulness
uses
using
usually
uucp
v
value
various
very
via
viz
vol
vols
vs
w
want
wanted
wanting
wants
was
wasn't
way
ways
we
we'd
we'll
we're
we've
wed
welcome
well
wells
went
were
weren't
what
what'll
what's
whatever
whats
when
whence
whenever
where
where's
whereafter
whereas
whereby
wherein
wheres
whereupon
wherever
whether
which
while
whim
whither
who
who'll
who's
whod
whoever
whole
whom
whomever
whos
whose
why
widely
will
willing
wish
with
within
without
won't
wonder
words
work
worked
working
works
world
would
wouldn't
www
x
y
year
years
yes
yet
you
you'd
you'll
you're
you've
youd
young
younger
youngest
your
youre
yours
yourself
yourselves
z
zero
zt
zz
