NLP Fundamentals Tool Series: NLTK

Introduction to NLTK

  • NLTK has been called "a wonderful tool for teaching, and working in, computational linguistics using Python". It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text-processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning. Let's learn it hands-on together~~
  • GitHub: https://github.com/nltk/nltk
  • Official documentation: http://www.nltk.org/

NLTK

Installation: pip install nltk

1. Tokenization
import nltk
sentence = 'I love natural language processing!'
tokens = nltk.word_tokenize(sentence)
print(tokens)
['I', 'love', 'natural', 'language', 'processing', '!']
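To see what word_tokenize is doing, here is a minimal regex-based approximation (my own illustrative sketch, not NLTK's actual algorithm, which handles many more cases such as contractions):

```python
import re

def simple_tokenize(text):
    # Match runs of word characters, or single non-space punctuation marks,
    # roughly approximating word/punctuation splitting.
    return re.findall(r"\w+|[^\w\s]", text)

print(simple_tokenize('I love natural language processing!'))
# ['I', 'love', 'natural', 'language', 'processing', '!']
```

For real text, prefer nltk.word_tokenize, which also splits contractions like "don't" into "do" and "n't".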
2. POS Tagging
tagged = nltk.pos_tag(tokens)
print(tagged)
[('I', 'PRP'), ('love', 'VBP'), ('natural', 'JJ'), ('language', 'NN'), ('processing', 'NN'), ('!', '.')]
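The tags come from the Penn Treebank tagset. A small hand-written gloss table (illustrative; only the tags seen above) makes the output easier to read:

```python
# Hand-written glosses for the Penn Treebank tags appearing above.
TAG_GLOSS = {
    'PRP': 'personal pronoun',
    'VBP': 'verb, non-3rd person singular present',
    'JJ':  'adjective',
    'NN':  'noun, singular or mass',
    '.':   'sentence-final punctuation',
}

tagged = [('I', 'PRP'), ('love', 'VBP'), ('natural', 'JJ'),
          ('language', 'NN'), ('processing', 'NN'), ('!', '.')]
for word, tag in tagged:
    print(f'{word}\t{tag}\t{TAG_GLOSS.get(tag, "?")}')
```

The full tagset can be browsed with nltk.help.upenn_tagset() after downloading the 'tagsets' package.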
3. Named Entity Recognition
  • Download the model: nltk.download('maxent_ne_chunker')
nltk.download('maxent_ne_chunker')
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping chunkers\maxent_ne_chunker.zip.
True
nltk.download('words')
[nltk_data] Downloading package words to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\words.zip.
True
entities = nltk.chunk.ne_chunk(tagged)
print(entities)
(S I/PRP love/VBP natural/JJ language/NN processing/NN !/.)
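The tree above is flat because the example sentence contains no named entities. ne_chunk returns an nltk.Tree whose entity subtrees carry labels like PERSON or ORGANIZATION; here is a sketch of extracting them, using a hand-built tree (illustrative — not produced by ne_chunk in this run) so it needs no model downloads:

```python
from nltk import Tree

# A hand-built chunk tree mimicking ne_chunk output for a sentence
# that does contain entities.
tree = Tree('S', [
    Tree('PERSON', [('Mark', 'NNP')]),
    ('works', 'VBZ'), ('at', 'IN'),
    Tree('ORGANIZATION', [('Google', 'NNP')]),
])

# Collect (entity text, label) pairs from every labeled subtree.
entities = [(' '.join(word for word, tag in subtree.leaves()), subtree.label())
            for subtree in tree.subtrees() if subtree.label() != 'S']
print(entities)  # [('Mark', 'PERSON'), ('Google', 'ORGANIZATION')]
```

The same traversal works on the tree returned by nltk.chunk.ne_chunk on real tagged input.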
4. Downloading Corpora
  • For example: download the brown corpus
  • More corpora: http://www.nltk.org/howto/corpus.html
nltk.download('brown')
[nltk_data] Downloading package brown to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Package brown is already up-to-date!
True
from nltk.corpus import brown
brown.words()
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', ...]
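Once loaded, the word list can be analyzed with standard Python tools. A sketch using collections.Counter on a small hand-written sample (standing in for brown.words() so it runs without the download):

```python
from collections import Counter

# A small hand-written sample standing in for brown.words().
words = ['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'the', 'jury']

# Lowercase before counting so 'The'/'the' are merged.
freq = Counter(w.lower() for w in words)
print(freq.most_common(2))  # [('the', 2), ('jury', 2)]
```

The same pattern works directly on brown.words(), though counting the full corpus (over a million tokens) takes a moment.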
5. Metrics
  • precision
  • recall
  • f_measure (F-measure)
from nltk.metrics import precision, recall, f_measure
reference = 'DET NN VB DET JJ NN NN IN DET NN'.split()
test = 'DET VB VB DET NN NN NN IN DET NN'.split()
reference_set = set(reference)
test_set = set(test)
print("precision:" + str(precision(reference_set, test_set)))
print("recall:" + str(recall(reference_set, test_set)))
print("f_measure:" + str(f_measure(reference_set, test_set)))
precision:1.0
recall:0.8
f_measure:0.8888888888888888
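These set-based metrics are straightforward to reproduce by hand, which makes the printed numbers easy to verify (this is a plain-Python re-derivation of what nltk.metrics computes on sets):

```python
reference_set = set('DET NN VB DET JJ NN NN IN DET NN'.split())  # 5 distinct tags
test_set = set('DET VB VB DET NN NN NN IN DET NN'.split())       # 4 distinct tags

tp = len(reference_set & test_set)  # tags present in both sets
p = tp / len(test_set)              # precision: correct / predicted
r = tp / len(reference_set)         # recall: correct / expected
f = 2 * p * r / (p + r)             # F-measure: harmonic mean of p and r
print(p, r, f)  # precision=1.0, recall=0.8, f≈0.889
```

Note that converting the tag sequences to sets discards positions and duplicates; for per-token tagging accuracy you would compare the lists element-wise instead.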
6. Stemming (Stemmers)
  • Porter stemmer
from nltk.stem.porter import *
# create a stemmer
stemmer = PorterStemmer()
plurals = ['caresses', 'flies', 'dies', 'mules', 'denied']
singles = [stemmer.stem(plural) for plural in plurals]
print(' '.join(singles))
caress fli die mule deni
  • Snowball stemmer
from nltk.stem.snowball import SnowballStemmer
print(" ".join(SnowballStemmer.languages))
arabic danish dutch english finnish french german hungarian italian norwegian porter portuguese romanian russian spanish swedish
# specify the language
stemmer = SnowballStemmer("english")
print(stemmer.stem("running"))
run
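The idea behind both stemmers is rule-based suffix stripping. A toy sketch (my own illustration — real Porter/Snowball stemming applies ordered, condition-guarded rewrite rules over several passes) shows the principle:

```python
def naive_stem(word):
    # Strip the first matching suffix, but only if a stem of at least
    # 3 characters remains. A toy version of rule-based stemming.
    for suffix in ('ing', 'ies', 'es', 's', 'ed'):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

print([naive_stem(w) for w in ['running', 'flies', 'caresses', 'mule']])
# ['runn', 'fli', 'caress', 'mule']
```

Even this crude version reproduces 'fli' and 'caress' from the Porter output above, but it leaves 'runn' where Porter correctly gives 'run' — the extra consonant-doubling rules are what the real stemmers add.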
7. SentiWordNet Interface
  • Download the sentiwordnet lexicon
import nltk
nltk.download('sentiwordnet')
[nltk_data] Downloading package sentiwordnet to
[nltk_data]     C:\Users\yuquanle\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\sentiwordnet.zip.
True
  • SentiSynsets: sentiment values for synsets (sets of synonyms)

from nltk.corpus import sentiwordnet as swn

breakdown = swn.senti_synset('breakdown.n.03')
print(breakdown)
print(breakdown.pos_score())
print(breakdown.neg_score())
print(breakdown.obj_score())
<breakdown.n.03: PosScore=0.0 NegScore=0.25>
0.0
0.25
0.75
  • Lookup
print(list(swn.senti_synsets('slow')))
[SentiSynset('decelerate.v.01'), SentiSynset('slow.v.02'), SentiSynset('slow.v.03'), SentiSynset('slow.a.01'), SentiSynset('slow.a.02'), SentiSynset('dense.s.04'), SentiSynset('slow.a.04'), SentiSynset('boring.s.01'), SentiSynset('dull.s.08'), SentiSynset('slowly.r.01'), SentiSynset('behind.r.03')]
happy = swn.senti_synsets('happy', 'a')
print(list(happy))
[SentiSynset('happy.a.01'), SentiSynset('felicitous.s.02'), SentiSynset('glad.s.02'), SentiSynset('happy.s.04')]
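Note that for any SentiSynset the positive, negative, and objectivity scores sum to 1, so a simple net-polarity measure can be derived from the two printed scores alone (a plain-Python sketch using the values shown above for breakdown.n.03):

```python
def polarity(pos, neg):
    # Net sentiment in [-1, 1]: positive minus negative score.
    return pos - neg

# Scores printed above for breakdown.n.03; objectivity is the remainder.
pos, neg = 0.0, 0.25
print(polarity(pos, neg), 1.0 - pos - neg)  # -0.25 0.75
```

A common next step is averaging polarity over all senses of a word (e.g. the 'happy' senses listed above) to get a word-level sentiment estimate.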

More Resources
  • More usage examples: http://www.nltk.org/howto/index.html
  • GitHub: https://github.com/nltk/nltk

Welcome to follow 【AI小白入门】, where we share Python, machine learning, deep learning, natural language processing, artificial intelligence and other techniques, plus frontier technology and job-hunting experience — growing together with you and your dreams.
