Chunking (组块分析)
Chunking is the process of extracting phrases from unstructured text. POS tagging, by comparison, returns the lowest level of the parse tree: individual words. But sometimes what you need is a noun phrase made up of several words rather than single words; in that case you can use a chunker to get the information you need instead of spending time generating a full parse tree for the sentence. A Chinese example: rather than single characters, you want whole words, e.g. treating a phrase like “南非” (South Africa) as one unit instead of interpreting “南” and “非” separately.
Chunking can be run as a follow-up step to POS tagging: it takes the POS-tagged words as input and produces the analyzed chunks as output. Like POS tags, chunks have a standard set of labels, such as noun phrase (NP) and verb phrase (VP). Chunking matters when you want to extract information such as locations or person names from text; in NLP this task is called named entity recognition (NER). For example, “李雷的杯子” (Li Lei's cup) is a phrase produced by chunking, while extracting the person name “李雷” (Li Lei) is named entity recognition. Chunking is therefore a foundation for NER.
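To make the idea concrete before looking at NLTK, here is a minimal sketch of what a chunker does: group runs of determiner/adjective/noun tags in a POS-tagged sentence into noun-phrase chunks. `chunk_np` is a hypothetical helper written for illustration, not NLTK's API.

```python
# Minimal sketch of chunking (hypothetical helper, not NLTK's API):
# greedily group runs of DT/JJ/NN* tags into noun-phrase chunks.
def chunk_np(tagged):
    chunks, current = [], []
    for word, tag in tagged:
        if tag in ("DT", "JJ", "NN", "NNS", "NNP", "NNPS"):
            current.append(word)       # extend the current chunk
        elif current:
            chunks.append(" ".join(current))  # close the chunk
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

tagged = [("the", "DT"), ("presidential", "JJ"), ("candidate", "NN"),
          ("felt", "VBD"), ("betrayed", "VBN"), ("by", "IN"),
          ("Kennedy", "NNP")]
print(chunk_np(tagged))  # ['the presidential candidate', 'Kennedy']
```

NLTK's `RegexpParser` below generalizes this: instead of a hard-coded tag set, you write regular expressions over tag sequences.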
————————————————
Copyright notice: this article is an original post by the CSDN blogger 「追风箭0211」, released under the CC 4.0 BY-SA license; reposts must include a link to the original source and this notice.
Original link: https://blog.csdn.net/Sirow/article/details/89306934
The NLTK tool
# Tokenize and normalize the sentence; in some cases, e.g. aren't, the token
# is split into are and n't, and i'm is split into i and 'm.
# tokens_1 = nltk.word_tokenize('what your')
# print(tokens_1)
import nltk
sent="Sirhan , in his first television interview , called Sen. Robert F. Kennedy his hero , but said he killed the presidential candidate more than 20 years ago because he felt betrayed by Kennedy 's support for Israel"
text = nltk.word_tokenize(sent)
sentence=nltk.pos_tag(text)
# grammar = "NP:{<JJ|NN|NNS.*><POS|IN.*><NN|NNS.*>}"
grammar = r"""
NP: {<JJ|NN><POS|IN>?<NN>+}   # e.g. "presidential candidate", "Kennedy 's support"
PP: {<NN|NNS|NNP|NNPS>}       # single nouns (labelled PP here by the author)
"""
cp = nltk.RegexpParser(grammar)  # build the chunker from the rules
result = cp.parse(sentence)      # chunk the POS-tagged sentence
print(result)
substring = []
finalstring = ''
# collect every NP/PP subtree from the chunk tree
for subtree in result.subtrees():
    if subtree.label() == 'NP' or subtree.label() == 'PP':
        substring.append(subtree)
# rebuild each chunk as a string, separating chunks with ', '
for each in substring:
    for i in range(len(each)):
        finalstring += each[i][0] + ' '
    finalstring += ', '
output = finalstring.split(', ')
# every chunk string ends with a trailing space, so splitting on ' ' yields one
# extra empty element; >= 3 therefore keeps chunks of two or more words
out = [i for i in output if len(i.split(' ')) >= 3]
print(output)
print(out)
# wrap each kept chunk in parentheses inside the original sentence
sent_split = sent.split(' ')
for i in out:
    # the chunk string ends with a trailing space, so [-2] is its last word
    ind = sent_split.index(i.split(' ')[-2])
    sent_split.insert(ind + 1, '( ' + i + ')')
sent_res = ' '.join(sent_split)
print(sent_res)
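The index-and-insert bookkeeping above can also be sketched with plain string replacement, assuming each extracted chunk appears verbatim (and only once) in the sentence. `annotate` is a hypothetical helper written for illustration:

```python
def annotate(sentence, phrases):
    # Wrap each extracted phrase in parentheses inside the sentence
    # (a sketch: assumes each phrase occurs verbatim in the sentence).
    for p in phrases:
        sentence = sentence.replace(p, "( " + p + " )")
    return sentence

s = "he felt betrayed by Kennedy 's support for Israel"
print(annotate(s, ["Kennedy 's support"]))
# he felt betrayed by ( Kennedy 's support ) for Israel
```

The index-based version in the main code is more robust when the chunk string carries trailing whitespace, which is why the author works with token positions instead.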
spaCy
import spacy
# pytextrank must be imported even though it is not used directly below:
# importing it registers the "textrank" pipeline component with spaCy
import pytextrank
# example text
text = "Sirhan Sirhan , in his first television interview , called Sen. Robert F. Kennedy his hero , but said he killed the presidential candidate more than 20 years ago because he felt betrayed by Kennedy 's support for Israel ."
# load the English model and its dependencies
nlp = spacy.load("en_core_web_sm")
# add the "textrank" component from PyTextRank to the pipeline
nlp.add_pipe("textrank")
doc = nlp(text)
# print each extracted phrase with its weight and occurrence count
for phrase in doc._.phrases:
    # the phrase text
    print(phrase.text)
    # TextRank weight and occurrence count
    print(phrase.rank, phrase.count)
    # the list of chunk spans for this phrase:
    # print(phrase.chunks)
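The `rank` weight that PyTextRank assigns comes from running a PageRank-style iteration over a word co-occurrence graph. Below is a minimal sketch of that idea on a hand-built toy graph; it illustrates the algorithm only and is not PyTextRank's actual implementation.

```python
# PageRank-style iteration over a toy word co-occurrence graph: a minimal
# sketch of the idea behind pytextrank's rank score (not its actual code).
def textrank(graph, d=0.85, iters=50):
    scores = {n: 1.0 for n in graph}
    for _ in range(iters):
        scores = {
            n: (1 - d) + d * sum(scores[m] / len(graph[m])
                                 for m in graph if n in graph[m])
            for n in graph
        }
    return scores

# Undirected toy graph as adjacency sets (hypothetical co-occurrences).
graph = {
    "kennedy": {"support", "hero", "candidate"},
    "support": {"kennedy", "israel"},
    "israel": {"support"},
    "hero": {"kennedy"},
    "candidate": {"kennedy"},
}
scores = textrank(graph)
print(max(scores, key=scores.get))  # kennedy: the best-connected word
```

Words that co-occur with many other well-connected words accumulate weight, which is why content-bearing phrases surface at the top of `doc._.phrases`.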