This article is adapted from: https://ailearning.apachecn.org/#/docs/nlp/
1. Segmentation for search: ChineseAnalyzer for the Whoosh search engine
pip install whoosh
Whoosh is a library of classes and functions for indexing text and then searching the index; it lets you build a search engine for your own content.
For example, if you were building blog software, you could use Whoosh to add a search feature that lets users search blog entries.
Code example:
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import os

from whoosh.index import create_in, open_dir
from whoosh.fields import Schema, TEXT, ID
from whoosh.qparser import QueryParser
from jieba.analyse.analyzer import ChineseAnalyzer

# Use jieba's ChineseAnalyzer so Whoosh can tokenize Chinese content.
analyzer = ChineseAnalyzer()

schema = Schema(title=TEXT(stored=True),
                path=ID(stored=True),
                content=TEXT(stored=True, analyzer=analyzer))

if not os.path.exists("tmp"):
    os.mkdir("tmp")

ix = create_in("tmp", schema)  # create a new index
# ix = open_dir("tmp")  # reopen an existing index read-only

writer = ix.writer()
writer.add_document(
    title="document1",
    path="/a",
    content="This is the first document we've added!"
)
writer.add_document(
    title="document2",
    path="/b",
    content="The second one 你 中文测试中文 is even more interesting! 吃水果"
)
writer.add_document(
    title="document3",
    path="/c",
    content="买水果然后来世博园。"
)
writer.add_document(
    title="document4",
    path="/c",
    content="工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作"
)
writer.add_document(
    title="document5",
    path="/c",
    content="咱俩交换一下吧。"
)
writer.commit()

searcher = ix.searcher()
parser = QueryParser("content", schema=ix.schema)

# Query the index with a mix of Chinese and English keywords.
for keyword in ("水果世博园", "你", "first", "中文", "交换机", "交换"):
    print("result of", keyword)
    q = parser.parse(keyword)
    results = searcher.search(q)
    for hit in results:
        print(hit.highlights("content"))
    print("=" * 10)

# Show how the analyzer itself tokenizes mixed Chinese/English text.
for t in analyzer("我的好朋友是李明;我爱北京天安门;IBM和Microsoft; I have a dream. this is interesting and interested me a lot"):
    print(t.text)
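Once the writer has committed, the index can be reopened later without rebuilding it. Below is a minimal sketch, assuming the "tmp" directory created above still exists; the query string is an arbitrary example:

# Reopen the existing index read-only and run a single query against it.
from whoosh.index import open_dir
from whoosh.qparser import QueryParser

ix = open_dir("tmp")
with ix.searcher() as searcher:  # the context manager closes the searcher
    query = QueryParser("content", schema=ix.schema).parse(u"水果")
    for hit in searcher.search(query, limit=5):
        print(hit["title"], hit["path"])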
2. TextRank
1. Basic idea
- Segment the text from which keywords are to be extracted.
- Build a graph from the co-occurrence relations between words inside a fixed-size window (default 5, adjustable via the span parameter).
- Run PageRank over the graph; note that it is an undirected weighted graph. A minimal implementation sketch follows the code example below.
Code example:
import jieba.analyse as analyse

sentence = '传香港将把二次上市及生物科技企业纳入港股通'

# 1. Direct use; the interface mirrors jieba.analyse.extract_tags. Note the
#    default POS filter ('ns', 'n', 'vn', 'v'): place names, nouns, verbal
#    nouns, and verbs.
result = analyse.textrank(sentence, topK=20, withWeight=True,
                          allowPOS=('ns', 'n', 'vn', 'v'))

# 2. Alternatively, build a custom instance: analyse.TextRank()

# 3. Print each keyword with its weight.
for word, weight in result:
    print(word, weight)
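To make the graph construction described above concrete, here is a minimal sketch of TextRank itself. The window size, damping factor, iteration count, and the length-based word filter (a crude stand-in for jieba's POS filter) are illustrative assumptions, not jieba's actual internals:

from collections import defaultdict
import jieba

def textrank_sketch(text, window=5, d=0.85, iters=10):
    # Segment, keeping only multi-character words (stand-in for POS filtering).
    words = [w for w in jieba.cut(text) if len(w) > 1]
    # Build an undirected weighted graph from co-occurrences within the window.
    weight = defaultdict(float)
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for other in words[i + 1:i + window]:
            if other == w:
                continue
            weight[(w, other)] += 1.0
            weight[(other, w)] += 1.0
            neighbors[w].add(other)
            neighbors[other].add(w)
    # Iterate the weighted PageRank update on the undirected graph.
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        new_score = {}
        for w in neighbors:
            rank = 0.0
            for v in neighbors[w]:
                out = sum(weight[(v, u)] for u in neighbors[v])
                rank += weight[(v, w)] / out * score[v]
            new_score[w] = (1 - d) + d * rank
        score = new_score
    return sorted(score.items(), key=lambda kv: -kv[1])

for word, rank in textrank_sketch('传香港将把二次上市及生物科技企业纳入港股通')[:5]:
    print(word, rank)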
3. Tokenize: returning the start and end positions of words
Note that the input only accepts unicode strings.
# encoding=utf-8
import jieba

# Default mode: the tokens tile the input without overlapping.
result = jieba.tokenize(u'永和服装饰品有限公司')
for tk in result:
    print("word {}\t\t start: {} \t\t end:{}".format(tk[0], tk[1], tk[2]))
"""
word 永和 start: 0 end:2
word 服装 start: 2 end:4
word 饰品 start: 4 end:6
word 有限公司 start: 6 end:10
"""
# Search mode: additionally yields overlapping sub-words such as 有限/公司.
result = jieba.tokenize(u'永和服装饰品有限公司', mode='search')
for tk in result:
    print("word {}\t\t start: {} \t\t end:{}".format(tk[0], tk[1], tk[2]))
"""
word 永和 start: 0 end:2
word 服装 start: 2 end:4
word 饰品 start: 4 end:6
word 有限 start: 6 end:8
word 公司 start: 8 end:10
word 有限公司 start: 6 end:10
"""