Python word segmentation

# encoding=utf-8
import jieba

# Sample sentence: "明天不上班啊" ("no work tomorrow")
# Full mode: list every word the dictionary can find in the sentence
seg_list = jieba.cut("明天不上班啊", cut_all=True)
print("Full Mode:", "/ ".join(seg_list))

# Default (accurate) mode: the most likely segmentation
seg_list = jieba.cut("明天不上班啊", cut_all=False)
print("Default Mode:", "/ ".join(seg_list))

# cut_all defaults to False, so this is default mode as well
seg_list = jieba.cut("明天不上班啊")
print(", ".join(seg_list))

Output:

F:\python-study\fenci>python test.py
Building prefix dict from C:\Python33\lib\site-packages\jieba\dict.txt ...
Loading model from cache c:\users\zhaoji~1\appdata\local\temp\jieba.cache
Loading model cost 0.840 seconds.
Prefix dict has been built succesfully.
Full Mode: 明天/ 不/ 上班/ 啊
Default Mode: 明天/ 不/ 上班/ 啊
明天, 不, 上班, 啊
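Besides the full and default modes, jieba also offers a search-engine mode, jieba.cut_for_search, which re-splits long words into shorter dictionary words for index building. A minimal sketch on the same sentence (the extra splitting only becomes visible on longer compound words):

# encoding=utf-8
import jieba

# Search-engine mode: default segmentation, plus a second pass that
# re-splits long words into shorter ones for search indexing
seg_list = jieba.cut_for_search("明天不上班啊")
print("Search Mode:", ", ".join(seg_list))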

Python word-segmentation tool: jieba

1. Error when running:

F:\python-study\fenci>python test.py
File "test.py", line 3
SyntaxError: Non-UTF-8 code starting with '\xce' in file test.py on line 3, but
no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

Looking this up, it turned out to be an editor encoding problem: opening the script in Notepad showed ANSI in the status bar. Converting the file to UTF-8 there fixes the error.
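Note that the # encoding declaration (PEP 263) only takes effect when it matches how the file is actually saved, and it must sit on the first or second line of the file. A minimal sketch, once the file itself is saved as UTF-8:

# -*- coding: utf-8 -*-
import jieba

# With the declaration and the file encoding in agreement,
# Chinese string literals parse without a SyntaxError
print("/ ".join(jieba.cut("明天不上班啊")))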

2. In Python 3, print needs parentheses:

print()
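For example, Python 2's print statement becomes a function call in Python 3:

# Python 2 statement syntax -- a SyntaxError under Python 3:
#     print "Hello"
# Python 3 function-call syntax:
print("Hello")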

Test:

# coding=utf-8
import jieba
import jieba.posseg as pseg

# Read the input text (assuming in.txt is saved as UTF-8)
with open("in.txt", "r", encoding="utf-8") as f:
    string = f.read()

# Segment with part-of-speech tagging
words = pseg.cut(string)
result = ""
for w in words:
    # Append each token as word/POS-flag, space-separated for readability
    result += str(w.word) + "/" + str(w.flag) + " "

# Write the tagged result
with open("out.txt", "w", encoding="utf-8") as f:
    f.write(result)
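Each item yielded by pseg.cut is a pair exposing .word and .flag (the part-of-speech tag), so the pairs can also be inspected directly; a minimal sketch:

# encoding=utf-8
import jieba.posseg as pseg

# Print one word and its part-of-speech flag per line
for w in pseg.cut("明天不上班啊"):
    print(w.word, w.flag)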

Reposted from: https://www.cnblogs.com/huanhuanang/p/4750343.html
