1. Sentence segmentation: use NLTK's Punkt sentence tokenizer to split a paragraph into sentences
Load it with: nltk.data.load('tokenizers/punkt/english.pickle')
import nltk
import numpy as np
# Take a paragraph as input and split it into sentences (Punkt sentence tokenizer)
paragraph = "Life is not easy for any of us. We must work,and above all \
we must believe in ourselves. We must believe that each one of us is \
able to do something well, and that, when we discover what this something is,\
we must work hard at it until we succeed."
# Convert all uppercase letters to lowercase
paragraph = paragraph.lower()
# Load the Punkt sentence tokenizer
sen_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
# Split the paragraph into sentences
sentences = sen_tokenizer.tokenize(paragraph)
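A quick check of the result (a minimal sketch; with this paragraph, Punkt should split at the three sentence-final periods):
# Print the sentences found by the Punkt tokenizer
print(len(sentences))  # expected: 3
for s in sentences:
    print(s)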
2. Word tokenization: use nltk.word_tokenize(text)
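Step 3 below consumes a words list, so here is a minimal sketch connecting the two steps (it assumes the sentences variable from step 1; note that word_tokenize itself also relies on the Punkt models):
# Tokenize each sentence and flatten the results into a single word list
words = []
for sentence in sentences:
    words.extend(nltk.word_tokenize(sentence))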
3. Word frequency counting with FreqDist
# nltk.FreqDist returns a dict-like object: keys are the distinct words, values are their counts
freq_dist = nltk.FreqDist(words)
# Collect [word, count] pairs; iterating items() once avoids rebuilding the key and value lists on every loop pass
freq_list = [[word, count] for word, count in freq_dist.items()]
# Note: np.array casts the counts to strings here, since the array mixes str and int
freqArr = np.array(freq_list)
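If only the most frequent words are needed, FreqDist (a subclass of collections.Counter) already provides most_common, so the array conversion can be skipped; a short sketch:
# Print the five most frequent tokens with their counts
for word, count in freq_dist.most_common(5):
    print(word, count)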