1. Environment
Windows 7, 64-bit
Python 3.5
2. Installing jieba and gensim
Open Windows PowerShell, navigate to Anaconda's Scripts folder, and enter:
pip install jieba
pip install gensim
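To confirm both packages installed into the right environment, a quick import check (the test sentence is just jieba's own README example; segmentation output may vary slightly across versions):

import jieba
import gensim

# Classic jieba demo sentence; output varies slightly by version
print('/'.join(jieba.cut('我来到北京清华大学')))
print('gensim', gensim.__version__)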
3. Data file format
Raw data format:
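The steps below assume the raw data is already in a pandas DataFrame named data_23 with at least 'id' and 'summary' columns; a minimal loading sketch, assuming a CSV source (the file name data_23.csv is hypothetical):

import pandas as pd

# Hypothetical file name; adjust to the actual raw data file
data_23 = pd.read_csv('data_23.csv', encoding='utf8')
print(data_23.head())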
Next, use jieba to segment the text and remove stopwords:
# Tokenization function: segment a sentence with jieba and drop stopwords
import jieba

def seg_sentence(sentence):
    sentence_seged = jieba.cut(sentence.strip())
    # Load the stopword list, one word per line
    stopwords = [line.strip() for line in open('停用词表.txt', 'r', encoding='gbk')]
    outstr = ''
    for word in sentence_seged:
        if word not in stopwords:
            if word != '\t':
                outstr += word
                outstr += " "
    return outstr
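For example, calling the function on a single sentence returns the surviving words joined by spaces (the actual output depends on the stopword list and the jieba version):

# Illustrative call; the output shown is approximate
print(seg_sentence('我来到北京清华大学参加会议'))
# e.g. '北京 清华大学 参加 会议 '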
# Tokenize each abstract in the 'summary' column
data_23_2 = pd.DataFrame(columns=['id', 'tag', 'words'])
for line in data_23.index:
    # Replace literal '\n' sequences and tab characters with spaces
    seg = data_23['summary'][line].replace('\\n', ' ').replace('\t', ' ')
    id = data_23['id'][line]
    tag = 23
    line_seg = seg_sentence(seg)
    result = line_seg.split(' ')
    # Store the deduplicated word set for this document
    data_23_2.loc[line] = [id, tag, set(result)]
The data now looks like this:
Save the words column to a txt file:
# Write each document's word set to data3.txt, one line per row
fl = open('data3.txt', 'w', encoding='utf8')
for i in data3.index:
    a = list()
    # The stored set serializes as "{'w1', 'w2', ...}", so strip the braces
    for j in data3['words'][i].strip('{}').split(','):
        a.append(j)
    fl.write(str(a))
    fl.write('\n')
fl.close()
Open data3.txt in a text editor and replace the characters [, ], ', and , with nothing, so the final data3.txt contains one line of space-separated words per document.
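If you would rather not edit the file by hand, the same cleanup can be scripted; a minimal sketch that strips the same four characters:

# Remove [ ] ' and , so each line becomes plain space-separated words
with open('data3.txt', 'r', encoding='utf8') as f:
    text = f.read()
for ch in "[]',":
    text = text.replace(ch, '')
with open('data3.txt', 'w', encoding='utf8') as f:
    f.write(text)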
4. Training the model
min_count is the minimum number of occurrences a word needs to be kept in the vocabulary; size is the dimensionality of the word vectors (in gensim 4.0 and later this parameter is named vector_size).
from gensim.models import word2vec

# Text8Corpus streams the space-separated tokens in data3.txt
sentences = word2vec.Text8Corpus('data3.txt')
model = word2vec.Word2Vec(sentences, min_count=1, size=50)
Save the model (a raw string avoids backslash-escape problems in the Windows path):
model.save(r'F:\工作相关\jupyter\数据分析\model\word2vec_model')
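Once saved, the model can be loaded back and queried; a short sketch (the query word '数据' is only an example and must actually appear in the training vocabulary):

from gensim.models import word2vec

model = word2vec.Word2Vec.load(r'F:\工作相关\jupyter\数据分析\model\word2vec_model')
print(model.wv['数据'])                       # the 50-dimensional vector for one word
print(model.wv.most_similar('数据', topn=5))  # the five most similar words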