>- **🍨 This post is a learning-record blog for the [🔗365天深度学习训练营](https://mp.weixin.qq.com/s/AtyZUu_j2k_ScNH6e732ow)**
>- **🍖 Original author: [K同学啊 | tutoring and project customization available](https://mtyjkh.blog.csdn.net/)**
>- **🚀 Source: [K同学的学习圈子](https://www.yuque.com/mingtian-fkmxf/zxwb45)**
This post walks through how to train a Word2Vec model with the Gensim library.

Install the library:

```python
!pip install gensim
```
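Note that the code in this post uses the Gensim 4.x API (for example, the `vector_size` parameter, which was called `size` in 3.x). A quick way to check the installed version:

```python
import gensim

print(gensim.__version__)  # the snippets below assume a 4.x release
```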
Segment the corpus into words:
```python
import jieba
import jieba.analyse

# Register the main characters' names so jieba treats each one as a single token
jieba.suggest_freq('沙瑞金', True)
jieba.suggest_freq('田国富', True)
jieba.suggest_freq('高育良', True)
jieba.suggest_freq('侯亮平', True)
jieba.suggest_freq('钟小艾', True)
jieba.suggest_freq('陈岩石', True)
jieba.suggest_freq('欧阳菁', True)
jieba.suggest_freq('易学习', True)
jieba.suggest_freq('王大路', True)
jieba.suggest_freq('蔡成功', True)
jieba.suggest_freq('孙连城', True)
jieba.suggest_freq('季昌明', True)
jieba.suggest_freq('丁义珍', True)
jieba.suggest_freq('郑西坡', True)
jieba.suggest_freq('赵东来', True)
jieba.suggest_freq('高小琴', True)
jieba.suggest_freq('赵瑞龙', True)
jieba.suggest_freq('林华华', True)
jieba.suggest_freq('陆亦可', True)
jieba.suggest_freq('刘新建', True)
jieba.suggest_freq('刘庆祝', True)
jieba.suggest_freq('赵德汉', True)

# Segment the novel line by line; the with statement closes the file automatically
with open('in_the_name_of_people.txt', encoding='utf-8') as f:
    result_cut = []
    for line in f.readlines():
        result_cut.append(list(jieba.cut(line)))
```
The corpus is the full text of the novel 人民的名义 (In the Name of the People). Registering the characters' names with `jieba.suggest_freq` beforehand keeps jieba from splitting them into fragments, which makes the segmentation noticeably more accurate.
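To see the effect (a minimal sketch; the sentence below is invented for illustration and is not a line from the novel), segment a short sentence after the `suggest_freq` calls:

```python
# Invented example sentence, not from the corpus
sample = "沙瑞金和季昌明一起去找侯亮平"
print("/".join(jieba.cut(sample)))
# The registered names come out as whole tokens,
# e.g. 沙瑞金/.../季昌明/.../侯亮平 (splits of the other words may vary)
```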
Filter out stop words:
```python
# Punctuation and whitespace to strip from the token lists
stopwords_list = [",", "。", "\n", "\u3000", " ", ":", "!", "?", "…"]

def remove_stopwords(ls):
    return [word for word in ls if word not in stopwords_list]

# Keep only sentences that are non-empty after stop-word removal
result_stop = [remove_stopwords(x) for x in result_cut if remove_stopwords(x)]
```
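As a quick check on the preprocessing (a small sketch, not part of the original walkthrough), inspect the cleaned corpus:

```python
# Number of non-empty sentences, and the first few tokens of the first one
print(len(result_stop))
print(result_stop[0][:20])
```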
Train the model:
```python
from gensim.models import Word2Vec

model = Word2Vec(result_stop,      # tokenized, stop-word-filtered sentences
                 vector_size=100,  # dimensionality of the word vectors
                 window=5,         # max distance between current and context word
                 min_count=1)      # keep every word, even ones seen only once
```
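Once trained, the model can be persisted and reloaded later via Gensim's standard save/load API (a minimal sketch; the filename is an arbitrary choice):

```python
# Persist the trained model to disk
model.save("word2vec_people.model")

# ...and load it back later without retraining
# (Word2Vec is already imported above)
model = Word2Vec.load("word2vec_people.model")
```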
Compute the similarity between two words:
```python
# Cosine similarity between the two words' vectors, in [-1, 1]
print(model.wv.similarity('沙瑞金', '季昌明'))
print(model.wv.similarity('沙瑞金', '田国富'))
```
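`model.wv.similarity` is the cosine similarity of the two word vectors, which you can verify by hand (a short sketch assuming NumPy is available):

```python
import numpy as np

# Fetch the raw 100-dimensional vectors
a = model.wv['沙瑞金']
b = model.wv['季昌明']

# Cosine similarity: dot product divided by the product of the norms
cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos)  # should match model.wv.similarity('沙瑞金', '季昌明')
```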
Find the n most similar words:
```python
# Top-5 nearest neighbours of '金' by cosine similarity
# (the query word must exist in the vocabulary)
for word, score in model.wv.most_similar(positive=['金'], topn=5):
    print(word, score)
```
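`most_similar` also accepts a `negative` list for analogy-style queries (an illustrative sketch only; with a corpus this small the results may not be meaningful):

```python
# Words close to 沙瑞金 and 高育良 but far from 侯亮平
for word, score in model.wv.most_similar(positive=['沙瑞金', '高育良'],
                                         negative=['侯亮平'], topn=5):
    print(word, score)
```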
Find the odd one out:
```python
# All four words must appear in the training corpus, otherwise a KeyError is raised
odd_word = model.wv.doesnt_match(["苹果", "香蕉", "橙子", "鸡"])
print(f"Words that do not match in this set: {odd_word}")
```
Word frequency count:
```python
# 'count' stores how often the word occurred in the training corpus
word_frequency = model.wv.get_vecattr("沙瑞金", "count")
print(f"沙瑞金:{word_frequency}")
```