Week N5: Training a Word2Vec Model with the Gensim Library

1. Install the Gensim library

pip install --upgrade gensim
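
The code below relies on the Gensim 4.x API (vector_size, wv.get_vecattr), so it is worth confirming which version was installed:

import gensim
print(gensim.__version__)  # the examples in this post assume a 4.x release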

2. Tokenize the raw corpus


import jieba
import jieba.analyse
jieba.suggest_freq('沙瑞金',True)  # register character names so jieba keeps them as single tokens (a short check follows the output below)
jieba.suggest_freq('田国富',True)
jieba.suggest_freq('高育良',True)
jieba.suggest_freq('侯亮平',True)
jieba.suggest_freq('钟小艾',True)
jieba.suggest_freq('陈岩石',True)
jieba.suggest_freq('欧阳菁',True)
jieba.suggest_freq('易学习',True)
jieba.suggest_freq('王大路',True)
jieba.suggest_freq('蔡成功',True)
jieba.suggest_freq('孙连城',True)
jieba.suggest_freq('季昌明',True)
jieba.suggest_freq('丁义珍',True)
jieba.suggest_freq('郑西坡',True)
jieba.suggest_freq('赵东来',True)
jieba.suggest_freq('高小琴',True)
jieba.suggest_freq('赵瑞龙',True)
jieba.suggest_freq('林华华',True)
jieba.suggest_freq('陆亦可',True)
jieba.suggest_freq('刘新建',True)
jieba.suggest_freq('刘庆祝',True)
jieba.suggest_freq('赵德汉',True)
with open('./data/in_the_name_of_people.txt', encoding='utf-8') as f:
    result_cut = []
    for line in f.readlines():  # segment the novel line by line
        result_cut.append(list(jieba.cut(line)))
# the with statement closes the file automatically; no explicit f.close() is needed
 
# custom stop-word list
stopwords_list = [",", "。", "\n", "\u3000", "", ":", "!", "?", "…"]

def remove_stopwords(ls):  # drop stop words from a token list
    return [word for word in ls if word not in stopwords_list]

result_stop = [remove_stopwords(x) for x in result_cut if remove_stopwords(x)]  # keep only non-empty lines
 
print(result_stop[100:103])

Output:

[[' ', '像是', '为', '他', '的', '思路', '做', '注解', ',', '赵德汉', '咀嚼', '着', '自由', '时光', '里', '的', '最后', '一碗', '炸酱面', ',', '抱怨', '说', ':', '你们', '反贪', '总局', '抓', '贪官', '怎么', '抓到', '我', '这儿', '来', '了', '?', '哎', ',', '有', '几个', '贪官', '住', '这种', '地方', '?', '七层', '老楼', ',', '连', '个', '电梯', '都', '没有', ',', '要是', '贪官', '都', '这', '样子', ',', '老百姓', '得', '放鞭炮', '庆贺', '了', '!', '他', '的', '声音', '被', '面条', '堵', '在', '嗓子眼', ',', '有些', '呜呜', '噜', '噜', '的', ' '], [' ', '是', ',', '是', ',', '老赵', ',', '瞧', '你', '多', '简朴', '啊', ',', '一碗', '炸酱面', '就', '对付', '一顿', '晚饭', ' '], [' ', '赵德汉', '吃', '得', '有滋有味', ':', '农民', '的', '儿子', '嘛', ',', '好', '这', '一口', ' ']]
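
The suggest_freq hints registered above keep the main characters' names intact during segmentation; a quick check on a made-up sentence (not taken from the corpus) illustrates the effect:

# with the suggest_freq hints applied, the names should stay whole
print(list(jieba.cut("沙瑞金和侯亮平在谈话")))  # hypothetical sentence; expected to contain '沙瑞金' and '侯亮平' as single tokens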

3. Train the Word2Vec model

from gensim.models import Word2Vec
model = Word2Vec(result_stop,      # the tokenized corpus used for training
                 vector_size=100,  # dimensionality of the word vectors (default 100)
                 window=5,         # maximum distance between the current and the predicted word within a sentence
                 min_count=1)      # words with a total frequency below min_count are dropped from the vocabulary
 
# compute the similarity between two words
print(model.wv.similarity('沙瑞金', '季昌明'))
print(model.wv.similarity('沙瑞金', '田国富'))

# list the 5 words most similar to '沙瑞金'
for e in model.wv.most_similar(positive=['沙瑞金'], topn=5):
    print(e[0], e[1])
 
# pick out the word that does not belong with the others
odd_word = model.wv.doesnt_match(["苹果", "香蕉", "橙子", "书"])
print(f"The word that does not match in this group: {odd_word}")

# look up the corpus frequency of a word
word_frequency = model.wv.get_vecattr("沙瑞金", "count")
print(f"沙瑞金: {word_frequency}")

Output:

0.9992292
0.9993775
祁同伟 0.9995201741252625
高育良 0.9994417985196838
肖钢玉 0.9994159936904907
田国富 0.999377429485321
一下 0.99935382604599
The word that does not match in this group: 书
沙瑞金: 353
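
As a minimal follow-up sketch (the file name word2vec.model is an arbitrary choice, not from the original steps), the trained model can be saved to disk, reloaded, and queried for a raw word vector:

# persist the trained model and load it back later
model.save("word2vec.model")                    # arbitrary file name chosen for illustration
loaded_model = Word2Vec.load("word2vec.model")

# fetch the 100-dimensional vector learned for a single word
vec = loaded_model.wv['沙瑞金']
print(vec.shape)                                # (100,)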
