Word2Vec Implementation in NLP (1) (gensim library)

I'm still a beginner, so these are my working notes.

Downloading the Corpus

I mainly use the Chinese Wikipedia corpus; link: https://dumps.wikimedia.org/zhwiki/
For Chinese training, pick any dump date and download the "articles" dump, whose file name ends in .xml.bz2. A larger dump trains more slowly but gives better vectors; a smaller one, the opposite.
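For example, the latest full-articles dump can usually be fetched directly; the file name below follows the standard dump naming pattern, but check the dumps page for the exact name of the date you pick:

#wget https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2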

Format Conversion and Traditional-to-Simplified Conversion

1. Format conversion: convert the xml.bz2 file to txt

# -*- coding:utf-8 -*-
import logging
import os.path
import sys
import warnings

warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
from gensim.corpora import WikiCorpus

if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)

    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))

    # check and process input arguments
    if len(sys.argv) != 3:
        print("Using: python xml2txt.py input_file_name.xml.bz2 output_file_name.txt")
        sys.exit(1)
    inp, outp = sys.argv[1:3]
    space = " "
    i = 0

    output = open(outp, 'w', encoding='utf-8')
    wiki = WikiCorpus(inp, lemmatize=False, dictionary={})
    for text in wiki.get_texts():
        # each article comes back as a list of tokens; write one space-joined article per line
        output.write(space.join(text) + "\n")
        i = i + 1
        if i % 10000 == 0:
            logger.info("Saved " + str(i) + " articles")

    output.close()
    logger.info("Finished Saved " + str(i) + " articles")
# Convert the xml.bz2 file to a txt file
# python xml2txt.py zhwiki-20190820-pages-articles.xml.bz2 wikizh.txt
# python xml2txt.py zhwiki-20190820-pages-articles6.xml-p4731444p6231444.bz2 wikizh_300m.txt

This code is adapted from the official process_wiki.py script; the commented lines at the bottom are the run commands, so adjust the file names to your own.
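Note that gensim 4.0 removed the lemmatize argument from WikiCorpus, so on a newer gensim the constructor call in the script above needs to drop it. A minimal sketch, assuming gensim >= 4.0:

# gensim >= 4.0: the lemmatize argument no longer exists
wiki = WikiCorpus(inp, dictionary={})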

2. Traditional-to-simplified conversion: use OpenCC.
Download OpenCC and run the following command in the directory containing the file; change the file names as needed.

#opencc -i wikizh_300m.txt -o wiki_zh_train_300m.txt -c t2s.json
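If you prefer to stay in Python, the OpenCC Python bindings expose the same conversion. A minimal sketch, assuming pip install opencc-python-reimplemented and the file names used above:

from opencc import OpenCC

cc = OpenCC('t2s')  # traditional -> simplified, same as the t2s.json config
with open('wikizh_300m.txt', encoding='utf-8') as fin, \
     open('wiki_zh_train_300m.txt', 'w', encoding='utf-8') as fout:
    for line in fin:
        fout.write(cc.convert(line))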

Word Segmentation

Use the jieba segmentation tool, a common choice for Chinese; install it with: pip install jieba
Change the file names inside the code as needed.

import codecs
import jieba

f = codecs.open('wiki_zh_train_300m.txt', 'r', encoding="utf8")
target = codecs.open("wiki_zh_train_seg_300m.txt", 'w', encoding="utf8")
print('open files')

line_num = 1
line = f.readline()
while line:
    # segment each article and rejoin the tokens with spaces, one article per line
    line_seg = " ".join(jieba.cut(line))
    target.writelines(line_seg)
    line_num = line_num + 1
    line = f.readline()

print('processed articles:', line_num - 1)
f.close()
target.close()
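As a quick sanity check on what the segmenter produces, a minimal sketch with a sample sentence of my own:

import jieba

# accurate mode (the default used above)
print(" ".join(jieba.cut('我爱自然语言处理')))
# expected output, roughly: 我 爱 自然语言 处理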

Building the Model with gensim

gensim is an extremely powerful NLP library: the model is built in and can be called directly. The last commented line is the run command; change the file names as needed. I still know too little about it and have a lot more to learn.

import logging
import os.path
import sys
import multiprocessing
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

if __name__ == '__main__':

    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)
    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))
    if len(sys.argv) < 4:
        print("Using: python word2vec_model.py input_seg.txt output.model output.vector")
        sys.exit(1)
    inp, outp1, outp2 = sys.argv[1:4]
    # size (renamed vector_size in gensim >= 4.0) is the embedding dimensionality,
    # window the context size, min_count the minimum word frequency
    model = Word2Vec(LineSentence(inp), size=400, window=5, min_count=5,
                     workers=multiprocessing.cpu_count())
    model.save(outp1)                                   # full model, can resume training
    model.wv.save_word2vec_format(outp2, binary=False)  # plain-text word vectors
# python word2vec_model.py wiki_zh_train_seg_300m.txt wiki_zh_train_300m.model wiki_zh_train_300m.vector
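The .vector file is in the plain word2vec text format, so it can be loaded later without the full model. A minimal sketch using gensim's KeyedVectors and the file name from the command above:

from gensim.models import KeyedVectors

# load the text-format vectors written by save_word2vec_format(..., binary=False)
kv = KeyedVectors.load_word2vec_format('wiki_zh_train_300m.vector', binary=False)
print(kv['学习'][:5])  # first few dimensions of one word's vector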

Computing Word Similarity

from gensim.models import Word2Vec

# load the model saved in the previous step
en_wiki_word2vec_model = Word2Vec.load('wiki_zh_train_300m.model')

testwords = ['学习', '语文', '学术', '运动', '等会']
for word in testwords:
    # top 10 nearest neighbours by cosine similarity
    res = en_wiki_word2vec_model.wv.most_similar(word)
    print(word)
    print(res)
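Besides most_similar, gensim can also score a single word pair directly; a quick sketch with two of the test words above (both must be in the vocabulary):

# cosine similarity between two in-vocabulary words
print(en_wiki_word2vec_model.wv.similarity('学习', '学术'))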