Introduction to the Word2Vec class
Definition
def __init__(self, sentences=None, size=100, alpha=0.025, window=5, min_count=5,
max_vocab_size=None, sample=1e-3, seed=1, workers=3, min_alpha=0.0001,
sg=0, hs=0, negative=5, cbow_mean=1, hashfxn=hash, iter=5, null_word=0,
trim_rule=None, sorted_vocab=1, batch_words=MAX_WORDS_IN_BATCH, compute_loss=False, callbacks=()):
Commonly used parameters
sentences: an iterable of tokenized sentences (e.g. a list of lists of tokens); it can be built with BrownCorpus, Text8Corpus, or LineSentence
size: dimensionality of the word vectors, default 100
window: maximum distance within a sentence between the current word and the predicted word
min_count: used when building the vocabulary; words that appear fewer than min_count times are discarded, default 5
workers: number of worker threads used to parallelize training
sg: training algorithm; the default sg=0 uses CBOW, while sg=1 uses skip-gram
Training method 1:
import multiprocessing

import gensim
from gensim.models.word2vec import LineSentence

dim = 300
embedding_size = dim
# model_dir is assumed to be defined elsewhere and to point at the corpus directory
model = gensim.models.Word2Vec(LineSentence(model_dir + 'train_word.txt'),
                               size=embedding_size,
                               window=5,
                               min_count=10,
                               workers=multiprocessing.cpu_count())
# Save the full model (vocabulary, weights, and training state) ...
model.save(model_dir + "word2vec_gensim" + str(embedding_size) + ".w2v")
# ... and the vectors alone in plain-text word2vec format
model.wv.save_word2vec_format(model_dir + "word2vec_gensim_300d.txt", binary=False)
Training method 2:
import gensim
from gensim.models.word2vec import LineSentence

# Materialize the corpus so it can be iterated over more than once
documents = list(LineSentence(model_dir + 'train_word.txt'))
print(len(documents))
print(documents[:10])
# Passing documents to the constructor already builds the vocabulary and
# trains for the default number of epochs (iter=5)
model = gensim.models.Word2Vec(documents, size=300)
# This train() call then continues training for 10 more epochs on the same corpus;
# to control the epoch count in a single pass, set iter= in the constructor instead
model.train(documents, total_examples=len(documents), epochs=10)
model.save("./input/word2vec.w2v")
model.wv.save_word2vec_format("./input/word_gensim_300d.txt", binary=False)