MemoryError when training a word2vec model
The pre-tokenized text file is under 1 GB and the vectors are 200-dimensional, so I really don't understand why it runs out of memory. Could it be that training makes two passes, first loading the vocabulary and then training the neural network, so the total exceeds 2 GB and the error is raised?
The code is just two lines (plus imports):

import multiprocessing
from gensim.models import word2vec

sentences = word2vec.Text8Corpus(u'wikichinesepreprocessed.txt')  # load the corpus
model = word2vec.Word2Vec(sentences, size=200, workers=multiprocessing.cpu_count())
The error:
UserWarning: detected Windows; aliasing chunkize to chunkize_serial
  warnings.warn("detected Windows; aliasing chunkize to chunkize_serial")
Traceback (most recent call last):
    model = word2vec.Word2Vec(sentences, size=200, workers=multiprocessing.cpu_count())
  File "...Python36-32\lib\site-packages\gensim\models\word2vec.py", line 503, in __init__
    self.build_vocab(sentences, trim_rule=trim_rule)
  File "...Python36-32\lib\site-packages\gensim\models\word2vec.py", line 579, in build_vocab
    self.finalize_vocab(update=update)  # build tables & arrays
  File "...Python36-32\lib\site-packages\gensim\models\word2vec.py", line 752, in finalize_vocab
    self.reset_weights()
  File "...Python36-32\lib\site-packages\gensim\models\word2vec.py", line 1173, in reset_weights
    self.syn1neg = zeros((len(self.wv.vocab), self.layer1_size), dtype=REAL)
MemoryError
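
The traceback paths show a 32-bit Python build (Python36-32), which caps the whole process at roughly 2 GB regardless of how much RAM the machine has. Here is a rough back-of-the-envelope estimate of what the failing line tries to allocate (the vocab_size below is an assumption, not measured from my corpus):

import numpy as np

vocab_size = 2000000   # assumption: ~2M unique tokens; substitute the real count
size = 200             # vector dimensionality from the call above
bytes_per_float = np.dtype(np.float32).itemsize  # gensim's REAL is float32

one_matrix = vocab_size * size * bytes_per_float
# word2vec keeps at least two such matrices: syn0 (the input vectors) and
# syn1neg (the output weights being allocated when the error hits).
print(one_matrix / 2**30, 'GiB per matrix')  # about 1.5 GiB each at these numbers

Two such matrices alone come to about 3 GiB at these numbers, which blows past the 32-bit limit even though the text file itself is under 1 GB.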
I tried reading the file in line by line, but that doesn't work: if word2vec consumes the text line by line, I can't set the training parameters at the start, and I don't know what I'd end up with.
If I do set the training parameters when creating the model, I can't feed in more text to continue training later.
And if I split the corpus into many small files (chunksize-style), I run into the same problem (the two-phase pattern I mean is sketched below).
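
For concreteness, the line-by-line, two-phase pattern I have in mind is roughly this (a sketch only: the LineIter class is my own, and the build_vocab/train split follows gensim's pre-4.0 API; note the MemoryError above happens while allocating the weight matrices, not while reading the file, so streaming by itself may not be enough):

import multiprocessing
from gensim.models import word2vec

class LineIter(object):
    # Stream the tokenized corpus one line at a time instead of loading it whole.
    def __init__(self, path):
        self.path = path
    def __iter__(self):
        with open(self.path, encoding='utf-8') as f:
            for line in f:
                yield line.split()

sentences = LineIter(u'wikichinesepreprocessed.txt')
model = word2vec.Word2Vec(size=200, workers=multiprocessing.cpu_count())
model.build_vocab(sentences)  # first pass: count words, then allocate the weight arrays
model.train(sentences, total_examples=model.corpus_count, epochs=model.iter)  # second pass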
Is there any other way around this? For example, changing Python's default memory-limit settings?
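
One thing that would directly shrink the failing allocation is capping the vocabulary, since both weight matrices are vocab_size by size. A sketch using documented Word2Vec parameters (the values here are placeholders, not tuned):

model = word2vec.Word2Vec(
    sentences,
    size=200,
    min_count=10,           # drop words seen fewer than 10 times (default is 5)
    max_vocab_size=500000,  # prune rare words during the counting pass to bound RAM
    workers=multiprocessing.cpu_count(),
)

Failing that, a 64-bit Python build lifts the roughly 2 GB per-process cap that the Python36-32 in the traceback points to; as far as I know there is no Python setting that raises a 32-bit process's address-space limit.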