Word2vec Tutorial
I never got round to writing a tutorial on how to use word2vec in gensim. It’s simple enough and the API docs are straightforward, but I know some people prefer more verbose formats. Let this post be a tutorial and a reference example.
UPDATE: the complete HTTP server code for the interactive word2vec demo below is now open-sourced on GitHub. For a high-performance similarity server for documents, see ScaleText.com.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):
```python
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
```
Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input provide sentences sequentially when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
```python
import os

class MySentences(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        # iterate over all files in the directory, one sentence per line
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                yield line.split()
```
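To see the iterator in action, here is a short, self-contained sketch: it builds a throwaway corpus directory (the path and file name are illustrative, not part of the tutorial), streams sentences from it, and notes where training would plug in.

```python
import os
import tempfile

# same memory-friendly iterator as defined above
class MySentences(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                yield line.split()

# throwaway demo directory; replace with the path to your own corpus
dirname = tempfile.mkdtemp()
with open(os.path.join(dirname, 'corpus.txt'), 'w') as f:
    f.write("first sentence\nsecond sentence\n")

sentences = MySentences(dirname)  # a memory-friendly iterator
print(sorted(sentences))

# training would then stream the corpus instead of holding it in RAM:
# model = gensim.models.Word2Vec(sentences)
```

Note that the iterator can be consumed multiple times (gensim iterates over the input once to build the vocabulary and again for each training pass), which is why `__iter__` re-opens the files on every call.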