
Paragraph Vectors


A PyTorch implementation of Paragraph Vectors (doc2vec).

[Figure: DM and DBOW model architectures (dmdbow.png)]

All models minimize the Negative Sampling objective as proposed by T. Mikolov et al. [1]. This provides scope for sparse updates (i.e. only vectors of sampled noise words are used in forward and backward passes). In addition to that, batches of training data (with noise sampling) are generated in parallel on a CPU while the model is trained on a GPU.
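The shape of this objective can be sketched as follows. This is a minimal illustration, not the library's actual code; the assumed layout is that column 0 of the score matrix belongs to the true target word and the remaining columns to sampled noise words:

```python
import torch
import torch.nn.functional as F

def negative_sampling_loss(scores):
    """Negative-sampling loss for one batch.

    scores: tensor of shape (batch_size, 1 + num_noise_words), where
    column 0 holds the score of the true target word and the remaining
    columns hold scores of the sampled noise words.
    """
    # Maximize log-sigmoid of the true score ...
    true_loss = F.logsigmoid(scores[:, 0])
    # ... and log-sigmoid of the negated noise scores
    noise_loss = F.logsigmoid(-scores[:, 1:]).sum(dim=1)
    return -(true_loss + noise_loss).mean()

# e.g. batch_size=32, num_noise_words=2
batch_scores = torch.randn(32, 1 + 2)
loss = negative_sampling_loss(batch_scores)
```

Because the loss only touches the true target and the sampled noise words, gradients flow to just those output vectors, which is what makes the sparse updates possible.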

Caveat emptor! Be warned that paragraph-vectors is in an early-stage development phase. Feedback, comments, suggestions, contributions, etc. are more than welcome.

Installation

Install PyTorch (follow the link for instructions).

Install the paragraph-vectors library.

git clone https://github.com/inejc/paragraph-vectors.git

cd paragraph-vectors

pip install -e .

Note that installation in a virtual environment is the recommended way.

Usage

Put a csv file in the data directory. Each row represents a single document and the first column should always contain the text. Note that a header line is mandatory.

data/example.csv

----------------

text,...

"In the week before their departure to Arrakis, when all the final scurrying about had reached a nearly unbearable frenzy, an old crone came to visit the mother of the boy, Paul.",...

"It was a warm night at Castle Caladan, and the ancient pile of stone that had served the Atreides family as home for twenty-six generations bore that cooled-sweat feeling it acquired before a change in the weather.",...

...
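A file in this shape can be produced with the standard csv module; the quoting done by csv.writer keeps commas inside documents intact (the file name and documents below are just for illustration):

```python
import csv
import os

docs = [
    "In the week before their departure to Arrakis, an old crone came to visit.",
    "It was a warm night at Castle Caladan.",
]

os.makedirs("data", exist_ok=True)
with open("data/example.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])      # the mandatory header line
    for doc in docs:
        writer.writerow([doc])     # the text must be the first column

# Read the file back to confirm the layout
with open("data/example.csv", newline="") as f:
    rows = list(csv.reader(f))
```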

Run train.py with selected parameters (models are saved in the models directory).

python train.py start --data_file_name 'example.csv' --num_epochs 100 --batch_size 32 --num_noise_words 2 --vec_dim 100 --lr 1e-3

Parameters

data_file_name: str

Name of a file in the data directory.

model_ver: str, one of ('dm', 'dbow'), default='dbow'

Version of the model as proposed by Q. V. Le et al. [5], Distributed Representations of Sentences and Documents. 'dbow' stands for Distributed Bag Of Words, 'dm' stands for Distributed Memory.

vec_combine_method: str, one of ('sum', 'concat'), default='sum'

Method for combining paragraph and word vectors when model_ver='dm'. Currently only the 'sum' operation is implemented.

context_size: int, default=0

Half the size of a neighbourhood of target words when model_ver='dm' (i.e. how many words to the left and to the right are regarded as context). When model_ver='dm', context_size has to be greater than 0; when model_ver='dbow', context_size has to be 0.

num_noise_words: int

Number of noise words to sample from the noise distribution.

vec_dim: int

Dimensionality of vectors to be learned (for paragraphs and words).

num_epochs: int

Number of iterations to train the model (i.e. number of times every example is seen during training).

batch_size: int

Number of examples per single gradient update.

lr: float

Learning rate of the Adam optimizer.

save_all: bool, default=False

Indicates whether a checkpoint is saved after each epoch. If false, only the best performing model is saved.

generate_plot: bool, default=True

Indicates whether a diagnostic plot displaying loss value over epochs is generated after each epoch.

max_generated_batches: int, default=5

Maximum number of pre-generated batches.

num_workers: int, default=1

Number of batch generator jobs to run in parallel. If value is set to -1, total number of machine CPUs is used. Note that order of batches is not guaranteed when num_workers > 1.
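For intuition, the 'dm' forward pass with vec_combine_method='sum' can be sketched like this. All class and tensor names here are assumptions made for illustration; this is a simplification, not the library's implementation:

```python
import torch
import torch.nn as nn

class TinyDM(nn.Module):
    """Minimal Distributed Memory sketch: the paragraph vector is summed
    with the context word vectors, then scored against target/noise words."""

    def __init__(self, num_docs, num_words, vec_dim):
        super().__init__()
        self.paragraph_vecs = nn.Parameter(torch.randn(num_docs, vec_dim))
        self.word_vecs_in = nn.Parameter(torch.randn(num_words, vec_dim))
        self.word_vecs_out = nn.Parameter(torch.randn(num_words, vec_dim))

    def forward(self, doc_ids, context_ids, target_ids):
        # doc_ids: (batch,), context_ids: (batch, 2 * context_size),
        # target_ids: (batch, 1 + num_noise_words)
        combined = self.paragraph_vecs[doc_ids] \
            + self.word_vecs_in[context_ids].sum(dim=1)  # the 'sum' combination
        # Score the combined representation against each target/noise word
        return torch.bmm(self.word_vecs_out[target_ids],
                         combined.unsqueeze(2)).squeeze(2)

model = TinyDM(num_docs=10, num_words=50, vec_dim=16)
scores = model(torch.tensor([0, 1]),
               torch.randint(50, (2, 4)),   # context_size=2
               torch.randint(50, (2, 3)))   # 1 target + 2 noise words
```

The 'dbow' version is simpler still: it drops the context words entirely and scores the paragraph vector alone against the target and noise words.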

Export trained paragraph vectors to a csv file (vectors are saved in the data directory).

python export_vectors.py start --data_file_name 'example.csv' --model_file_name 'example_model.dbow_numnoisewords.2_vecdim.100_batchsize.32_lr.0.001000_epoch.25_loss.0.981524.pth.tar'

Parameters

data_file_name: str

Name of a file in the data directory that was used during training.

model_file_name: str

Name of a file in the models directory (a model trained on the data_file_name dataset).
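The exported csv can then be loaded back for downstream tasks. The layout below (one row per document, one column per dimension, with a header line) is an assumption made for illustration; check the header of the file actually produced by export_vectors.py:

```python
import numpy as np

# Write a tiny stand-in for an exported file (the real layout may differ)
with open("vectors_demo.csv", "w") as f:
    f.write("d0,d1,d2\n")
    f.write("0.1,0.2,0.3\n")
    f.write("0.4,0.5,0.6\n")

# Skip the header and load the numeric matrix
vectors = np.loadtxt("vectors_demo.csv", delimiter=",", skiprows=1)
```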

Example of trained vectors

First two principal components (1% of cumulative variance explained) of 300-dimensional document vectors trained on arXiv abstracts. Shown are two subcategories from Computer Science. The dataset comprised 74219 documents and 91417 unique words.

[Figure: first two principal components of learned document vectors (learned_vectors_pca.png)]
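A projection like this one can be reproduced with plain numpy PCA via SVD. The random matrix below is a stand-in for the learned vectors (the arXiv data itself is not included):

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(100, 300))  # stand-in for 300-d document vectors

# PCA via SVD: center the data, decompose, keep the first two components
centered = vectors - vectors.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
components_2d = centered @ vt[:2].T    # shape (100, 2)

# Fraction of variance explained by the first two components
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

With real document vectors, colouring the 2-d points by document category reveals cluster structure like that in the figure above.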

Resources
