representation learning for resource-constrained keyphrase generation


https://arxiv.org/pdf/2203.08118.pdf

abstract

Problem: state-of-the-art keyphrase generation methods rely on large labeled datasets, and their performance degrades when only limited labeled data is available.
Approach: a data-oriented method that identifies salient information from unsupervised corpus statistics, then learns a task-specific intermediate representation on top of a pretrained language model.
Training objectives: salient span recovery (SSR) and salient span prediction (SSP).

introduction


Keyphrase generation has applications in many areas (document clustering, recommendation systems, information retrieval, text summarization, text classification).
Many methods (One2Set, ExHiRD-h) perform well on large-scale labeled data, but their performance degrades when the amount of data shrinks.
This paper focuses on the low-resource problem: unsupervised methods tend to underperform supervised ones or require building large databases; there are also semi-supervised and adversarial approaches.
This paper learns task-specific representations on top of a pretrained language model using in-domain unlabeled data to facilitate low-resource keyphrase generation.
Observing that keyphrases are often spans of the text or synonyms of salient fragments, this paper first identifies such information (called salient spans). Specifically, a span's salience is defined by its effectiveness at retrieving relevant documents from a large pool. After finding each document's salient spans, salient span recovery (SSR) and salient span prediction (SSP) are defined as objectives for further pretraining BART, which is finally fine-tuned on the keyphrase generation task.
GitHub

methods

problem definition

$D_{kp}=\{(x^i, p^i)\}$ is the keyphrase generation dataset
$x^i=(x^i_1, ..., x^i_{|x^i|})$ is the $i$-th input document
$p^i=(p^i_1, ..., p^i_{|p^i|})$ is the set of keyphrases for the $i$-th document
$D_{aux}$ is an unlabeled dataset from the same domain as $D_{kp}$
$y^i=(p^i_1\ [SEP]\ p^i_2\ [SEP]\ ...\ [SEP]\ p^i_{|p^i|})$ is the target sequence formed by concatenating the keyphrases
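Building the target sequence amounts to a simple join, assuming the keyphrases are plain strings (a minimal sketch; the [SEP] delimiter follows the paper's notation):

```python
def build_target(keyphrases):
    """Concatenate a document's keyphrases into one target sequence:
    p1 [SEP] p2 [SEP] ... [SEP] pn."""
    return " [SEP] ".join(keyphrases)

# hypothetical example keyphrases for one document
print(build_target(["keyphrase generation", "low-resource", "pretraining"]))
# keyphrase generation [SEP] low-resource [SEP] pretraining
```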

retrieval for salient spans

A salient span is defined as a contiguous string (n-gram) within a document; how well a span retrieves its source document from the document pool can be measured with BM25.
For any document $x^i \in D_{aux}$, let $Q^i=\{q^i_1, ..., q^i_n\}$ be its set of candidate n-grams. Writing $BM25(x, q)$ for the BM25 score between a query $q$ and a document $x \in D_{aux}$, define $rank(q^i_j) = |\{x' \in D_{aux} : BM25(x', q^i_j) > BM25(x^i, q^i_j)\}|$.

The set of salient spans is then selected by applying a filter to $rank(q^i_j)$: $S^i = \{q^i_j \in Q^i : rank(q^i_j) \leq threshold(|q^i_j|)\}$, where $threshold(\cdot)$ gives the maximum acceptable rank as a function of span length.
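This ranking-and-filtering step can be sketched as follows. This is not the authors' implementation (which uses Elasticsearch's BM25 over a large pool); it is a toy Okapi BM25 scorer over a small tokenized pool, with the rank defined exactly as above:

```python
import math
from collections import Counter

def bm25_scores(docs, query, k1=1.2, b=0.75):
    """Okapi BM25 score of `query` (a token list) against every tokenized
    document in the pool `docs`."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def salient_spans(doc_idx, docs, candidates, threshold):
    """Keep candidate n-grams whose rank (number of pool documents scoring
    strictly higher than the source document) is within threshold(span length)."""
    kept = []
    for q in candidates:
        scores = bm25_scores(docs, q)
        rank = sum(1 for s in scores if s > scores[doc_idx])
        if rank <= threshold(len(q)):
            kept.append(q)
    return kept

# tiny illustrative pool
docs = [["neural", "keyphrase", "generation", "model"],
        ["cooking", "recipes", "pasta"],
        ["keyphrase", "extraction", "survey"]]
# "keyphrase generation" retrieves doc 0 at rank 0, so it is kept as salient;
# "keyphrase" alone ranks the shorter doc 2 higher, so it is filtered out
print(salient_spans(0, docs, [["keyphrase", "generation"], ["keyphrase"]],
                    threshold=lambda n: 0))
```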

in-domain representation learning

salient span recovery

Let $S^i=\{s^i_1, ..., s^i_n\}$ be the salient spans of $x^i$. During training, each $s^i_j$ is corrupted with some probability, and words outside the salient spans are corrupted with a (lower) probability as well, yielding the final input $x^i_{SSR}$. The model minimizes the cross-entropy loss $L_{CE}(z^i, x^i)$ of reconstructing the original document.
SSR-M (corrupted words are replaced with [MASK])
SSR-D (corrupted words are deleted)
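A minimal sketch of the corruption step behind SSR-M and SSR-D. The probabilities and the position-set interface are illustrative, not the paper's exact settings:

```python
import random

def corrupt(tokens, salient_positions, p_salient=0.9, p_other=0.1,
            mode="mask", seed=0):
    """Corrupt tokens inside salient spans with high probability and other
    tokens with low probability. mode='mask' replaces corrupted tokens with
    [MASK] (SSR-M); mode='delete' drops them (SSR-D). The model is then
    trained to reconstruct the original sequence from this corrupted input."""
    rng = random.Random(seed)
    out = []
    for i, tok in enumerate(tokens):
        p = p_salient if i in salient_positions else p_other
        if rng.random() < p:
            if mode == "mask":
                out.append("[MASK]")
            # in delete mode the corrupted token is simply dropped
        else:
            out.append(tok)
    return out

tokens = ["we", "study", "keyphrase", "generation", "models"]
# suppose positions 2-3 ("keyphrase generation") form a salient span
print(corrupt(tokens, {2, 3}, p_salient=1.0, p_other=0.0, mode="mask"))
# ['we', 'study', '[MASK]', '[MASK]', 'models']
```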

salient span prediction

The corrupted input $x^i_{SSR}$ is obtained in the same way as above.
The target is the concatenation $x^i_{SSP}=(s^i_1\ [SEP]\ s^i_2\ [SEP]\ ...\ [SEP]\ s^i_n)$.
The model minimizes the cross-entropy loss $L_{CE}(z^i, x^i_{SSP})$.
SSP-M (corrupted words are replaced with [MASK])
SSP-D (corrupted words are deleted)

git clone https://github.com/xiaowu0162/low-resource-kpgen.git
cd low-resource-kpgen
docker run --gpus 2 -it --volumes-from datavol -v $PWD/:/usr/src/myapp --name low-resource-kpgen -w /usr/src/myapp pytorch/pytorch:1.5-cuda10.1-cudnn7-devel /bin/bash

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A4B469963BF863CC

apt-get update
apt-get install git
apt-get install vim
apt-get install wget
apt-get install lrzsz
apt-get install curl

pip install -r requirements.txt  -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
git clone --branch model-experiment-0.10.2 https://github.com/xiaowu0162/fairseq.git
cd fairseq
pip install --editable ./
cd ..

pip install transformers==3.2.0 -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
pip install prettytable -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

------------------------------------------------------------------------------------------------------------

# By default, this processes the data for all tasks
# kp20k|nus|inspec|krapivin|semeval
# kp20k-20k-1|kp20k-20k-2|kp20k-20k-3
# kptimes
cd data/scikp
https://drive.google.com/uc?export=download&id=1DbXV1mZXm_o9bgfwPV9PV0ZPcNo1cnLp
bash run.sh
# The steps below were not performed
cd ../kp20k-20k
bash run.sh
cd ../kptimes
https://drive.google.com/uc?export=download&id=1LGR62JPHL2-zesX5lT53KqfzHB78g0d_
https://drive.google.com/uc?export=download&id=1Fq3oZR99OTYKKe88-5ocwbdMT_CMdcEb
https://drive.google.com/uc?export=download&id=1F-HDwjI23f6nvtFiea-CGIO_IaKi_2fK
bash run.sh

cd finetuning_fairseq
bash preprocess.sh

# This takes a while
# Download the model files from the URLs below and put them in the models folder
https://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz
https://dl.fbaipublicfiles.com/fairseq/models/bart.large.tar.gz

https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt

run_train.sh assumes by default that kp20k does not need training, but that is the only dataset I processed, so I modified run_train.sh at p87 to make it trainable
bash run_train.sh 1 kp20k

# The hyperparameter settings are for training on kp20k-20k-1/2/3 with a single GPU. If you use more than 1 GPU, please make sure to reduce UPDATE_FREQ to achieve the same batch size.
# We recommend using an effective batch size of 64 or 32 for finetuning, where effective batch size = NUM_GPUs * PER_DEVICE_BSZ * UPDATE_FREQ
# The supported DATASET_NAME are kp20k, kptimes, kp20k-20k-1, kp20k-20k-2, kp20k-20k-3
# BART_PATH will default to the pre-trained BART model. You can start with other BART checkpoints (e.g., the intermediate representations pretrained with TI or SSR) by providing the corresponding checkpoint.pt files as parameters. To run randomly initialized BART, please remove the --restore-file flag in the script.

# An error appeared
# ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
# This is a numpy version issue; upgrading numpy fixes it
pip install -U numpy  -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

# This produces the final checkpoint_best.pt
# Testing
bash run_test.sh 1 kp20k 20220727-0709_kp20k_checkpoints

# DATASET_NAME: use kp20k to evaluate on all five scientific datasets.
# SAVE_DIR: the path to the checkpoint (e.g., checkpoint_best.pt).
# dataset_hypotheses.txt contains the model's raw predictions.
# dataset_predictions.txt contains postprocessed predictions.
# results_log_dataset.txt contains all the scores.

------------------------------------------------------------------------------------------------------------

# Now for the main part, since SSR performs best
cd intermediate_learning/salient_span_recovery/bm25-spans
bash find_spans_bm25.sh

# Errors started here; the first is that Java is missing, so Elasticsearch (es) cannot be installed
# I downloaded jdk-8u291-linux-x64.tar.gz, extracted it, and configured the environment
https://www.cnblogs.com/lamp01/p/8932740.html

# At this point Elasticsearch and trec_eval are installed, completing the first 19 lines of find_spans_bm25.sh
# The next step is starting Elasticsearch; since the script failed to start it, I looked into where it went wrong

cd /usr/src/myapp/intermediate_learning/salient_span_recovery/bm25-spans/bm25/CLIReval/external_tools
# Run the command below to start it; it fails
./elasticsearch-6.5.3/bin/elasticsearch
# Caused by: java.lang.RuntimeException: can not run elasticsearch as root
# That was the only line of the error I could understand: Elasticsearch cannot be run as root
# So create a dedicated user just to start it
# Create the user
useradd user-es

# Give that user ownership:
chown user-es:user-es -R /usr/src/myapp/intermediate_learning/salient_span_recovery/bm25-spans/bm25/CLIReval/external_tools/elasticsearch-6.5.3

# Switch to the user-es user
su user-es

# Start Elasticsearch
./elasticsearch-6.5.3/bin/elasticsearch
# Exit back to the previous user
exit

# Check whether port 9200 is responding
curl http://localhost:9200

# Modify bm25/CLIReval/scripts/server.sh
function start_server {
    echo "Starting ElasticSearch server"
    ES_JAVA_OPTS="-Xms${es_memory} -Xmx${es_memory}"
    su user-es -c "./external_tools/elasticsearch-6.5.3/bin/elasticsearch -d -Ehttp.port=9200"
    echo "PID: $(jps | grep Elasticsearch)"
    echo "Make sure server is up by running './scripts/server.sh status' and checking 'health status, etc' info is returned"
}

# Running bash find_spans_bm25.sh now prints
2. Downloading ElasticSearch
Starting ElasticSearch server
PID: 1486 Elasticsearch
Make sure server is up by running './scripts/server.sh status' and checking 'health status, etc' info is returned

https://blog.csdn.net/smilehappiness/article/details/118466378
https://blog.csdn.net/Mrerlou/article/details/119250716

# The third error
# ModuleNotFoundError: No module named 'retrieval'
# Presumably the author's default directory name was retrieval, so every import of this package needs to be changed
# I modified bm25/build_db.py, bm25/build_es.py, and bm25/doc_db.py

# While at it, install the missing packages
pip install jsonlines  -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
pip install scipy==1.5.4 -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
pip install scikit-learn==0.24.0 -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

# The fourth error
07/29/2022 02:53:24 AM: [ Reading into database... ]
07/29/2022 02:53:24 AM: [ Reading into database... ]
Traceback (most recent call last):
  File "build_db.py", line 126, in <module>
    store_contents(args.files, args.save_path, args.preprocess, args.num_workers)
  File "build_db.py", line 92, in store_contents
    conn = sqlite3.connect(save_path)
sqlite3.OperationalError: unable to open database file

# bm25/pipeline_bm25_salient_span.sh processes the data; its data source is DATA_DIR=${CODE_BASE_DIR}/retrieval/data, where CODE_BASE_DIR=`realpath ../..`, so DATA_DIR=/usr/src/myapp/intermediate_learning/salient_span_recovery/retrieval/data
# On inspection, kp20k.train.jsonl actually lives in /usr/src/myapp/intermediate_learning/salient_span_recovery/bm25-spans/data
# Change it to DATA_DIR=${CODE_BASE_DIR}/bm25-spans/data;
# After that, the data is processed normally
07/29/2022 03:03:10 AM: [ Reading into database... ]
07/29/2022 03:03:10 AM: [ Reading into database... ]
100%|██████████| 1/1 [00:51<00:00, 51.10s/it]
07/29/2022 03:04:01 AM: [ Read 509819 docs. ]
07/29/2022 03:04:01 AM: [ Read 509819 docs. ]
07/29/2022 03:04:01 AM: [ Committing... ]
07/29/2022 03:04:01 AM: [ Committing... ]
07/29/2022 03:04:19 AM: [ PUT http://localhost:9200/kp20k-train_search_test [status:200 request:2.185s] ]
07/29/2022 03:04:19 AM: [ HEAD http://localhost:9200/ [status:200 request:0.020s] ]

# A gripe: KEYWORD_TYPE is never defined anywhere in bm25/pipeline_bm25_salient_span.sh, yet es_search_salient_span.py still requires --keyword
# Suggestion: delete that line and make --keyword optional in es_search_salient_span.py,
# since the code never actually uses this parameter
python es_search_salient_span.py \
    --index_name ${DOMAIN}_search_test \
    --input_data_file ${DATA_DIR}/kp20k.${SUBSET}.jsonl \
    --output_fp $OUTFILE \
    --keyword $KEYWORD_TYPE \
    --n_docs $NDOCS \
    --port 9200;
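The suggested fix amounts to a small argparse change in es_search_salient_span.py; a hypothetical sketch (only the --index_name and --keyword argument names come from the command above, everything else is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--index_name", required=True)
# was (effectively): parser.add_argument("--keyword", required=True)
# make it optional, since the script never actually uses the value:
parser.add_argument("--keyword", default=None)

# the pipeline can now omit --keyword without crashing
args = parser.parse_args(["--index_name", "kp20k-train_search_test"])
print(args.keyword)
# None
```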

# Next, a look at what bm25/pipeline_bm25_salient_span.sh does