Quick Start: Implementing the Transformer with MindSpore
1. Project Overview
1.1 Machine Translation
Machine translation, also called automatic translation, is the process of using a computer to convert text in one natural language (the source language) into another natural language (the target language). It is a branch of computational linguistics, one of the long-standing goals of artificial intelligence, and of significant scientific research value.
Machine translation also has considerable practical value. With economic globalization and the rapid development of the Internet, machine translation technology plays an increasingly important role in facilitating political, economic, and cultural exchange.
1.2 The Transformer Model
The Transformer is a model architecture proposed in the 2017 paper "Attention Is All You Need". The paper evaluated it only on machine translation, where it decisively outperformed the state of the art (SOTA) at the time; the detailed architecture is illustrated in the original paper. Moreover, because the encoder processes the whole sequence in parallel, training time is greatly reduced.
Its groundbreaking idea overturned the long-standing assumption that sequence modeling equals RNNs, and it is now widely applied across NLP. The language models that currently dominate NLP tasks, such as GPT and BERT, are all built on the Transformer. It is therefore especially important to understand every detail inside the Transformer model.
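To make the core operation concrete, here is a minimal NumPy sketch of scaled dot-product attention, the basic building block of the Transformer. It is purely illustrative and is not the project's MindSpore implementation; all names and shapes below are made up.

import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d_k)) V for all positions at once."""
    d_k = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)         # (batch, len_q, len_k)
    scores = scores - scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                                       # (batch, len_q, d_v)

# Toy shapes: one sentence of 5 positions with 64-dimensional vectors.
# All positions are handled in a single matrix product, which is why the
# encoder can run in parallel over the whole sequence.
q = k = v = np.random.randn(1, 5, 64)
print(scaled_dot_product_attention(q, k, v).shape)           # (1, 5, 64)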
1.3 Environment Requirements
- Hardware
  - A server with GPU cards or Huawei Ascend processors
- Software
  - python3.5+
  - mindspore
  - numpy
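A quick way to confirm the software environment is to import the packages and print their versions. This small check is only a suggestion and is not part of the project code.

# Minimal environment sanity check (not part of the project code).
import sys
import numpy
import mindspore

print("python   :", sys.version.split()[0])
print("numpy    :", numpy.__version__)
print("mindspore:", mindspore.__version__)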
2. Data Preparation
2.1 Dataset Preparation
The project uses a training dataset and an evaluation dataset, both consistent with the original paper:
- Training dataset (English text with the corresponding German text): WMT English-German (https://nlp.stanford.edu/projects/nmt/)
- Evaluation dataset (English text only): WMT newstest2014 (https://nlp.stanford.edu/projects/nmt/)
All of the above data can be downloaded from the listed URLs; an official bash script is also provided for a quick download:
#!/usr/bin/env bash
# Copyright 2017 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
BASE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )"
OUTPUT_DIR="${1:-wmt16_de_en}"
echo "Writing to ${OUTPUT_DIR}. To change this, set the OUTPUT_DIR environment variable."
OUTPUT_DIR_DATA="${OUTPUT_DIR}/data"
mkdir -p $OUTPUT_DIR_DATA
echo "Downloading Europarl v7. This may take a while..."
curl -o ${OUTPUT_DIR_DATA}/europarl-v7-de-en.tgz \
https://www.statmt.org/europarl/v7/de-en.tgz
echo "Downloading Common Crawl corpus. This may take a while..."
curl -o ${OUTPUT_DIR_DATA}/common-crawl.tgz \
https://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
echo "Downloading News Commentary v11. This may take a while..."
curl -o ${OUTPUT_DIR_DATA}/nc-v11.tgz \
https://data.statmt.org/wmt16/translation-task/training-parallel-nc-v11.tgz
echo "Downloading dev/test sets"
curl -o ${OUTPUT_DIR_DATA}/dev.tgz \
https://data.statmt.org/wmt16/translation-task/dev.tgz
curl -o ${OUTPUT_DIR_DATA}/test.tgz \
https://data.statmt.org/wmt16/translation-task/test.tgz
# Extract everything
echo "Extracting all files..."
mkdir -p "${OUTPUT_DIR_DATA}/europarl-v7-de-en"
tar -xvzf "${OUTPUT_DIR_DATA}/europarl-v7-de-en.tgz" -C "${OUTPUT_DIR_DATA}/europarl-v7-de-en"
mkdir -p "${OUTPUT_DIR_DATA}/common-crawl"
tar -xvzf "${OUTPUT_DIR_DATA}/common-crawl.tgz" -C "${OUTPUT_DIR_DATA}/common-crawl"
mkdir -p "${OUTPUT_DIR_DATA}/nc-v11"
tar -xvzf "${OUTPUT_DIR_DATA}/nc-v11.tgz" -C "${OUTPUT_DIR_DATA}/nc-v11"
mkdir -p "${OUTPUT_DIR_DATA}/dev"
tar -xvzf "${OUTPUT_DIR_DATA}/dev.tgz" -C "${OUTPUT_DIR_DATA}/dev"
mkdir -p "${OUTPUT_DIR_DATA}/test"
tar -xvzf "${OUTPUT_DIR_DATA}/test.tgz" -C "${OUTPUT_DIR_DATA}/test"
# Concatenate Training data
cat "${OUTPUT_DIR_DATA}/europarl-v7-de-en/europarl-v7.de-en.en" \
"${OUTPUT_DIR_DATA}/common-crawl/commoncrawl.de-en.en" \
"${OUTPUT_DIR_DATA}/nc-v11/training-parallel-nc-v11/news-commentary-v11.de-en.en" \
> "${OUTPUT_DIR}/train.en"
wc -l "${OUTPUT_DIR}/train.en"
cat "${OUTPUT_DIR_DATA}/europarl-v7-de-en/europarl-v7.de-en.de" \
"${OUTPUT_DIR_DATA}/common-crawl/commoncrawl.de-en.de" \
"${OUTPUT_DIR_DATA}/nc-v11/training-parallel-nc-v11/news-commentary-v11.de-en.de" \
> "${OUTPUT_DIR}/train.de"
wc -l "${OUTPUT_DIR}/train.de"
# Clone Moses
if [ ! -d "${OUTPUT_DIR}/mosesdecoder" ]; then
echo "Cloning moses for data processing"
git clone https://github.com/moses-smt/mosesdecoder.git "${OUTPUT_DIR}/mosesdecoder"
fi
# Convert SGM files
# Convert newstest2014 data into raw text format
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2014-deen-src.de.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2014.de
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2014-deen-ref.en.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2014.en
# Convert newstest2015 data into raw text format
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2015-deen-src.de.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2015.de
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2015-deen-ref.en.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2015.en
# Convert newstest2016 data into raw text format
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/test/test/newstest2016-deen-src.de.sgm \
> ${OUTPUT_DIR_DATA}/test/test/newstest2016.de
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/test/test/newstest2016-deen-ref.en.sgm \
> ${OUTPUT_DIR_DATA}/test/test/newstest2016.en
# Copy dev/test data to output dir
cp ${OUTPUT_DIR_DATA}/dev/dev/newstest20*.de ${OUTPUT_DIR}
cp ${OUTPUT_DIR_DATA}/dev/dev/newstest20*.en ${OUTPUT_DIR}
cp ${OUTPUT_DIR_DATA}/test/test/newstest20*.de ${OUTPUT_DIR}
cp ${OUTPUT_DIR_DATA}/test/test/newstest20*.en ${OUTPUT_DIR}
# Tokenize data
for f in ${OUTPUT_DIR}/*.de; do
echo "Tokenizing $f..."
${OUTPUT_DIR}/mosesdecoder/scripts/tokenizer/tokenizer.perl -q -l de -threads 8 < $f > ${f%.*}.tok.de
done
for f in ${OUTPUT_DIR}/*.en; do
echo "Tokenizing $f..."
${OUTPUT_DIR}/mosesdecoder/scripts/tokenizer/tokenizer.perl -q -l en -threads 8 < $f > ${f%.*}.tok.en
done
# Clean train corpora
for f in ${OUTPUT_DIR}/train.tok.en; do
fbase=${f%.*}
echo "Cleaning ${fbase}..."
${OUTPUT_DIR}/mosesdecoder/scripts/training/clean-corpus-n.perl $fbase de en "${fbase}.clean" 1 80
done
# Generate Subword Units (BPE)
# Clone Subword NMT
if [ ! -d "${OUTPUT_DIR}/subword-nmt" ]; then
git clone https://github.com/rsennrich/subword-nmt.git "${OUTPUT_DIR}/subword-nmt"
fi
# Learn Shared BPE
for merge_ops in 32000; do
echo "Learning BPE with merge_ops=${merge_ops}. This may take a while..."
cat "${OUTPUT_DIR}/train.tok.clean.de" "${OUTPUT_DIR}/train.tok.clean.en" | \
${OUTPUT_DIR}/subword-nmt/learn_bpe.py -s $merge_ops > "${OUTPUT_DIR}/bpe.${merge_ops}"
echo "Apply BPE with merge_ops=${merge_ops} to tokenized files..."
for lang in en de; do
for f in ${OUTPUT_DIR}/*.tok.${lang} ${OUTPUT_DIR}/*.tok.clean.${lang}; do
outfile="${f%.*}.bpe.${merge_ops}.${lang}"
${OUTPUT_DIR}/subword-nmt/apply_bpe.py -c "${OUTPUT_DIR}/bpe.${merge_ops}" < $f > "${outfile}"
echo ${outfile}
done
done
# Create vocabulary file for BPE
echo -e "<unk>\n<s>\n</s>" > "${OUTPUT_DIR}/vocab.bpe.${merge_ops}"
cat "${OUTPUT_DIR}/train.tok.clean.bpe.${merge_ops}.en" "${OUTPUT_DIR}/train.tok.clean.bpe.${merge_ops}.de" | \
${OUTPUT_DIR}/subword-nmt/get_vocab.py | cut -f1 -d ' ' >> "${OUTPUT_DIR}/vocab.bpe.${merge_ops}"
done
# Duplicate vocab file with language suffix
cp "${OUTPUT_DIR}/vocab.bpe.32000" "${OUTPUT_DIR}/vocab.bpe.32000.en"
cp "${OUTPUT_DIR}/vocab.bpe.32000" "${OUTPUT_DIR}/vocab.bpe.32000.de"
echo "All done."
Running the above script produces the following files:
- train.tok.clean.bpe.32000.en
- train.tok.clean.bpe.32000.de
- vocab.bpe.32000
- newstest2014.tok.bpe.32000.en
- newstest2014.tok.bpe.32000.de
- newstest2014.tok.de
2.2 Creating MindRecord Files
2.2.1 Defining the tokenizer
The tokenizer is responsible for preprocessing the text. It first splits the text into tokens (words, or finer-grained units such as subwords and punctuation marks), and then converts each token into an id through a look-up table. Tokenizing text is not trivial, and each model has its own tokenizer rules. Transformer-style models mainly use three kinds of tokenizers: Byte-Pair Encoding (BPE), WordPiece, and SentencePiece. Here the text is first split on whitespace, as in WordPiece-style pre-tokenization. The core code is as follows:
def _whitespace_tokenize(self, text):
    """
    Clean whitespace and split text into tokens.
    """
    text = text.strip()
    if not text:
        tokens = []
    else:
        tokens = text.split()
    return tokens
After the tokens of each sentence are obtained, they are converted into ids through the dataset's vocab_file; the resulting id sequence is called input_ids.
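As a rough illustration of this look-up step (the vocabulary below is made up and does not reflect the real vocab.bpe.32000), the conversion looks like this:

# Hypothetical vocabulary for illustration only.
vocab = {"<unk>": 0, "<s>": 1, "</s>": 2, "hello": 3, "world": 4}

def convert_tokens_to_ids(vocab, tokens):
    """Map each token to its id, falling back to <unk> for unknown tokens."""
    return [vocab.get(token, vocab["<unk>"]) for token in tokens]

tokens = "hello world !".strip().split()     # whitespace tokenization, as above
print(convert_tokens_to_ids(vocab, tokens))  # [3, 4, 0] -> the input_ids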
Once the text has been processed as above, the training data can be created. Since the task is machine translation, each training sample is a sentence pair, i.e. [English; German]. We therefore build a training instance for each sentence pair; the core code is as follows:
def create_training_instance(source_words, target_words, max_seq_length, clip_to_max_len):
    """Creates a `SampleInstance` for a single sentence pair."""
    EOS = "</s>"
    SOS = "<s>"
    if len(source_words) >= max_seq_length or len(target_words) >= max_seq_length:
        if clip_to_max_len:
            # Truncate so that there is still room for the <s>/</s> token added below.
            source_words = source_words[:min(len(source_words), max_seq_length - 1)]
            target_words = target_words[:min(len(target_words), max_seq_length - 1)]
        else:
            return None
    source_sos_tokens = [SOS] + source_words
    source_eos_tokens = source_words + [EOS]
    target_sos_tokens = [SOS] + target_words
    target_eos_tokens = target_words + [EOS]
    # SampleInstance is a simple record (e.g. a namedtuple) holding the four token lists.
    instance = SampleInstance(
        source_sos_tokens=source_sos_tokens,
        source_eos_tokens=source_eos_tokens,
        target_sos_tokens=target_sos_tokens,
        target_eos_tokens=target_eos_tokens)
    return instance
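For example (a made-up sentence pair, assuming SampleInstance is a namedtuple-style record as noted above and max_seq_length is large enough), the resulting instance holds four token lists:

# Made-up sentence pair; relies on create_training_instance and SampleInstance
# defined in the surrounding script.
source_words = "hello world".split()
target_words = "hallo welt".split()
instance = create_training_instance(source_words, target_words,
                                    max_seq_length=128, clip_to_max_len=False)
print(instance.source_sos_tokens)  # ['<s>', 'hello', 'world']
print(instance.source_eos_tokens)  # ['hello', 'world', '</s>']
print(instance.target_sos_tokens)  # ['<s>', 'hallo', 'welt']
print(instance.target_eos_tokens)  # ['hallo', 'welt', '</s>']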
To feed the model, the text also has to be converted into vectors of uniform length. First the sentence lengths are unified: using the ids converted above, the length of the longer sentence in each pair is computed and used to pick a length bucket. The core code is as follows:
def _find_bucket_length(source_tokens, target_tokens):
    source_ids = tokenizer.convert_tokens_to_ids(source_tokens)
    target_ids = tokenizer.convert_tokens_to_ids(target_tokens)
    num = max(len(source_ids), len(target_ids))
    assert num <= bucket[-1]
    for index in range(