In the previous sections, we have been doing most of the work by hand. We explored how tokenizers work and looked at tokenization, conversion to input IDs, padding, truncation, and attention masks. However, as we saw in section 2, the 🤗 Transformers API can handle all of this for us with a high-level function that we'll dive into here. When you call your tokenizer directly on a sentence, you get back inputs that are ready to pass through your model:
from transformers import AutoTokenizer
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sequence = "I've been waiting for a HuggingFace course my whole life."
model_inputs = tokenizer(sequence)
Here, the model_inputs variable contains everything that's necessary for a model to operate well. For DistilBERT, that includes the input IDs as well as the attention mask. Other models that accept additional inputs will also have those returned by the tokenizer object.
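A quick way to confirm what the tokenizer returned is to look at the keys of model_inputs (a minimal check; the exact keys depend on the checkpoint):
print(list(model_inputs.keys()))
# ['input_ids', 'attention_mask'] for this DistilBERT checkpoint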
As we'll see in the examples below, this method is very powerful. First, it can tokenize a single sequence:
sequence = "I've been waiting for a HuggingFace course my whole life."
model_inputs = tokenizer(sequence)
It also handles multiple sequences at a time, with no change in the API:
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
model_inputs = tokenizer(sequences)
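Since no padding was requested here, each key simply maps to one list of IDs per sequence, and the lists can have different lengths (a minimal check using the variables above):
print([len(ids) for ids in model_inputs["input_ids"]])
# e.g. [16, 6]: one entry per sequence, lengths differ before padding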
It can pad according to several objectives:
# Will pad the sequences up to the length of the longest sequence in the batch
model_inputs = tokenizer(sequences, padding="longest")
# Will pad the sequences up to the model's max length (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, padding="max_length")
# Will pad the sequences up to the specified max length
model_inputs = tokenizer(sequences, padding="max_length", max_length=8)
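Padding is also reflected in the attention mask, where the padding positions are marked with 0 so the model ignores them (a minimal check using the variables above):
model_inputs = tokenizer(sequences, padding="longest")
print([len(ids) for ids in model_inputs["input_ids"]])  # both sequences now have the same length
print(model_inputs["attention_mask"][1])  # trailing 0s mark the padding tokens of the shorter sequence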
It can also truncate sequences:
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
# Will truncate the sequences that are longer than the model's max length (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, truncation=True)
# Will truncate the sequences that are longer than the specified max length
model_inputs = tokenizer(sequences, max_length=8, truncation=True)
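Decoding a truncated result makes the cut visible (a minimal check; the decoded string in the comment is roughly what this tokenizer produces for max_length=8):
model_inputs = tokenizer(sequences, max_length=8, truncation=True)
print(tokenizer.decode(model_inputs["input_ids"][0]))
# something like "[CLS] i've been waiting for [SEP]": the rest of the sentence is cut off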
The tokenizer object can handle the conversion to tensors of a specific framework, which can then be sent directly to the model. For example, in the following code sample we are prompting the tokenizer to return tensors from the different frameworks: "pt" returns PyTorch tensors, "tf" returns TensorFlow tensors, and "np" returns NumPy arrays:
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
# Returns PyTorch tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="pt")
# Returns TensorFlow tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="tf")
# Returns NumPy arrays
model_inputs = tokenizer(sequences, padding=True, return_tensors="np")
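Whatever the framework, the data is the same; only the container changes. With return_tensors="pt", for instance, we get a single batched 2D tensor (a minimal check; the exact shape depends on the tokenized lengths):
model_inputs = tokenizer(sequences, padding=True, return_tensors="pt")
print(type(model_inputs["input_ids"]))   # <class 'torch.Tensor'>
print(model_inputs["input_ids"].shape)   # (batch_size, sequence_length) for these two sentences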
Special tokens
If we take a look at the input IDs returned by the tokenizer, we will see they are a tiny bit different from what we had earlier:
sequence = "I've been waiting for a HuggingFace course my whole life."
model_inputs = tokenizer(sequence)
print(model_inputs["input_ids"])
tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
[101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]
[1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]
One token ID was added at the beginning, and one at the end. Let's decode the two sequences of IDs above to see what this is about:
print(tokenizer.decode(model_inputs["input_ids"]))
print(tokenizer.decode(ids))
"[CLS] i've been waiting for a huggingface course my whole life. [SEP]"
"i've been waiting for a huggingface course my whole life."
The tokenizer added the special word [CLS] at the beginning and the special word [SEP] at the end. This is because the model was pretrained with those, so to get the same results for inference we need to add them as well. Note that some models don't add special words, or add different ones; models may also add these special words only at the beginning, or only at the end. In any case, the tokenizer knows which ones are expected and will deal with this for you.
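If you ever want the decoded text without those special words, decode accepts the standard skip_special_tokens argument:
print(tokenizer.decode(model_inputs["input_ids"], skip_special_tokens=True))
# "i've been waiting for a huggingface course my whole life."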
Wrapping up: From tokenizer to model
Now that we've seen all the individual steps the tokenizer object goes through when applied to texts, let's see one final time how it can handle multiple sequences (padding!), very long sequences (truncation!), and multiple types of tensors:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]
tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
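The model returns raw logits; as we saw in section 2, we can turn them into probabilities with a softmax (a minimal sketch; the label names come from the model's config):
predictions = torch.nn.functional.softmax(output.logits, dim=-1)
print(predictions)            # one probability distribution per input sequence
print(model.config.id2label)  # e.g. {0: 'NEGATIVE', 1: 'POSITIVE'} for this checkpoint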