Transformers Glossary (Terminology)

https://huggingface.co/transformers/glossary.html

1 Input IDs

The model's input: the numerical representation of a sequence after tokenization. The encode and encode_plus methods are recommended; both automatically add special tokens such as [CLS] and [SEP].

The input IDs are often the only required parameters to be passed to the model as input. They are token indices: numerical representations of the tokens that build the sequences the model will consume.
Each tokenizer works differently, but the underlying mechanism remains the same.

############
# Input IDs
############
import warnings
warnings.filterwarnings("ignore")
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

sequence = "A Titan RTX has 24GB of VRAM"

# The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.
tokenized_sequence = tokenizer.tokenize(sequence)
assert tokenized_sequence == ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']

'''
These tokens can then be converted into IDs which are understandable by the model.
Several methods are available for this, the recommended ones being encode and encode_plus,
which leverage the Rust implementation of huggingface/tokenizers for peak performance.
'''
encoded_sequence = tokenizer.encode(sequence)
assert encoded_sequence == [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
# The encode and encode_plus methods automatically add “special tokens” which are special IDs the model uses.
print(tokenizer.convert_ids_to_tokens(101))  # [CLS]
print(tokenizer.convert_ids_to_tokens(102))  # [SEP]
print(tokenizer.decode(encoded_sequence))  # [CLS] A Titan RTX has 24GB of VRAM [SEP]
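
In recent versions of transformers (3.x and later), the recommended entry point is to call the tokenizer object directly, which returns a dictionary holding the input IDs together with the other model inputs. A minimal sketch, assuming a recent transformers release:

############
# Input IDs via the tokenizer call (sketch)
############
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

# Calling the tokenizer directly tokenizes, converts to IDs and adds the special tokens in one step.
encoded = tokenizer("A Titan RTX has 24GB of VRAM")
print(encoded["input_ids"])       # [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
print(encoded["attention_mask"])  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]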

2 Attention mask

A mask over the input: 0 at padded positions, 1 at non-padded positions. The encode_plus method returns the attention mask alongside the input IDs.

The attention mask is an optional argument used when batching sequences together. This argument indicates to the model which tokens should be attended to and which should not. The attention mask is a binary tensor indicating the positions of the padded indices so that the model does not attend to them. For the BertTokenizer, 1 indicates a value that should be attended to, while 0 indicates a padded value.

############
# Attention mask
############
import warnings
warnings.filterwarnings("ignore")
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

encoded_sequence_a = tokenizer.encode(sequence_a)
print(encoded_sequence_a)  # [101, 1188, 1110, 170, 1603, 4954, 119, 102]
assert len(encoded_sequence_a) == 8

encoded_sequence_b = tokenizer.encode(sequence_b)
print(encoded_sequence_b)  # [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]
assert len(encoded_sequence_b) == 19


# The list of IDs will be extended by the padding indices.
# (Note: in transformers >= 3.x, pad_to_max_length=True is deprecated in favor of padding="max_length".)
padded_sequence_a = tokenizer.encode(sequence_a, max_length=19, pad_to_max_length=True)
assert padded_sequence_a == [101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
assert encoded_sequence_b == [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]
sequence_a_dict = tokenizer.encode_plus(sequence_a, max_length=19, pad_to_max_length=True)

assert sequence_a_dict['input_ids'] == [101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
assert sequence_a_dict['attention_mask'] == [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
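
In practice the attention mask is rarely built by hand: encoding a batch with padding enabled pads every sequence to the longest one in the batch and returns the matching mask. A minimal sketch, assuming transformers 3.x or later, where the tokenizer accepts a list of sequences and a padding argument:

############
# Batch padding (sketch)
############
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

# padding=True pads to the longest sequence in the batch and builds the attention mask.
batch = tokenizer([sequence_a, sequence_b], padding=True)
print(batch["input_ids"][0])       # sequence_a's IDs followed by padding zeros
print(batch["attention_mask"][0])  # [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(batch["attention_mask"][1])  # all ones, since nothing in sequence_b was padded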

3 Token Type IDs

(Typically) every token of the first sentence is marked 0 and every token of the second sentence is marked 1.

Some models' purpose is to do sequence classification or question answering. These require two different sequences to be encoded in the same input IDs. They are usually separated by special tokens, such as the classifier and separator tokens. The token type IDs are a binary mask identifying the different sequences in the model's input.

############
# Token Type IDs
############
import warnings
warnings.filterwarnings("ignore")
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

# [CLS] SEQ_A [SEP] SEQ_B [SEP]

sequence_a = "HuggingFace is based in NYC"
sequence_b = "Where is HuggingFace based?"

encoded_sequence = tokenizer.encode(sequence_a, sequence_b)
assert tokenizer.decode(encoded_sequence) == "[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]"

encoded_dict = tokenizer.encode_plus(sequence_a, sequence_b)

assert encoded_dict['input_ids'] == [101, 20164, 10932, 2271, 7954, 1110, 1359, 1107, 17520, 102, 2777, 1110, 20164, 10932, 2271, 7954, 1359, 136, 102]
assert encoded_dict['token_type_ids'] == [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
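
The encoded dictionary can be fed straight to the model, which consumes the token type IDs alongside the input IDs and attention mask. A minimal sketch, assuming PyTorch is installed and the bert-base-cased weights are available (note that not every model uses token type IDs; DistilBERT, for example, does not):

############
# Feeding the encoded pair to the model (sketch)
############
import torch
from transformers import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

encoded_dict = tokenizer.encode_plus("HuggingFace is based in NYC", "Where is HuggingFace based?")

# Add a batch dimension; input_ids, token_type_ids and attention_mask
# are all keyword arguments of the model's forward pass.
inputs = {k: torch.tensor([v]) for k, v in encoded_dict.items()}
outputs = model(**inputs)
print(outputs[0].shape)  # torch.Size([1, 19, 768]), i.e. (batch, sequence length, hidden size)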

 
