A Few Notes on the Transformer, with a Simple Example to Understand the Q, K, V in Attention

DETR is the pioneering work that applied the Transformer to object detection. At the end of the paper's appendix, the authors include a short piece of code to help readers understand the DETR model.

DETR's backbone is a ResNet-50 with its last two layers, the AdaptiveAvgPool2d and the Linear classifier, removed.

self.backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-2])

The backbone features pass through one convolution, the position embedding is added, and the result is fed into the transformer. Note that the position embedding is added element-wise, not concatenated like stacking plates; a rough sketch of this step follows.
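The sketch below is paraphrased from memory of the demo code in the DETR paper's appendix; hidden_dim = 256 and the learned row/column embeddings follow that demo, but treat the exact shapes as illustrative rather than definitive.

import torch
import torch.nn as nn

hidden_dim = 256
conv = nn.Conv2d(2048, hidden_dim, kernel_size=1)           # project ResNet-50 features
row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))   # learned positional embeddings
col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))   # (assumes feature maps up to 50x50, as in the demo)

def add_position(h):
    # h: backbone feature map after the 1x1 conv, shape (1, hidden_dim, H, W)
    H, W = h.shape[-2:]
    pos = torch.cat([
        col_embed[:W].unsqueeze(0).repeat(H, 1, 1),   # (H, W, hidden_dim/2)
        row_embed[:H].unsqueeze(1).repeat(1, W, 1),   # (H, W, hidden_dim/2)
    ], dim=-1).flatten(0, 1).unsqueeze(1)             # (H*W, 1, hidden_dim)
    # the position embedding is ADDED to the flattened features, not concatenated
    return pos + h.flatten(2).permute(2, 0, 1)        # (H*W, 1, hidden_dim)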

Let's briefly review the Transformer here.

The most important part of the Transformer is attention. The Zhihu article "Attention Is All You Need (Transformer) 论文精读 - 知乎" gives a simple example to explain what the Q, K, and V in attention actually mean. I quote that author's example here:

Suppose we have a database of names and ages like this:

张三 (Zhang San): 18
李四 (Li Si): 22
张伟 (Zhang Wei): 19
张三 (Zhang San): 20

For the query "the average age of everyone named 张三", we match Key == "张三", retrieve the two corresponding Values, and compute (18+20)/2 = 19. We call the sentence "the average age of everyone named 张三" a Query.

For another query Query′, "the average age of everyone with the surname 张", we match Key[0] == "张" and get three Values: (18+20+19)/3 = 19.

Querying this way is inefficient. To make it efficient, we turn the Query and Key into vectors.

Encode the names (Keys) as vectors:

张三:[1, 2, 0]
李四:[0, 0, 2]
张伟:[1, 4, 0]

If the Query is "the average age of everyone with the surname 张", it can be written as the vector [1, 0, 0]. Take the dot product of the Query vector with the Key vector of each database entry:

dot([1, 0, 0], [1, 2, 0]) = 1   # 张三, age 18
dot([1, 0, 0], [1, 2, 0]) = 1   # 张三, age 20
dot([1, 0, 0], [0, 0, 2]) = 0   # 李四, age 22
dot([1, 0, 0], [1, 4, 0]) = 1   # 张伟, age 19

Normalize the scores with softmax:

softmax([1, 1, 0, 1]) = [1/3, 1/3, 0, 1/3]

(Strictly speaking, a real softmax of [1, 1, 0, 1] gives roughly [0.30, 0.30, 0.11, 0.30]; the example idealizes it so the arithmetic comes out clean.)

Then take the dot product of the normalized weights with the Values:

dot([1/3, 1/3, 0, 1/3], [18, 20, 22, 19]) = 19

which is exactly the result we wanted. (As an aside, this kind of lookup feels a little like a Bloom filter, encoding text into an array for matching.)

This computation is essentially the Scaled Dot-Product Attention from the "Attention Is All You Need" paper: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.
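Here is a minimal NumPy sketch of the toy lookup above; the function and variable names are just for illustration.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Keys: one vector per database row (张三, 张三, 李四, 张伟)
K = np.array([[1, 2, 0],
              [1, 2, 0],
              [0, 0, 2],
              [1, 4, 0]], dtype=float)
V = np.array([18, 20, 22, 19], dtype=float)   # ages
q = np.array([1, 0, 0], dtype=float)          # query: "surname 张"

scores = K @ q                 # [1, 1, 0, 1]
weights = softmax(scores)      # ~[0.30, 0.30, 0.11, 0.30]; the text idealizes this to [1/3, 1/3, 0, 1/3]
print(weights @ V)             # weighted average age, close to 19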

In the Transformer, the relationship between query, key, and value is illustrated step by step in The Illustrated Transformer (reference: The Illustrated Transformer – Jay Alammar – Visualizing machine learning one concept at a time).

Each word is encoded as a vector x; x is multiplied by the weight matrices W^Q, W^K, W^V to get q, k, and v. Then q is dotted with k, the result is divided by 8 (the square root of the dimension of the key vectors used in the paper, 64), passed through a softmax, and multiplied with v to get the output z.
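A minimal single-head self-attention sketch in NumPy, assuming the paper's d_k = 64; the weight matrices are random here, whereas in a real model they are learned.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d_model, d_k = 512, 64
x = np.random.randn(5, d_model)          # 5 tokens, one row each
W_q = np.random.randn(d_model, d_k)
W_k = np.random.randn(d_model, d_k)
W_v = np.random.randn(d_model, d_k)

q, k, v = x @ W_q, x @ W_k, x @ W_v      # (5, d_k) each
scores = q @ k.T / np.sqrt(d_k)          # scale by sqrt(d_k) = 8
z = softmax(scores, axis=-1) @ v         # (5, d_k): each row is a weighted sum of the v vectors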

 

 

Several figures in the original post (omitted here) show this process from a more macro perspective.

 

 

 

With multi-head attention, at the step where a^{i} is projected into q^{i}, k^{i}, v^{i}, we multiply by one set of matrices per head; with, say, 2 attention heads we get q^{i,1}, q^{i,2} (and likewise for k and v).

The outputs of the individual heads are concatenated and passed through a linear layer to produce the final output Z, as sketched below. (Reference: Transformer系列 | BearCoding)
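Under the same illustrative assumptions as the single-head sketch, multi-head attention just repeats the computation once per head and mixes the concatenated results with one more matrix:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

num_heads, d_model = 8, 512
d_head = d_model // num_heads
x = np.random.randn(5, d_model)                 # 5 tokens, as before

heads = []
for _ in range(num_heads):
    # each head has its own projection matrices (learned in practice, random here)
    W_q, W_k, W_v = (np.random.randn(d_model, d_head) for _ in range(3))
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    heads.append(softmax(q @ k.T / np.sqrt(d_head), axis=-1) @ v)

W_o = np.random.randn(num_heads * d_head, d_model)
Z = np.concatenate(heads, axis=-1) @ W_o        # concat the heads, then one linear layer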

In an RNN the inputs are fed in one after another, so the network knows the order of the inputs. The Transformer does not work that way, so a positional encoding is added to tell the network where each word sits in the sentence.
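A sketch of the sinusoidal positional encoding used in the original paper, PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(...), which is simply added to the token embeddings:

import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    # even dimensions use sin, odd dimensions use cos
    pe = np.where(i % 2 == 0, np.sin(angles), np.cos(angles))
    return pe   # shape (max_len, d_model)

# embeddings = token_embeddings + positional_encoding(seq_len, d_model)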

The Transformer also uses residual connections similar to those in ResNet.
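A minimal PyTorch sketch of that add-and-norm pattern around each sublayer; ResidualSublayer is a made-up name for illustration, not a library module.

import torch.nn as nn

class ResidualSublayer(nn.Module):
    """Wraps a sublayer (attention or feed-forward) with the paper's add & norm."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.sublayer(x))   # add the input back, then normalize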

The K and V matrices produced by the encoder are fed into the decoder.

In the first decoding step, the decoder takes K and V and produces an output. In later steps, the previous outputs are fed back in as well: in the second step, for example, the first step's output "I" is also given to the decoder, and this continues until the decoder emits an end-of-sentence token.
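A pseudocode-style sketch of that loop; decoder, memory, start_token, and eos_token are hypothetical placeholders, not a real API.

def greedy_decode(decoder, memory, start_token, eos_token, max_len=50):
    # memory: the encoder output that supplies K and V to the decoder
    output = [start_token]
    for _ in range(max_len):
        next_token = decoder(memory, output)   # attends over memory and everything generated so far
        output.append(next_token)
        if next_token == eos_token:            # stop once end-of-sentence is produced
            break
    return output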

The Transformer's loss is a cross-entropy that pushes the predicted distribution toward the target distribution.

The Output Vocabulary is a word list built in advance; for each output position the network predicts the probability of every word in the vocabulary appearing there.
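In PyTorch terms, that loss looks roughly like this; the vocabulary size and target index are made up for illustration.

import torch
import torch.nn.functional as F

vocab_size = 10000                       # hypothetical output vocabulary
logits = torch.randn(1, vocab_size)      # network scores for one output position
target = torch.tensor([42])              # index of the correct word in the vocabulary

# cross-entropy pushes softmax(logits) toward a distribution peaked on the target word
loss = F.cross_entropy(logits, target)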

Back to DETR: it stacks 6 Transformer encoder layers and 6 decoder layers, and feeds the Transformer output into two separate Linear heads to obtain the class and the bounding box.
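A sketch of those two heads, following the values I recall from the paper's demo code (hidden_dim = 256; 91 COCO class IDs is an assumption carried over from that demo):

import torch
import torch.nn as nn

hidden_dim, num_classes = 256, 91
transformer = nn.Transformer(hidden_dim, nhead=8,
                             num_encoder_layers=6, num_decoder_layers=6)
linear_class = nn.Linear(hidden_dim, num_classes + 1)    # +1 for the "no object" class
linear_bbox = nn.Linear(hidden_dim, 4)                   # (cx, cy, w, h), sigmoid-normalized

# h = transformer(src_with_pos, query_pos)               # decoder output, one row per object query
# pred_logits, pred_boxes = linear_class(h), linear_bbox(h).sigmoid()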
