PyTorch: Implementing the Self-Attention Mechanism (Code Walkthrough)

This article introduces the self-attention mechanism: used as a feature-extraction layer, it fuses the input features and produces new representations. Multi-head self-attention strengthens this further by splitting the vectors into several heads, capturing information along different dimensions. A detailed PyTorch implementation follows, showing how to build a self-attention layer of the kind used in the Transformer model.


References

  1. https://www.bilibili.com/video/BV1JE411g7XF?p=54
  2. https://arxiv.org/abs/1706.03762
  3. https://blog.csdn.net/qq_36653505/article/details/83375160

A Brief Overview of Self-Attention

Self-attention can be viewed as a feature-extraction layer: given input features $a^{1}, a^{2}, \cdots, a^{n}$, the self-attention layer fuses all of the inputs and produces new features $b^{1}, b^{2}, \cdots, b^{n}$. Concretely:

Let the input features be $I$. Multiplying $I$ by the three matrices $W^{q}$, $W^{k}$, and $W^{v}$ gives the matrices $Q$ (query), $K$ (key), and $V$ (value). Next, the product $QK^{T}$ gives the attention matrix $A$, which is normalized to obtain $\hat{A}$. Finally, multiplying the normalized attention matrix $\hat{A}$ by $V$ yields the output features $O$.
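
In matrix form, the whole computation is $O = \hat{A}V$ with $\hat{A} = \mathrm{softmax}(QK^{T}/\sqrt{d_k})$ (the $\sqrt{d_k}$ scaling is the one used later in the code). Below is a minimal sketch with plain tensors; the sizes are arbitrary and chosen only for illustration.

import math
import torch

n, d_model, d_k = 5, 16, 16           # sequence length and feature sizes (arbitrary)
I = torch.randn(n, d_model)           # input features a^1 ... a^n stacked as rows
W_q, W_k, W_v = (torch.randn(d_model, d_k) for _ in range(3))

Q, K, V = I @ W_q, I @ W_k, I @ W_v   # each of shape (n, d_k)
A = Q @ K.T / math.sqrt(d_k)          # attention matrix, shape (n, n)
A_hat = torch.softmax(A, dim=-1)      # row-wise normalization
O = A_hat @ V                         # output features b^1 ... b^n, shape (n, d_k)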

Multi-Head Self-Attention

In the self-attention described above, each input feature $a^{i}$ is multiplied by the matrices $W^{q}$, $W^{k}$, and $W^{v}$ to obtain one vector each: $q^{i}$, $k^{i}$, and $v^{i}$. This is single-head self-attention. If each of the vectors $q^{i}$, $k^{i}$, and $v^{i}$ is instead split into $h$ pieces, we obtain $h$-head self-attention. Multi-head self-attention is generally considered to work better than the single-head version, because it can capture information along more dimensions; a concrete shape example follows below.
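
As a shape illustration (the values 512 and 8 below are example numbers of my own, not from the text): splitting a hidden size of 512 across 8 heads gives each head a 64-dimensional slice of every query/key/value vector.

import torch

batch, n, hidden_size, num_heads = 2, 10, 512, 8   # illustrative values
head_size = hidden_size // num_heads               # 64

q = torch.randn(batch, n, hidden_size)             # all query vectors stacked
q_heads = q.view(batch, n, num_heads, head_size).permute(0, 2, 1, 3)
print(q_heads.shape)   # torch.Size([2, 8, 10, 64]) -> (batch, heads, n, head_size)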

Code Implementation

Let the hyperparameter num_attention_heads be the number of attention heads; from it we compute the per-head dimension attention_head_size.

self.num_attention_heads = num_attention_heads
self.attention_head_size = int(hidden_size / num_attention_heads)
self.all_head_size = hidden_size

Define the three matrices $W^{q}$, $W^{k}$, and $W^{v}$, implemented here as linear layers.

self.query = nn.Linear(input_size, self.all_head_size)
self.key = nn.Linear(input_size, self.all_head_size)
self.value = nn.Linear(input_size, self.all_head_size)

The computation now proceeds step by step; what to watch is how the tensor shapes change along the way.

First, multiply the input features by the three matrices $W^{q}$, $W^{k}$, and $W^{v}$; at this stage the output tensors have not yet been split into separate heads. Shape change: input_tensor (batch, n, input_size) → mixed_query_layer (batch, n, all_head_size).

mixed_query_layer = self.query(input_tensor)
mixed_key_layer = self.key(input_tensor)
mixed_value_layer = self.value(input_tensor)

Split the result into num_attention_heads heads and rearrange the dimensions. Shape change: mixed_query_layer (batch, n, all_head_size) → query_layer (batch, num_attention_heads, n, attention_head_size).

def transpose_for_scores(self, x):
    new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
    x = x.view(*new_x_shape)
    return x.permute(0, 2, 1, 3)

query_layer = self.transpose_for_scores(mixed_query_layer)
key_layer = self.transpose_for_scores(mixed_key_layer)
value_layer = self.transpose_for_scores(mixed_value_layer)

Multiply $Q$ by $K^{T}$ to obtain the attention scores, and divide by the square root of the per-head dimension so that the scores do not grow with the dimension (a short numerical check follows the code below). Shape change: query_layer (batch, num_attention_heads, n, attention_head_size) → attention_scores (batch, num_attention_heads, n, n).

attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))

attention_scores = attention_scores / math.sqrt(self.attention_head_size)
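
As a quick check of why the scaling helps (a toy experiment of my own, not part of the original walkthrough): for random vectors with unit-variance entries, the standard deviation of the raw dot product grows roughly like the square root of the dimension, while the scaled scores stay at roughly the same magnitude.

import torch

for d in (16, 64, 256):
    q = torch.randn(10000, d)
    k = torch.randn(10000, d)
    scores = (q * k).sum(dim=-1)
    # std of raw scores grows roughly like sqrt(d); after dividing by sqrt(d) it stays near 1
    print(d, scores.std().item(), (scores / d ** 0.5).std().item())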

Normalize the attention scores. Shape change: attention_scores (batch, num_attention_heads, n, n) → attention_probs (batch, num_attention_heads, n, n).

attention_probs = nn.Softmax(dim=-1)(attention_scores)

Multiply the attention matrix by $V$. Shape change: attention_probs (batch, num_attention_heads, n, n) times value_layer (batch, num_attention_heads, n, attention_head_size) → context_layer (batch, num_attention_heads, n, attention_head_size).

context_layer = torch.matmul(attention_probs, value_layer)

Permute the dimensions of context_layer so that the per-head results can be concatenated afterwards. The contiguous() call makes the tensor's memory layout contiguous, preparing it for the subsequent view(). Shape change: context_layer (batch, num_attention_heads, n, attention_head_size) → context_layer (batch, n, num_attention_heads, attention_head_size).

context_layer = context_layer.permute(0, 2, 1, 3).contiguous()

Concatenate the results of all heads. Shape change: context_layer (batch, n, num_attention_heads, attention_head_size) → context_layer (batch, n, all_head_size).

new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
context_layer = context_layer.view(*new_context_layer_shape)

Full Code

import math

import torch
import torch.nn as nn


class LayerNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-12):
        """Construct a layernorm module in the TF style (epsilon inside the square root).
        """
        super(LayerNorm, self).__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.bias = nn.Parameter(torch.zeros(hidden_size))
        self.variance_epsilon = eps

    def forward(self, x):
        u = x.mean(-1, keepdim=True)
        s = (x - u).pow(2).mean(-1, keepdim=True)
        x = (x - u) / torch.sqrt(s + self.variance_epsilon)
        return self.weight * x + self.bias
        
class SelfAttention(nn.Module):
    def __init__(self, num_attention_heads, input_size, hidden_size,
                 hidden_dropout_prob, attention_probs_dropout_prob):
        super(SelfAttention, self).__init__()
        if hidden_size % num_attention_heads != 0:
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of attention "
                "heads (%d)" % (hidden_size, num_attention_heads))
        self.num_attention_heads = num_attention_heads
        self.attention_head_size = int(hidden_size / num_attention_heads)
        self.all_head_size = hidden_size

        self.query = nn.Linear(input_size, self.all_head_size)
        self.key = nn.Linear(input_size, self.all_head_size)
        self.value = nn.Linear(input_size, self.all_head_size)

        self.attn_dropout = nn.Dropout(attention_probs_dropout_prob)

        # After self-attention: a dense projection, dropout, and LayerNorm on the output
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.LayerNorm = LayerNorm(hidden_size, eps=1e-12)
        self.out_dropout = nn.Dropout(hidden_dropout_prob)

    def transpose_for_scores(self, x):
        new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
        x = x.view(*new_x_shape)
        return x.permute(0, 2, 1, 3)

    def forward(self, input_tensor):
        mixed_query_layer = self.query(input_tensor)
        mixed_key_layer = self.key(input_tensor)
        mixed_value_layer = self.value(input_tensor)

        query_layer = self.transpose_for_scores(mixed_query_layer)
        key_layer = self.transpose_for_scores(mixed_key_layer)
        value_layer = self.transpose_for_scores(mixed_value_layer)

        # Take the dot product between "query" and "key" to get the raw attention scores.
        attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))

        attention_scores = attention_scores / math.sqrt(self.attention_head_size)
        # An attention mask (precomputed once for all layers, as in BertModel's forward())
        # could be added here; a [batch_size, 1, 1, seq_len] mask broadcasts against the
        # [batch_size, heads, seq_len, seq_len] scores:
        # attention_scores = attention_scores + attention_mask

        # Normalize the attention scores to probabilities.
        attention_probs = nn.Softmax(dim=-1)(attention_scores)
        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper.
        attention_probs = self.attn_dropout(attention_probs)
        context_layer = torch.matmul(attention_probs, value_layer)
        context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
        new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
        context_layer = context_layer.view(*new_context_layer_shape)
        hidden_states = self.dense(context_layer)
        hidden_states = self.out_dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)

        return hidden_states
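
A minimal usage sketch of the layer above (the hyperparameter values are arbitrary choices for illustration; note that the residual connection hidden_states + input_tensor requires input_size == hidden_size):

model = SelfAttention(num_attention_heads=8, input_size=512, hidden_size=512,
                      hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1)
x = torch.randn(4, 20, 512)      # (batch, n, input_size)
out = model(x)
print(out.shape)                 # torch.Size([4, 20, 512]) -> (batch, n, hidden_size)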