Implementing the MultiHeadAttention Forward Pass with einsum

einsum tutorial: Einstein Summation in Numpy | Olexa Bilaniuk's IFT6266H16 Course Blog
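For a quick sense of the convention used throughout this post, here is a minimal NumPy sketch: every index that appears in the inputs but not after the '->' is summed over.

import numpy as np

a = np.arange(6).reshape(2, 3)    # shape (2, 3)
b = np.arange(12).reshape(3, 4)   # shape (3, 4)

# 'ik,kj->ij': the shared index k is summed over, i.e. ordinary matrix multiplication
c = np.einsum('ik,kj->ij', a, b)
print(np.allclose(c, a @ b))      # True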

Building and training the model

import tensorflow as tf


class Model(tf.keras.Model):
    def __init__(self, num_heads, model_dim):
        super().__init__()
        self.AttentionLayer = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=model_dim)
        self.OutputLayer = tf.keras.layers.Dense(units=1)

    def call(self, x):
        x = self.AttentionLayer(query=x, value=x)
        x = self.OutputLayer(x)
        return x


model = Model(num_heads=2, model_dim=4)

model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer="Adam",
              metrics=['accuracy'])

input_train = tf.constant([[[1, 2, 3], [4, 5, 6]],
                           [[1, 1, 1], [2, 2, 6]]], dtype=tf.float32)

output_label = tf.constant([[[1], [0]],
                            [[0], [0]]], dtype=tf.float32)

tf_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x=input_train,
          y=output_label,
          epochs=10,
          callbacks=[tf_callback])

tf.saved_model.save(model, 'MultiHeadAttention')

The training input input_train has shape (2, 2, 3), i.e. (batch_size, sentence_length, embedding_dim). After passing through the model, the output has shape (2, 2, 1), matching the label shape of (2, 2, 1), and the loss is binary cross-entropy. The model can therefore be used for binary named-entity recognition: in short, it decides for each word in a sentence whether or not it belongs to the word class we are interested in.
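Since the model is compiled with from_logits=True and outputs raw logits, per-token predictions can be obtained by applying a sigmoid and thresholding. A minimal sketch (the 0.5 threshold is an illustrative choice, not something fixed by the model):

logits = model(input_train)              # shape (2, 2, 1), raw logits
probs = tf.sigmoid(logits)               # per-token probability of the positive class
preds = tf.cast(probs > 0.5, tf.int32)   # binary decision for each word
print(preds.numpy())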

Printing the model's parameter matrices

import tensorflow as tf


save_path = 'MultiHeadAttention/variables/variables'  # variables file inside the SavedModel

reader = tf.train.load_checkpoint(save_path)  # returns a CheckpointReader

""" Print the name and shape of every parameter stored in the checkpoint """
for variable_name, variable_shape in reader.get_variable_to_shape_map().items():
    print(f'{variable_name} : {variable_shape}')



The output:

_CHECKPOINTABLE_OBJECT_GRAPH : []
optimizer/beta_1/.ATTRIBUTES/VARIABLE_VALUE : []
keras_api/metrics/0/count/.ATTRIBUTES/VARIABLE_VALUE : []
variables/5/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
optimizer/beta_2/.ATTRIBUTES/VARIABLE_VALUE : []
optimizer/learning_rate/.ATTRIBUTES/VARIABLE_VALUE : []
keras_api/metrics/0/total/.ATTRIBUTES/VARIABLE_VALUE : []
keras_api/metrics/1/count/.ATTRIBUTES/VARIABLE_VALUE : []
keras_api/metrics/1/total/.ATTRIBUTES/VARIABLE_VALUE : []
optimizer/decay/.ATTRIBUTES/VARIABLE_VALUE : []
variables/2/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
optimizer/iter/.ATTRIBUTES/VARIABLE_VALUE : []
variables/0/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/0/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/0/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/1/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/1/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/5/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/1/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/6/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [2, 4, 3]
variables/2/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/2/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/3/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/3/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/3/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/4/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/4/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/4/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]
variables/5/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]
variables/9/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [1]
variables/6/.ATTRIBUTES/VARIABLE_VALUE : [2, 4, 3]
variables/6/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [2, 4, 3]
variables/7/.ATTRIBUTES/VARIABLE_VALUE : [3]
variables/8/.ATTRIBUTES/VARIABLE_VALUE : [3, 1]
variables/7/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [3]
variables/7/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [3]
variables/8/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE : [3, 1]
variables/8/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [3, 1]
variables/9/.ATTRIBUTES/VARIABLE_VALUE : [1]
variables/9/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE : [1]

Of these, we only need to pay attention to the following parameters:

variables/0/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]

variables/1/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]

variables/5/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]

variables/2/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]

variables/3/.ATTRIBUTES/VARIABLE_VALUE : [2, 4]

variables/4/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4]

variables/6/.ATTRIBUTES/VARIABLE_VALUE : [2, 4, 3]

variables/7/.ATTRIBUTES/VARIABLE_VALUE : [3]
variables/8/.ATTRIBUTES/VARIABLE_VALUE : [3, 1]

variables/9/.ATTRIBUTES/VARIABLE_VALUE : [1]

These parameters are all of the kernels and biases in the model. The index after variables/ reflects the order in which the corresponding operations are applied to the input as it flows through the model. Using the model's internal computation logic together with the shapes, we can match each entry to its kernel or bias.

Note that in the tf.keras.layers.MultiHeadAttention source code the query projection is computed first, followed by the key projection and then the value projection. Therefore:

variables/0/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4] corresponds to the query projection's kernel

variables/1/.ATTRIBUTES/VARIABLE_VALUE : [2, 4] corresponds to the query projection's bias

variables/2/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4] corresponds to the key projection's kernel

variables/3/.ATTRIBUTES/VARIABLE_VALUE : [2, 4] corresponds to the key projection's bias

variables/4/.ATTRIBUTES/VARIABLE_VALUE : [3, 2, 4] corresponds to the value projection's kernel

variables/5/.ATTRIBUTES/VARIABLE_VALUE : [2, 4] corresponds to the value projection's bias

variables/6/.ATTRIBUTES/VARIABLE_VALUE : [2, 4, 3] corresponds to the kernel of the output projection inside MultiHeadAttention

variables/7/.ATTRIBUTES/VARIABLE_VALUE : [3] corresponds to the bias of the output projection inside MultiHeadAttention

variables/8/.ATTRIBUTES/VARIABLE_VALUE : [3, 1] corresponds to the Dense layer's kernel

variables/9/.ATTRIBUTES/VARIABLE_VALUE : [1] corresponds to the Dense layer's bias
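The mapping above can be cross-checked directly on the Keras model object from the training script, without going through the checkpoint; a small sketch relying only on the standard model.weights attribute (the exact weight names can vary across TensorFlow versions):

for w in model.weights:
    print(w.name, w.shape)
# Expect names such as .../query/kernel, .../query/bias, .../key/kernel, .../key/bias,
# .../value/kernel, .../value/bias, .../attention_output/kernel, .../attention_output/bias,
# followed by the Dense layer's kernel and bias.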

MultiHeadAttention

Multi-head projection layer

import numpy as np
import tensorflow as tf

''' abc , cde -> abde
    a : size of batch
    b : length of sequence
    c : dimension of Embedding
    d : number of heads
    e : dimension of output

    x : input, shape=(batch_size, sequence_length, dim)
    y : kernel, shape=(dim, heads_num, output_dim) '''


def project_dense(x, y):
    output = np.empty((x.shape[0], x.shape[1], y.shape[1], y.shape[2]))
    for a in range(x.shape[0]):
        for b in range(x.shape[1]):
            for d in range(y.shape[1]):
                for e in range(y.shape[2]):
                    sum_dot = 0
                    for c in range(x.shape[2]):
                        sum_dot += x[a][b][c] * y[c][d][e]
                    output[a][b][d][e] = sum_dot
    return output

Verification:

''' shape = (2, 2, 3) '''
x = tf.constant([[[1, 2, 3], [4, 5, 3]],
                 [[7, 8, 9], [10, 11, 12]]])

''' shape = (3, 2, 3) '''
y = tf.constant([[[1, 2, 3], [4, 5, 3]],
                 [[7, 8, 9], [10, 11, 12]],
                 [[7, 8, 9], [10, 11, 12]]])

print(tf.einsum('abc,cde->abde', x, y).numpy())
print(project_dense(x, y))



[[[[ 36  42  48]
   [ 54  60  63]]

  [[ 60  72  84]
   [ 96 108 108]]]

 [[[126 150 174]
   [198 222 225]]

  [[171 204 237]
   [270 303 306]]]]


[[[[ 36.  42.  48.]
   [ 54.  60.  63.]]

  [[ 60.  72.  84.]
   [ 96. 108. 108.]]]

 [[[126. 150. 174.]
   [198. 222. 225.]]

  [[171. 204. 237.]
   [270. 303. 306.]]]]

Multi-head attention layer

Computing the attention-score matrix
''' aecd, abcd -> acbe 
    a : size of batch 
    e : number of key 
    b : number of query
    c : number of heads 
    d : dimension of query/key 
    x is key and y is query '''


def compute_AttentionScores(x, y):
    output = np.empty((x.shape[0], x.shape[2], y.shape[1], x.shape[1]))
    for a in range(x.shape[0]):
        for c in range(x.shape[2]):
            for b in range(y.shape[1]):
                for e in range(x.shape[1]):
                    sum_dot = 0
                    for d in range(x.shape[3]):
                        sum_dot += x[a][e][c][d] * y[a][b][c][d]
                    output[a][c][b][e] = sum_dot
    return output
Weighting and summing the values with the attention scores
''' acbe,aecd->abcd 
    a : size of batch 
    c : number of heads
    b : number of query
    e : number of key/value 
    d : dimension of value 
    x is attention_scores and y is value'''


def Value_WeightedStack(x, y):
    output = np.empty((x.shape[0], x.shape[2], y.shape[2], y.shape[3]))
    for a in range(x.shape[0]):
        for b in range(x.shape[2]):
            for c in range(y.shape[2]):
                for d in range(y.shape[3]):
                    sum_dot = 0
                    for e in range(x.shape[3]):
                        sum_dot += x[a][c][b][e] * y[a][e][c][d]
                    output[a][b][c][d] = sum_dot
    return output
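The same spot check used for project_dense can be applied to the two attention helpers above; a minimal sketch with randomly generated data (the shapes below are arbitrary):

key = np.random.rand(2, 5, 2, 4)      # (batch, num_key, heads, key_dim)
query = np.random.rand(2, 3, 2, 4)    # (batch, num_query, heads, key_dim)
scores = compute_AttentionScores(key, query)
print(np.allclose(tf.einsum('aecd,abcd->acbe', key, query).numpy(), scores))      # True

value = np.random.rand(2, 5, 2, 4)    # (batch, num_key, heads, value_dim)
stacked = Value_WeightedStack(scores, value)
print(np.allclose(tf.einsum('acbe,aecd->abcd', scores, value).numpy(), stacked))  # True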

Output projection layer

''' abcd, cde -> abe 
    a : size of batch 
    b : number of query
    c : number of head 
    d : dimension of value 
    e : dimension of output 
    x is WeightedStack_Value and y is kernel'''


def project_final(x, y):
    output = np.empty((x.shape[0], x.shape[1], y.shape[2]))
    for a in range(x.shape[0]):
        for b in range(x.shape[1]):
            for e in range(y.shape[2]):
                sum_dot = 0
                for c in range(x.shape[2]):
                    for d in range(x.shape[3]):
                        sum_dot += x[a][b][c][d] * y[c][d][e]
                output[a][b][e] = sum_dot
    return output

Dense

To make the experiment easier to verify, a Dense layer is appended after the MultiHeadAttention layer.

'''abc, cd -> abd 
   x is input and y is kernel '''

def output_dense(x, y):
    output = np.empty((x.shape[0], x.shape[1], y.shape[1]))
    for a in range(x.shape[0]):
        for b in range(x.shape[1]):
            for d in range(y.shape[1]):
                sum_dot = 0
                for c in range(x.shape[2]):
                    sum_dot += x[a][b][c] * y[c][d]
                output[a][b][d] = sum_dot
    return output
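The last two projections can be checked the same way; again the shapes below are arbitrary:

stacked = np.random.rand(2, 3, 2, 4)  # (batch, num_query, heads, value_dim)
w_out = np.random.rand(2, 4, 3)       # (heads, value_dim, output_dim)
print(np.allclose(tf.einsum('abcd,cde->abe', stacked, w_out).numpy(),
                  project_final(stacked, w_out)))   # True

h = np.random.rand(2, 3, 3)           # (batch, num_query, dim)
w_dense = np.random.rand(3, 1)        # (dim, units)
print(np.allclose(tf.einsum('abc,cd->abd', h, w_dense).numpy(),
                  output_dense(h, w_dense)))        # True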

Assembling the model's forward pass

Earlier we identified every kernel and bias inside the model. Next we load these parameters from the checkpoint and plug them into our hand-written model to implement the forward pass.

k1 = reader.get_tensor('variables/0/.ATTRIBUTES/VARIABLE_VALUE')  # query kernel, shape (3, 2, 4)
b1 = reader.get_tensor('variables/1/.ATTRIBUTES/VARIABLE_VALUE')  # query bias, shape (2, 4)
k2 = reader.get_tensor('variables/2/.ATTRIBUTES/VARIABLE_VALUE')  # key kernel, shape (3, 2, 4)
b3 = reader.get_tensor('variables/3/.ATTRIBUTES/VARIABLE_VALUE')  # key bias, shape (2, 4)
k3 = reader.get_tensor('variables/4/.ATTRIBUTES/VARIABLE_VALUE')  # value kernel, shape (3, 2, 4)
b2 = reader.get_tensor('variables/5/.ATTRIBUTES/VARIABLE_VALUE')  # value bias, shape (2, 4)
MultiHead_output_kernel = reader.get_tensor('variables/6/.ATTRIBUTES/VARIABLE_VALUE')  # attention output kernel, shape (2, 4, 3)
MultiHead_output_bias = reader.get_tensor('variables/7/.ATTRIBUTES/VARIABLE_VALUE')    # attention output bias, shape (3,)
output_dense_kernel = reader.get_tensor('variables/8/.ATTRIBUTES/VARIABLE_VALUE')      # Dense kernel, shape (3, 1)
output_dense_bias = reader.get_tensor('variables/9/.ATTRIBUTES/VARIABLE_VALUE')        # Dense bias, shape (1,)


class my_model:
    def __init__(self, input):
        self.input = input

    def __call__(self):
        x = tf.cast(self.input, dtype=tf.double)
        value = tf.add(project_dense(x, k3), b2)
        key = tf.add(project_dense(x, k2), b3)
        query = tf.add(project_dense(x, k1), b1)
        attention_scores = tf.nn.softmax(compute_AttentionScores(key, query), axis=-1)
        Stacked_value = Value_WeightedStack(attention_scores, value)
        MultiHead_output = tf.add(project_final(Stacked_value, MultiHead_output_kernel), MultiHead_output_bias)
        output = tf.add(output_dense(MultiHead_output, output_dense_kernel), output_dense_bias)
        return output

Verification

Finally, we print the forward-pass outputs of the two models (the hand-written one and TensorFlow's) and compare the results to see whether they agree.

test_in = tf.constant([[[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]], dtype=tf.float32)
test_in0 = tf.constant([[[1, 1, 1], [1, 1, 1]]], dtype=tf.float32)
test_in1 = tf.constant([[[10, 10, 10], [10, 10, 10]]], dtype=tf.float32)
test_in2 = tf.constant([[[100, 100, 100], [100, 100, 100]]], dtype=tf.float32)
test_in3 = tf.constant([[[1000, 1000, 1000], [1000, 1000, 1000]]], dtype=tf.float32)
model = tf.saved_model.load('MultiHeadAttention')
print(model(test_in))
print(model(test_in0))
print(model(test_in1))
print(model(test_in2))
print(model(test_in3))

tf.Tensor(
[[[-0.02398136]
  [-0.02398136]]], shape=(1, 2, 1), dtype=float32)
tf.Tensor(
[[[-0.75980777]
  [-0.75980777]]], shape=(1, 2, 1), dtype=float32)
tf.Tensor(
[[[-8.1180725]
  [-8.1180725]]], shape=(1, 2, 1), dtype=float32)
tf.Tensor(
[[[-81.700714]
  [-81.700714]]], shape=(1, 2, 1), dtype=float32)
tf.Tensor(
[[[-817.52716]
  [-817.52716]]], shape=(1, 2, 1), dtype=float32)
test_in = tf.constant([[[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]], dtype=tf.float32)
test_in0 = tf.constant([[[1, 1, 1], [1, 1, 1]]], dtype=tf.float32)
test_in1 = tf.constant([[[10, 10, 10], [10, 10, 10]]], dtype=tf.float32)
test_in2 = tf.constant([[[100, 100, 100], [100, 100, 100]]], dtype=tf.float32)
test_in3 = tf.constant([[[1000, 1000, 1000], [1000, 1000, 1000]]], dtype=tf.float32)
print(my_model(test_in)())
print(my_model(test_in0)())
print(my_model(test_in1)())
print(my_model(test_in2)())
print(my_model(test_in3)())


tf.Tensor(
[[[-0.02398137]
  [-0.02398137]]], shape=(1, 2, 1), dtype=float64)
tf.Tensor(
[[[-0.75980776]
  [-0.75980776]]], shape=(1, 2, 1), dtype=float64)
tf.Tensor(
[[[-8.11807168]
  [-8.11807168]]], shape=(1, 2, 1), dtype=float64)
tf.Tensor(
[[[-81.70071091]
  [-81.70071091]]], shape=(1, 2, 1), dtype=float64)
tf.Tensor(
[[[-817.5271032]
  [-817.5271032]]], shape=(1, 2, 1), dtype=float64)

The two sets of outputs agree to within floating-point rounding, so the verification succeeds.

It is also worth noting that no activation function appears in any of MultiHeadAttention's projection layers; the only nonlinearity in the entire computation is the softmax applied to the attention scores.
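Beyond eyeballing the printed tensors, the agreement can be checked numerically by reusing test_in and the two models defined above. One caveat: tf.keras.layers.MultiHeadAttention scales the attention scores by 1/sqrt(key_dim) before the softmax, a step which compute_AttentionScores above omits; because every test input here consists of identical tokens, the softmax is uniform either way and the omission does not affect this comparison, but for general inputs the scaling factor would have to be added. A minimal comparison sketch:

import numpy as np

ref = model(test_in).numpy()              # TensorFlow's MultiHeadAttention + Dense (float32)
ours = my_model(test_in)().numpy()        # hand-written einsum forward pass (float64)
print(np.allclose(ref, ours, atol=1e-5))  # True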
