The standard formulas are easy to find elsewhere, so I won't derive them here. In build, three trainable weight matrices WQ, WK, and WV are defined; multiplying the input by each of them produces the query, key, and value tensors Q, K, and V of shape (batch_size, len, embedding_size). Q is then multiplied by the transpose of K ((batch_size, len, embedding_size) x (batch_size, embedding_size, len) = (batch_size, len, len)), and a softmax is applied to the result. Next, V is expanded (broadcast) to (batch_size, len, len, embedding_size), the softmax output is expanded to (batch_size, len, len, 1), and the two are multiplied elementwise ((batch_size, len, len, 1) x (batch_size, len, len, embedding_size) = (batch_size, len, len, embedding_size)). Summing over axis 2 then gives the final output; this expand-and-sum trick is just an elementwise way of writing the batched matrix product softmax(QK^T)V, as the sanity check after the code confirms. Since the snippet below was cut off partway through build, the build and call bodies are filled in here as one plausible completion consistent with this walkthrough; note that in the code the projections map embedding_size to key_size for Q and K and to out_dim for V, per the constructor arguments.
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer

class MyAttention(Layer):
    """Attention mechanism (single-head, unscaled self-attention)."""
    def __init__(self, out_dim, key_size=8, **kwargs):
        super(MyAttention, self).__init__(**kwargs)
        self.out_dim = out_dim
        self.key_size = key_size
    def build(self, input_shape):
        super(MyAttention, self).build(input_shape)
        embedding_size = int(input_shape[-1])
        # NOTE: the rest of build and call are a reconstruction (the original snippet was
        # truncated here); trainable projections: Q/K -> key_size wide, V -> out_dim wide
        self.WQ = self.add_weight(name='WQ', shape=(embedding_size, self.key_size), initializer='glorot_uniform')
        self.WK = self.add_weight(name='WK', shape=(embedding_size, self.key_size), initializer='glorot_uniform')
        self.WV = self.add_weight(name='WV', shape=(embedding_size, self.out_dim), initializer='glorot_uniform')
    def call(self, inputs):
        Q, Kmat, V = K.dot(inputs, self.WQ), K.dot(inputs, self.WK), K.dot(inputs, self.WV)
        A = K.softmax(K.batch_dot(Q, Kmat, axes=[2, 2]))  # (batch, len, len)
        # expand A to (batch, len, len, 1), V to (batch, 1, len, out_dim); multiply elementwise, sum over axis 2
        return K.sum(K.expand_dims(A, 3) * K.expand_dims(V, 1), axis=2)
    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1], self.out_dim)
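As a sanity check on that expand-multiply-sum step: it is exactly the batched matrix product softmax(QK^T)V, just written elementwise. A small NumPy sketch (the shapes are arbitrary, chosen only for illustration):

import numpy as np

batch_size, length, out_dim = 2, 5, 8
A = np.random.rand(batch_size, length, length)   # stands in for softmax(QK^T)
V = np.random.rand(batch_size, length, out_dim)  # stands in for the projected values
# expand A to (batch, len, len, 1) and V to (batch, 1, len, out_dim),
# multiply elementwise and sum over axis 2 ...
expanded = (A[..., None] * V[:, None, :, :]).sum(axis=2)
# ... which matches the batched matmul A @ V exactly
assert np.allclose(expanded, A @ V)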
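For completeness, a minimal usage sketch of the layer (the sequence length 20 and embedding size 128 below are hypothetical, assuming a tf.keras setup):

from tensorflow.keras import layers, models

inp = layers.Input(shape=(20, 128))   # (len, embedding_size)
out = MyAttention(out_dim=64)(inp)    # -> (None, 20, 64)
model = models.Model(inp, out)
model.summary()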