You can use torch.stack(x, dim=1) to merge a list of tensors into one tensor while also getting the transposed layout: stacking along dim=1 is equivalent to stacking along dim=0 and then transposing the result. For example:
import torch
import torch.nn.functional as F

weights = []
for embedding in channel_embeddings:
    # Each entry is a 1-D tensor of per-sample attention logits for one channel.
    weights.append(torch.sum(
        torch.multiply(self.weights['attention'],
                       torch.matmul(embedding, self.weights['attention_mat'])), 1))
# Stack the list along dim=1 -> (batch_size, num_channels), then softmax across channels.
# Note: torch.stack takes the list itself, not [weights], and F.softmax needs an explicit dim.
score = F.softmax(torch.stack(weights, dim=1), dim=1)
Sometimes when rewriting TensorFlow code in PyTorch, a tf.stack followed by tf.transpose can be replaced by a single torch.stack with the right dim argument: choosing the stack dimension directly produces the transposed layout, so the separate transpose becomes unnecessary.
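A minimal sketch of that equivalence (the shapes and the list xs here are illustrative assumptions, standing in for per-channel logits like weights above):

import torch

# Three 1-D tensors of length 4, e.g. per-channel logits for a batch of 4 samples (assumed shapes).
xs = [torch.randn(4) for _ in range(3)]

stacked_then_transposed = torch.stack(xs, dim=0).t()  # (3, 4) -> (4, 3), the stack-then-transpose pattern
stacked_on_dim1 = torch.stack(xs, dim=1)              # (4, 3) in one step

assert torch.equal(stacked_then_transposed, stacked_on_dim1)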