EDIT, following the OP's comment
You can flatten the input feature vectors to shape [-1, n_type * n_features], apply a carefully chosen matrix multiplication, and reshape the output from [-1, n_type * n_neurons] back to [-1, n_type, n_neurons].
The operator tensor is block-diagonal with shape [n_type * n_features, n_type * n_neurons], each block being one of the n_type matrices in weights.
To build the block-diagonal matrix, I used another answer (from here).
This looks like:
inputs = tf.placeholder("float", shape=[None, n_type, n_features])
inputs = tf.reshape(inputs, shape=[-1, n_type * n_features])
weights = tf.Variable(FNN_weight_initializer([n_type, n_features, n_neurons]))
split_weights = tf.split(weights, num_or_size_splits=n_type, axis=0)
# each element of split_weights is a tensor of shape [1, n_features, n_neurons] -> squeeze out the leading dim
split_weights = [tf.squeeze(w, axis=0) for w in split_weights]
block_matrix = block_diagonal(split_weights) # from the abovementioned reference
Hidden1 = tf.matmul(inputs, block_matrix)
# shape : [None, n_type * n_neurons]
Hidden1 = tf.reshape(Hidden1, [-1, n_type, n_neurons])
# shape : [None, n_type, n_neurons]
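To see why the flatten / block-diagonal / reshape round trip is equivalent to applying each type's weight matrix independently, here is a minimal NumPy sketch (NumPy stands in for TensorFlow so it runs anywhere; the sizes are hypothetical):

```python
import numpy as np

# Hypothetical small sizes, just for illustration.
batch, n_type, n_features, n_neurons = 4, 3, 5, 2
rng = np.random.default_rng(0)
inputs = rng.standard_normal((batch, n_type, n_features))
weights = rng.standard_normal((n_type, n_features, n_neurons))

# Build the block-diagonal operator: one [n_features, n_neurons] block per type.
block = np.zeros((n_type * n_features, n_type * n_neurons))
for t in range(n_type):
    block[t * n_features:(t + 1) * n_features,
          t * n_neurons:(t + 1) * n_neurons] = weights[t]

# Flatten, multiply, reshape back -- mirroring the TF code above.
flat = inputs.reshape(batch, n_type * n_features)
out = (flat @ block).reshape(batch, n_type, n_neurons)

# Reference: apply each type's weight matrix independently.
ref = np.stack([inputs[:, t] @ weights[t] for t in range(n_type)], axis=1)
assert np.allclose(out, ref)
```

The zero off-diagonal blocks guarantee that features of one type never leak into the neurons of another type.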
Original answer
According to the documentation of tf.matmul (reference), the tensors you multiply need to have the same rank.
When the rank is > 2, only the last two dimensions need to be compatible for matrix multiplication; all the leading dimensions need to match exactly.
So, to "Is it possible to multiply rank-3 tensors with tf.matmul?", the answer is: yes, it is possible, but conceptually it is still a batch of rank-2 multiplications.
Therefore, some reshaping is needed:
inputs = tf.placeholder("float", shape=[None, n_type, n_features])
inputs = tf.reshape(inputs, shape=[-1, n_type, 1, n_features])
weights = tf.Variable(FNN_weight_initializer([n_type, n_features, n_neurons]))
weights = tf.expand_dims(weights, 0)
# shape : [1, n_type, n_features, n_neurons]
weights = tf.tile(weights, [tf.shape(inputs)[0], 1, 1, 1])
# shape : [None, n_type, n_features, n_neurons]
Hidden1 = tf.matmul(inputs, weights)
# shape : [None, n_type, 1, n_neurons]
Hidden1 = tf.reshape(Hidden1, [-1, n_type, n_neurons])
# shape : [None, n_type, n_neurons]
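As an aside, the expand/tile/matmul/reshape sequence can also be written as a single einsum contraction (tf.einsum supports the same notation); here is a NumPy sketch with hypothetical sizes showing the two agree:

```python
import numpy as np

batch, n_type, n_features, n_neurons = 4, 3, 5, 2
rng = np.random.default_rng(0)
inputs = rng.standard_normal((batch, n_type, n_features))
weights = rng.standard_normal((n_type, n_features, n_neurons))

# One einsum contracts over n_features while keeping the type axis aligned;
# no tiling of the weights over the batch dimension is needed.
hidden = np.einsum('btf,tfn->btn', inputs, weights)

# Same result as broadcasting the weights and doing a batched matmul,
# as in the code above.
tiled = np.broadcast_to(weights, (batch, n_type, n_features, n_neurons))
ref = np.matmul(inputs[:, :, None, :], tiled).reshape(batch, n_type, n_neurons)
assert np.allclose(hidden, ref)
```

Avoiding the tf.tile also avoids materializing a weight copy per batch element.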