Logical AND
tf.reduce_all(input_tensor,
              axis=None,              # the dimensions to reduce over
              keep_dims=False,
              name=None,
              reduction_indices=None  # deprecated alias for axis
              )
Logical OR
tf.reduce_any(input_tensor,
              axis=None,              # the dimensions to reduce over
              keep_dims=False,
              name=None,
              reduction_indices=None  # deprecated alias for axis
              )
tf.mod
(element-wise division of corresponding elements, returning the remainder)
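A NumPy analogue of the element-wise remainder (illustration only; tf.mod behaves the same way on matching shapes):

```python
import numpy as np

a = np.array([7, 8, 9])
b = np.array([4, 4, 4])
print(np.mod(a, b))  # element-wise remainder -> [3 0 1]
```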
tf.equal
(compares corresponding elements of two tensors: True where they are equal, False otherwise)
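A NumPy analogue of the element-wise comparison (illustration only):

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
y = np.array([[1, 0], [3, 9]])
# True where corresponding elements match, False otherwise
print(np.equal(x, y))  # [[ True False]
                       #  [ True False]]
```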
tf.cond(tf.equal(remainder, tf.zeros(tf.shape(remainder), dtype=tf.int32)),
        lambda: x,
        lambda: x + multiple - remainder)
(controls dataflow: if the predicate is True, the result is x; otherwise it is x + multiple - remainder. Note that tf.cond requires a scalar boolean predicate, so this works as written when remainder is a scalar.)
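The tf.cond pattern above rounds x up to the nearest multiple of `multiple`. A plain-Python sketch of the same arithmetic (the function name is just for illustration):

```python
def round_up(x, multiple):
    remainder = x % multiple
    # tf.cond picks the first branch when the predicate is True
    return x if remainder == 0 else x + multiple - remainder

print(round_up(10, 4))  # remainder 2 -> 10 + 4 - 2 = 12
print(round_up(12, 4))  # remainder 0 -> 12 unchanged
```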
tf.sequence_mask(lengths,
                 maxlen=None,
                 dtype=tf.bool,
                 name=None)
Example:
import tensorflow as tf
a = tf.sequence_mask([1, 2, 3], 5)
b = tf.sequence_mask([[1, 2], [3, 4]])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(a))
    print(sess.run(b))
Output:
[[ True False False False False]
[ True True False False False]
[ True True True False False]]
Explanation: maxlen is 5, so there are 5 columns; lengths has three elements [1, 2, 3], so there are three rows, and the first 1, 2, and 3 elements of each row respectively are True.
[[[ True False False False]
[ True True False False]]
[[ True True True False]
[ True True True True]]]
Explanation: since maxlen is not given, it defaults to the maximum value in lengths, which is 4, so there are 4 columns. lengths is a 2-D array; treating it as two 1-D lengths vectors, the output is simply the stack of the masks those two vectors would produce.
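The rule "position j is True when j < lengths[i], with maxlen defaulting to max(lengths)" can be reconstructed in NumPy (a sketch of the semantics, not TF's implementation; the function name is hypothetical):

```python
import numpy as np

def sequence_mask(lengths, maxlen=None):
    lengths = np.asarray(lengths)
    if maxlen is None:
        maxlen = lengths.max()
    # broadcast a [maxlen] position row against lengths expanded on a new last axis
    return np.arange(maxlen) < lengths[..., None]

print(sequence_mask([1, 2, 3], 5))      # shape (3, 5), as in the first example
print(sequence_mask([[1, 2], [3, 4]]))  # shape (2, 2, 4), as in the second
```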
Viewing a tensor's shape
x = tf.constant([0, 1, 2])
x_shape = tf.shape(x)
with tf.Session() as sess:
    print(sess.run(x_shape))  # [3]
loss = tf.losses.mean_squared_error(labels=mel_targets,
predictions=outputs,
weights=weights)
What it actually computes: labels - predictions, squared element-wise, multiplied by weights, then normalized over the nonzero-weight positions to give the mean squared error.
Example:
mel_targets = tf.constant([[0,1,2,3],[4,5,6,7],[8,9,10,11]])
outputs = tf.constant([[1,1,1,1],[1,1,1,1],[1,1,1,1]])
weights = tf.constant([[1,1,1,1],[1,1,1,1],[0,0,0,0]])
res = tf.losses.mean_squared_error(labels=mel_targets, predictions=outputs, weights=weights)
sess = tf.Session()
print(sess.run(res))
The output is 11.5. Effectively, labels - outputs is computed and squared, the last row is zeroed out by the weights, and the mean of the remaining squared differences is taken over the positions whose weight is nonzero.
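Re-computing the 11.5 by hand in NumPy (assumed semantics: sum of weights * (labels - predictions)^2 divided by the count of nonzero weights, which matches TF1's default SUM_BY_NONZERO_WEIGHTS reduction):

```python
import numpy as np

labels  = np.array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]], dtype=float)
preds   = np.ones((3, 4))
weights = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]], dtype=float)

sq = weights * (labels - preds) ** 2      # last row zeroed out by weights
loss = sq.sum() / np.count_nonzero(weights)  # 92 / 8
print(loss)  # 11.5
```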