Code that raises the error:
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels_holder,1), logits=label_predict)
correct_predict = tf.equal(tf.argmax(label_predict, 1), tf.argmax(labels_holder, 1))
accuracy = tf.reduce_mean(input_tensor=tf.cast(x=correct_predict, dtype=tf.float32))
train_op = tf.train.GradientDescentOptimizer(learning_rate=self.lr).minimize(loss=loss)
tf.summary.scalar(name="loss", tensor=loss)
tf.summary.scalar(name="accuracy", tensor=accuracy)
merged = tf.summary.merge_all()
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    writer = tf.summary.FileWriter(logdir="./summary/", graph=sess.graph)
    for i in range(self.epoch_num):
        batch_images, batch_labels = mnist.train.next_batch(self.batch_size)
        batch_images = tf.reshape(tensor=batch_images, shape=[self.batch_size, 28, 28, 1])
        batch_images = tf.image.resize_images(images=batch_images, size=(32, 32))
        sess.run(train_op, feed_dict={images_holder: batch_images.eval(), labels_holder: batch_labels})
        accuracy_result = sess.run(accuracy, feed_dict={images_holder: batch_images.eval(), labels_holder: batch_labels})
        summary_result = sess.run(fetches=merged, feed_dict={images_holder: batch_images.eval(), labels_holder: batch_labels})
        writer.add_summary(summary=summary_result, global_step=i)
Error message:
InvalidArgumentError (see above for traceback): tags and values not the same shape: [] != [100] (tag 'loss')
[[node loss (defined at /Users/jiweiwang/Git Repository/machine_learning_basis/tensorflow_practice/image_recognition/classical_network/LeNet-5/LeNet-5.py:104) = ScalarSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss/tags, SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits)]]
The error stems from how the loss is defined, using sparse_softmax_cross_entropy_with_logits:
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels_holder,1), logits=label_predict)
Checking the TensorFlow API docs (https://tensorflow.google.cn/api_docs/python/tf/losses/sparse_softmax_cross_entropy), the return value of sparse_softmax_cross_entropy_with_logits is described as: "Weighted loss Tensor of the same type as logits. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar."
In other words, it returns a tensor with the same shape as the input labels, i.e. one loss value per example in the batch of 100, while tf.summary.scalar requires a rank-0 (scalar) tensor. Hence the shape mismatch [] != [100] in the error above.
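The per-example shape behavior can be illustrated without TensorFlow; this is a minimal NumPy sketch that mirrors what sparse_softmax_cross_entropy_with_logits computes (the function name and sample values here are illustrative, not from the original code):

```python
import numpy as np

def sparse_softmax_xent(labels, logits):
    """Per-example sparse softmax cross-entropy, mirroring the shape
    behavior of tf.nn.sparse_softmax_cross_entropy_with_logits."""
    # Numerically stable log-softmax: subtract the row max before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out the log-probability of the true class for each example
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.4]])
labels = np.array([0, 1])

loss_vec = sparse_softmax_xent(labels, logits)
print(loss_vec.shape)   # (2,) -- one loss per example, not a scalar
print(loss_vec.mean())  # reducing (e.g. with a mean) yields the scalar a summary expects
```

With a batch of 100 examples the result would have shape (100,), which is exactly the [100] tensor that tf.summary.scalar rejects.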
Solution:
Change tf.summary.scalar(name="loss", tensor=loss) to tf.summary.histogram(name="loss", tensor=loss), since histogram summaries accept tensors of any shape. Alternatively, reduce the per-example losses to a scalar first, e.g. loss = tf.reduce_mean(loss), which lets you keep the scalar summary.
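The shape requirement behind the two remedies can be checked directly; a small NumPy sketch (random values stand in for real per-example losses, batch size 100 as in the error message):

```python
import numpy as np

# Stand-in for the per-example losses that
# sparse_softmax_cross_entropy_with_logits returns for a batch of 100.
rng = np.random.default_rng(0)
per_example_loss = rng.random(100).astype(np.float32)

# Remedy 1: log the full vector as a histogram summary,
# which accepts tensors of any shape.
print(per_example_loss.shape)  # (100,) -- fine for tf.summary.histogram

# Remedy 2: reduce to a scalar first; then tf.summary.scalar works.
scalar_loss = per_example_loss.mean()
print(np.ndim(scalar_loss))    # 0 -- rank-0, as tf.summary.scalar requires
```

Note that reducing with tf.reduce_mean also gives the optimizer a single scalar objective, which is the more common pattern in training loops.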