Error 2: x and y must have the same dtype, got tf.float32 != tf.int32
Solution:
In object_detection\core\losses.py, around line 301, add the following type conversion before the divide:
self._logit_scale = tf.cast(self._logit_scale, tf.float32)  # TODO change: add cast to float to avoid divide error
After the change, the class looks like this:
class WeightedSoftmaxClassificationLoss(Loss):
  """Softmax loss function."""

  def __init__(self, logit_scale=1.0):
    """Constructor.

    Args:
      logit_scale: When this value is high, the prediction is "diffused" and
        when this value is low, the prediction is made peakier.
        (default 1.0)
    """
    self._logit_scale = logit_scale

  def _compute_loss(self, prediction_tensor, target_tensor, weights):
    """Compute loss function.

    Args:
      prediction_tensor: A float tensor of shape [batch_size, num_anchors,
        num_classes] representing the predicted logits for each class
      target_tensor: A float tensor of shape [batch_size, num_anchors,
        num_classes] representing one-hot encoded classification targets
      weights: a float tensor of shape [batch_size, num_anchors]

    Returns:
      loss: a float tensor of shape [batch_size, num_anchors]
        representing the value of the loss function.
    """
    self._logit_scale = tf.cast(self._logit_scale, tf.float32)  # TODO change: add cast to float to avoid divide error
    num_classes = prediction_tensor.get_shape().as_list()[-1]
    prediction_tensor = tf.divide(
        prediction_tensor, self._logit_scale, name='scale_logit')
    per_row_cross_ent = (tf.nn.softmax_cross_entropy_with_logits(
        labels=tf.reshape(target_tensor, [-1, num_classes]),
        logits=tf.reshape(prediction_tensor, [-1, num_classes])))
    return tf.reshape(per_row_cross_ent, tf.shape(weights)) * weights
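The root cause is that `logit_scale` can arrive as an integer, and unlike NumPy, TensorFlow's `tf.divide` does not promote mismatched dtypes. A minimal standalone sketch reproducing the mismatch and the cast fix (assuming TensorFlow 2.x eager execution; the tensor values are illustrative, not from the detection model):

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.5]])  # tf.float32 prediction logits
logit_scale = tf.constant(2)             # tf.int32 -- mimics an integer config value

# Dividing directly would raise:
#   InvalidArgumentError: x and y must have the same dtype,
#   got tf.float32 != tf.int32
# scaled = tf.divide(logits, logit_scale)

# The fix shown above: cast the scale to float32 before dividing.
logit_scale = tf.cast(logit_scale, tf.float32)
scaled = tf.divide(logits, logit_scale, name='scale_logit')
print(scaled.numpy())  # [[1.   0.5  0.25]]
```

The same one-line cast is what the patched `_compute_loss` applies to `self._logit_scale` before calling `tf.divide`.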