How can I trigger a Python function from inside a tf.keras custom loss function?

Inside my custom loss function I need to call the following pure-Python method of my memory object:

    def update_priorities(self, traces_idxs, td_errors):
        """Updates the priorities of the traces with specified indexes."""
        self.priorities[traces_idxs] = td_errors + eps

I have tried calling a wrapper function with tf.py_function, but it is only invoked when it is embedded in the graph, i.e. when it has inputs and outputs and those outputs are actually used. So I tried passing some tensors through it without performing any operations on them, and the function does now get called. Here is my entire custom loss function:

    def masked_q_loss(data, y_pred):
        """Computes the MSE between the Q-values of the actions that were taken and the cumulative
        discounted rewards obtained after taking those actions. Updates trace priorities.
        """
        action_batch, target_qvals, traces_idxs = data[:, 0], data[:, 1], data[:, 2]
        seq = tf.cast(tf.range(0, tf.shape(action_batch)[0]), tf.int32)
        action_idxs = tf.transpose(tf.stack([seq, tf.cast(action_batch, tf.int32)]))
        qvals = tf.gather_nd(y_pred, action_idxs)

        def update_priorities(_qvals, _target_qvals, _traces_idxs):
            """Computes the TD error and updates memory priorities."""
            td_error = _target_qvals - _qvals
            _traces_idxs = tf.cast(_traces_idxs, tf.int32)
            mem.update_priorities(_traces_idxs, td_error)
            return _qvals

        qvals = tf.py_function(func=update_priorities, inp=[qvals, target_qvals, traces_idxs], Tout=[tf.float32])
        return tf.keras.losses.mse(qvals, target_qvals)

However, because of the call

    mem.update_priorities(_traces_idxs, td_error)

I get the following error:

ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

I don't need to compute gradients for update_priorities; I just want to call it at a specific point in the graph computation and otherwise ignore it. How can I do that?
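For reference, here is a minimal sketch of one possible workaround (not necessarily the fix the original author ended up using): compute the loss from the original, differentiable qvals tensor, feed the tf.py_function only inputs wrapped in tf.stop_gradient so it sits entirely off the differentiation path, and attach it as a control dependency so it still executes. mem is the replay-memory object from the question and is assumed to be in scope; everything else is illustrative.

    import tensorflow as tf

    def masked_q_loss(data, y_pred):
        """MSE between the Q-values of the taken actions and their targets;
        updates memory priorities purely as a side effect."""
        action_batch, target_qvals, traces_idxs = data[:, 0], data[:, 1], data[:, 2]
        seq = tf.range(tf.shape(action_batch)[0], dtype=tf.int32)
        action_idxs = tf.stack([seq, tf.cast(action_batch, tf.int32)], axis=1)
        qvals = tf.gather_nd(y_pred, action_idxs)

        def update_priorities(_td_errors, _traces_idxs):
            # Pure side effect: `mem` is the replay memory from the question.
            mem.update_priorities(_traces_idxs.numpy().astype(int), _td_errors.numpy())
            return 0.0  # dummy output so py_function has something to return

        # stop_gradient keeps the priority update off the gradient path entirely.
        update_op = tf.py_function(
            func=update_priorities,
            inp=[tf.stop_gradient(target_qvals - qvals), traces_idxs],
            Tout=tf.float32)

        # The control dependency forces the update to run whenever the loss is
        # evaluated, while the loss itself is still built from the differentiable
        # `qvals` tensor, so Keras no longer sees a `None` gradient.
        with tf.control_dependencies([update_op]):
            return tf.keras.losses.mse(qvals, target_qvals)

The key point is that the tensor returned by tf.py_function is never used in the loss, so no gradient is ever requested through it; the control dependency is only there to keep the op from being pruned out of the graph.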
