In an interview I was asked how to freeze gradients, and I realized I'm not very familiar with gradient handling in TensorFlow, so here are some notes:
optimizer.minimize is a combination of two functions: 1. compute_gradients and 2. apply_gradients.
compute_gradients takes the loss and a var_list of trainable variables (this is where you can filter out the parameters you want to freeze) and returns a list of (gradient, variable) pairs.
apply_gradients takes that list of (gradient, variable) pairs and returns an op that performs the gradient update.
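A minimal sketch of the two-step split, freezing everything under a hypothetical "frozen/" scope (TF 1.x; the optimizer, loss, and scope name are assumptions):

import tensorflow as tf

optimizer = tf.train.AdamOptimizer(1e-3)
# Drop the variables we want frozen from the var_list
train_vars = [v for v in tf.trainable_variables()
              if not v.name.startswith("frozen/")]
# Step 1: (gradient, variable) pairs, only for the kept variables
grads_and_vars = optimizer.compute_gradients(loss, var_list=train_vars)
# Step 2: the op that applies the updates
train_op = optimizer.apply_gradients(grads_and_vars)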
Having one optimizer update only a subset of the parameters:
dnn_optimizer.minimize(
    loss,
    var_list=ops.get_collection(
        ops.GraphKeys.TRAINABLE_VARIABLES,
        scope=dnn_absolute_scope
    )
)
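Here ops is TensorFlow's internal tensorflow.python.framework.ops module (this style appears in TF's own estimator source); in user code the public equivalent would be something like:

train_op = dnn_optimizer.minimize(
    loss,
    var_list=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                               scope=dnn_absolute_scope))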
# Register an extra L2 loss (here on FM embedding values) in the REGULARIZATION_LOSSES collection
tf.losses.add_loss(
    l2_reg_emb * tf.nn.l2_loss(tf.expand_dims(flat_val, 1), "fm_l2loss"),
    tf.GraphKeys.REGULARIZATION_LOSSES)
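Losses added to tf.GraphKeys.REGULARIZATION_LOSSES are not applied automatically; they have to be summed back into the training objective. A sketch, assuming task_loss is the main loss tensor:

total_loss = task_loss + tf.losses.get_regularization_loss()
train_op = optimizer.minimize(total_loss)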
Methods for freezing variables in TensorFlow (tensorflow freeze variable):
https://www.cnblogs.com/hrlnw/p/10400057.html
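Besides filtering var_list and tf.stop_gradient, a variable can also be frozen at creation time by excluding it from the trainable collection altogether (sketch; vocab_size and emb_dim are placeholders):

# trainable=False keeps the variable out of tf.GraphKeys.TRAINABLE_VARIABLES,
# so no optimizer will ever update it
frozen_emb = tf.get_variable("frozen_emb", shape=[vocab_size, emb_dim],
                             trainable=False)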
tf.stop_gradient
# Only update the embeddings for test_ids
update_nodes = tf.nn.embedding_lookup(model.context_embeds, tf.squeeze(test_ids))
no_update_nodes = tf.nn.embedding_lookup(model.context_embeds, tf.squeeze(train_ids))
update_nodes = tf.scatter_nd(test_ids, update_nodes, tf.shape(model.context_embeds))
# Block gradient updates to the embeddings for train_ids
no_update_nodes = tf.stop_gradient(tf.scatter_nd(train_ids, no_update_nodes, tf.shape(model.context_embeds)))
model.context_embeds = update_nodes + no_update_nodes
sess.run(model.context_embeds)
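The same trick in a self-contained form (a minimal sketch, TF 1.x; the 4x2 embedding table and the id split are made up):

import numpy as np
import tensorflow as tf

emb = tf.get_variable("emb", initializer=np.ones((4, 2), dtype=np.float32))
upd_ids = tf.constant([[0], [1]])  # rows that keep receiving gradients
frz_ids = tf.constant([[2], [3]])  # rows to freeze

upd_part = tf.scatter_nd(upd_ids,
                         tf.nn.embedding_lookup(emb, tf.squeeze(upd_ids)),
                         tf.shape(emb))
# stop_gradient blocks backprop through the frozen rows
frz_part = tf.stop_gradient(
    tf.scatter_nd(frz_ids,
                  tf.nn.embedding_lookup(emb, tf.squeeze(frz_ids)),
                  tf.shape(emb)))
mixed = upd_part + frz_part

loss = tf.reduce_sum(tf.square(mixed))
grad = tf.convert_to_tensor(tf.gradients(loss, emb)[0])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))  # rows 2 and 3 come back all zeros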
An investigation of batch_norm in TensorFlow, and of tf.control_dependencies with tf.GraphKeys.UPDATE_OPS:
https://blog.csdn.net/huitailangyz/article/details/85015611
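The takeaway pattern: tf.layers.batch_normalization registers its moving-mean/variance updates in tf.GraphKeys.UPDATE_OPS, and they only run if the train op depends on them (sketch, assuming optimizer and loss exist):

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)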
TensorFlow API: gradient clipping with compute_gradients and apply_gradients:
https://blog.csdn.net/NockinOnHeavensDoor/article/details/80632677
GraphSAGE has corresponding code as well:
grads_and_vars = self.optimizer.compute_gradients(self.loss)
# Clip each gradient element-wise into [-5.0, 5.0]; some grads may be None
clipped_grads_and_vars = [(tf.clip_by_value(grad, -5.0, 5.0) if grad is not None else None, var)
                          for grad, var in grads_and_vars]
self.grad, _ = clipped_grads_and_vars[0]  # keep the first gradient around for inspection
self.opt_op = self.optimizer.apply_gradients(clipped_grads_and_vars)
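A common variant clips by global norm instead of element-wise value (sketch; optimizer and loss are assumed, and tf.clip_by_global_norm passes None gradients through unchanged):

grads, variables = zip(*optimizer.compute_gradients(loss))
clipped_grads, _ = tf.clip_by_global_norm(grads, 5.0)
opt_op = optimizer.apply_gradients(zip(clipped_grads, variables))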