Optimizers:
Base class: tf.train.Optimizer
Methods on the base class:
minimize(loss, var_list=None), # returns an op that, each step, computes the gradients of loss w.r.t. the variables and applies them; it combines compute_gradients() and apply_gradients().
compute_gradients(loss, var_list=None), # returns grads_and_vars, a list of (gradient, variable) tuples. You can process the 'gradient' part however you need, e.g. cap/clip the gradients.
apply_gradients(grads_and_vars), # asks the optimizer to apply the (possibly capped) gradients.
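A minimal sketch of the compute_gradients / cap / apply_gradients pattern described above (assuming the TF 1.x-style API, accessed here through tf.compat.v1 so it also runs under TF 2; the variable name x and the clip range are illustrative choices, not from the original notes):

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.get_variable("x", initializer=3.0)
loss = tf.square(x)  # loss = x^2, so dloss/dx = 2x

opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
# Step 1: get the list of (gradient, variable) tuples.
grads_and_vars = opt.compute_gradients(loss, var_list=[x])
# Step 2: do whatever you need to the 'gradient' part, e.g. cap it.
capped = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
# Step 3: ask the optimizer to apply the capped gradients.
train_op = opt.apply_gradients(capped)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)  # raw gradient 2*3=6 is capped to 1: x <- 3 - 0.1*1
    final_x = sess.run(x)
print(final_x)  # 2.9
```

Without the capping step, the update would have been x <- 3 - 0.1*6 = 2.4; the cap limits the step size.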
Subclasses: class tf.train.GradientDescentOptimizer
class tf.train.AdagradOptimizer
class tf.train.MomentumOptimizer
class tf.train.AdamOptimizer
class tf.train.FtrlOptimizer
class tf.train.RMSPropOptimizer
Note: tf.train.Optimizer.minimize() returns a graph node, i.e. an operation, not a var_list.
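A small check of this note (a sketch assuming the TF 1.x-style API via tf.compat.v1; the variable w and the learning rate are illustrative): minimize() hands back a tf.Operation you pass to sess.run(), not the variables themselves.

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable("w", initializer=2.0)
loss = tf.square(w - 1.0)  # dloss/dw = 2*(w - 1)
train_op = tf.compat.v1.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)  # one step: w <- 2 - 0.5 * 2*(2-1) = 1.0
    final_w = sess.run(w)
print(type(train_op))  # a tf.Operation (a graph node), not a list of variables
```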