TensorFlow API Notes <1> ---- Training

This post goes through the tools and techniques TensorFlow provides for model training, including key components such as optimizers, gradient computation, gradient clipping, and learning rate decay. It also looks at how to use the Coordinator and QueueRunner to manage parallel tasks, and how to deploy TensorFlow applications in a distributed environment.

Saving a copy here for later use, since getting over the firewall is not easy.

Training

The Training module contains the following groups of classes and functions used for model training:

Optimizers, Gradient Computation, Gradient Clipping, Decaying the learning rate, Moving Averages, Coordinator and QueueRunner, Distributed execution, Reading Summaries from Event Files, Training Hooks, Training Utilities.



tf.train provides a set of classes and functions that help train models.

Optimizers

The Optimizer base class provides methods to compute gradients for a loss and apply gradients to variables. A collection of subclasses implement classic optimization algorithms such as GradientDescent and Adagrad.

You never instantiate the Optimizer class itself, but instead instantiate one of the subclasses.

You do not instantiate the Optimizer class itself; instead you instantiate one of its subclasses, e.g. tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta'). A usage sketch follows the list of subclasses below.

tf.train.Optimizer
tf.train.GradientDescentOptimizer
tf.train.AdadeltaOptimizer

tf.train.AdagradOptimizer

tf.train.AdagradOptimizer.__init__(
    learning_rate,
    initial_accumulator_value=0.1,
    use_locking=False,
    name='Adagrad'
)

tf.train.AdagradDAOptimizer

tf.train.MomentumOptimizer
tf.train.AdamOptimizer
tf.train.FtrlOptimizer
tf.train.ProximalGradientDescentOptimizer
tf.train.ProximalAdagradOptimizer
tf.train.RMSPropOptimizer
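
A minimal usage sketch for one of these subclasses, assuming the TensorFlow 1.x graph-mode API and a made-up linear model and squared loss (the model itself is not from the original docs, just an illustration):

```python
import tensorflow as tf  # assumes TensorFlow 1.x graph mode

# Hypothetical linear model and squared loss, just to have something to optimize.
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

# Instantiate a concrete subclass, never tf.train.Optimizer itself.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)  # compute_gradients + apply_gradients in one call

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, feed_dict={x: [[1.0]], y: [[2.0]]})
```

The same pattern applies to AdagradOptimizer, AdamOptimizer, and the other subclasses; only the constructor arguments differ.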

Gradient Computation

TensorFlow provides functions to compute the derivatives for a given TensorFlow computation graph, adding operations to the graph. The optimizer classes automatically compute derivatives on your graph, but creators of new Optimizers or expert users can call the lower-level functions below.
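
A sketch of the lower-level path, assuming TensorFlow 1.x: tf.gradients adds gradient ops to the graph directly, while compute_gradients / apply_gradients is the split that optimizer.minimize performs internally. The variable and loss are hypothetical.

```python
import tensorflow as tf  # TensorFlow 1.x

w = tf.Variable(3.0)
loss = tf.square(w)  # d(loss)/dw = 2w

# Low-level symbolic differentiation: adds gradient ops to the graph.
grad_w = tf.gradients(loss, [w])[0]

# The same two steps done through an optimizer, instead of calling minimize().
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grads_and_vars = opt.compute_gradients(loss, var_list=[w])
train_op = opt.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad_w))  # 6.0
    sess.run(train_op)
```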

Gradient Clipping

TensorFlow provides several operations that you can use to add clipping functions to your graph. You can use these functions to perform general data clipping, but they're particularly useful for handling exploding or vanishing gradients.
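
A minimal sketch of clipping gradients by global norm between compute_gradients and apply_gradients, assuming TensorFlow 1.x; the variable, loss, and clip_norm value are illustrative only:

```python
import tensorflow as tf  # TensorFlow 1.x

w = tf.Variable([3.0, 4.0])
loss = tf.reduce_sum(tf.square(w))

opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grads_and_vars = opt.compute_gradients(loss)

# Rescale all gradients so their combined norm does not exceed 1.0,
# a common guard against exploding gradients.
grads, variables = zip(*grads_and_vars)
clipped_grads, global_norm = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = opt.apply_gradients(list(zip(clipped_grads, variables)))
```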

Decaying the learning rate
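
This group contains the learning-rate decay helpers such as tf.train.exponential_decay. A minimal sketch, assuming TensorFlow 1.x and illustrative decay values; `loss` below stands in for your model's loss:

```python
import tensorflow as tf  # TensorFlow 1.x

global_step = tf.train.get_or_create_global_step()

# Start at 0.1 and multiply by 0.96 every 1000 steps (staircase decay).
learning_rate = tf.train.exponential_decay(
    learning_rate=0.1,
    global_step=global_step,
    decay_steps=1000,
    decay_rate=0.96,
    staircase=True)

opt = tf.train.GradientDescentOptimizer(learning_rate)
# Passing global_step makes minimize()/apply_gradients() increment it each update:
# train_op = opt.minimize(loss, global_step=global_step)
```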

Moving Averages

Some training algorithms, such as GradientDescent and Momentum, often benefit from maintaining a moving average of variables during optimization. Using the moving averages for evaluation often improves results significantly.
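
A sketch of maintaining a shadow average with tf.train.ExponentialMovingAverage, assuming TensorFlow 1.x; the variable and the stand-in training step are hypothetical:

```python
import tensorflow as tf  # TensorFlow 1.x

w = tf.Variable(0.0)
train_op = w.assign_add(1.0)  # stand-in for a real training step

# Maintain an exponential moving average of the listed variables.
ema = tf.train.ExponentialMovingAverage(decay=0.999)
maintain_averages_op = ema.apply([w])

# Update the average after each training step.
with tf.control_dependencies([train_op]):
    train_with_ema = tf.group(maintain_averages_op)

shadow_w = ema.average(w)  # the smoothed value, typically read at evaluation time
```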

Coordinator and QueueRunner

See Threading and Queues for how to use threads and queues. For documentation on the Queue API, see Queues.
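
A minimal sketch of the Coordinator/QueueRunner pattern, assuming TensorFlow 1.x; the toy FIFO queue fed with random values is illustrative only:

```python
import tensorflow as tf  # TensorFlow 1.x

# A toy FIFO queue fed by QueueRunner threads.
queue = tf.FIFOQueue(capacity=10, dtypes=[tf.float32])
enqueue_op = queue.enqueue(tf.random_normal([]))
qr = tf.train.QueueRunner(queue, [enqueue_op] * 2)
tf.train.add_queue_runner(qr)

dequeued = queue.dequeue()

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        for _ in range(5):
            sess.run(dequeued)
    finally:
        coord.request_stop()   # ask the enqueue threads to stop
        coord.join(threads)    # wait for them to finish
```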

Distributed execution

See Distributed TensorFlow for more information about how to configure a distributed TensorFlow program.
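
A rough sketch of how a cluster is described and a per-process server is started, assuming TensorFlow 1.x; the hostnames, job names, and task indices are hypothetical:

```python
import tensorflow as tf  # TensorFlow 1.x

# Hypothetical two-machine cluster: one parameter server and one worker.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222"],
})

# Each process starts a server for its own job and task.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Ops can then be pinned to devices in the cluster.
with tf.device("/job:ps/task:0"):
    w = tf.Variable(0.0)
```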

Reading Summaries from Event Files

See Summaries and TensorBoard for an overview of summaries, event files, and visualization in TensorBoard.
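
A small sketch of reading summary records back out of an event file with tf.train.summary_iterator, assuming TensorFlow 1.x; the file path is a placeholder:

```python
import tensorflow as tf  # TensorFlow 1.x

# Iterate over the records of a single event file (path is hypothetical).
for event in tf.train.summary_iterator("/tmp/logs/events.out.tfevents.example"):
    for value in event.summary.value:
        print(event.step, value.tag, value.simple_value)
```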

Training Hooks

Hooks are tools that run during the training or evaluation of a model.
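
A minimal sketch of attaching hooks through a MonitoredTrainingSession, assuming TensorFlow 1.x; the stand-in training op and the specific step counts are illustrative:

```python
import tensorflow as tf  # TensorFlow 1.x

global_step = tf.train.get_or_create_global_step()
train_op = tf.assign_add(global_step, 1)  # stand-in for a real training step

hooks = [
    tf.train.StopAtStepHook(last_step=100),            # end training after 100 steps
    tf.train.LoggingTensorHook({"step": global_step},   # log the step every 10 iterations
                               every_n_iter=10),
]

# MonitoredTrainingSession runs the hooks around each session.run() call.
with tf.train.MonitoredTrainingSession(hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```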

Training Utilities
