Eager Execution

TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later.


Eager execution is a flexible machine learning platform for research and experimentation, providing:
(1) An intuitive interface--structure your code naturally and use Python data structures. Quickly iterate on small models and small data.
(2) Easier debugging--call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.
(3) Natural control flow--use Python control flow instead of graph control flow, simplifying the specification of dynamic models.


To start eager execution, add tf.enable_eager_execution() to the beginning of the program or console session. Do not add this operation to other modules that the program calls.
###
tf.enable_eager_execution()


Enabling eager execution changes how TensorFlow operations behave--now they immediately evaluate and return their values to Python. tf.Tensor objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using print() or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients.
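

A minimal sketch of this behavior (the printed formatting may vary slightly across TF 1.x versions):
###
import tensorflow as tf

tf.enable_eager_execution()

# Operations evaluate immediately; Python lists are converted to tensors
x = [[2.]]
m = tf.matmul(x, x)
print(m)  # => tf.Tensor([[4.]], shape=(1, 1), dtype=float32)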


Eager execution works nicely with NumPy. NumPy operations accept tf.Tensor arguments, TensorFlow math operations convert Python objects and NumPy arrays to tf.Tensor objects, and the tf.Tensor.numpy method returns the object's value as a NumPy ndarray.
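

A small sketch of this interop, continuing with the tf import and tf.enable_eager_execution() call from above:
###
import numpy as np

ndarray = np.ones([3, 3])
tensor = tf.multiply(ndarray, 42)  # TensorFlow ops convert ndarrays to tensors
print(np.add(tensor, 1))           # NumPy ops accept tf.Tensor arguments
print(tensor.numpy())              # .numpy() returns the value as an ndarray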


When composing layers into models, you can use tf.keras.Sequential to represent a model that is a linear stack of layers. It is easy to use for basic models.
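

For example, a two-layer stack (the layer sizes and input shape here are illustrative):
###
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(784,)),  # input shape declared up front
    tf.keras.layers.Dense(10)
])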


It's not required to set an input shape for the tf.keras.Model class, since the parameters are set the first time input is passed to the layer.
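

A sketch of a subclassed model; the Dense layers here are illustrative, and they create their weights on the first call, so no input shape is declared:
###
class MNISTModel(tf.keras.Model):
    def __init__(self):
        super(MNISTModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(units=10)
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, inputs):
        # Parameters are created here, the first time input is passed in
        result = self.dense1(inputs)
        result = self.dense2(result)
        return result

model = MNISTModel()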


Automatic differentiation is useful for implementing machine learning algorithms such as backpropagation for training neural networks. During eager execution, use tf.GradientTape to trace operations for computing gradients later.
###
import tensorflow.contrib.eager as tfe  # tfe is shorthand for tf.contrib.eager in TF 1.x

w = tfe.Variable([[1.0]])

with tf.GradientTape() as tape:
    loss = w * w

grad = tape.gradient(loss, [w])
print(grad)  # the gradient of w * w at w = 1.0 is [[2.0]]


With graph execution, program state (such as variables) is stored in global collections, and its lifetime is managed by the tf.Session object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python objects.
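

For example, a GPU-backed variable can be freed as soon as the last Python reference to it goes away (assuming a GPU is present):
###
if tf.test.is_gpu_available():
    with tf.device("gpu:0"):
        v = tfe.Variable(tf.random_normal([1000, 1000]))
        v = None  # the GPU memory backing v can now be released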


tfe.Checkpoint can save and restore tfe.Variable objects to and from checkpoints.
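

For instance (the checkpoint path is illustrative):
###
x = tfe.Variable(10.)
checkpoint = tfe.Checkpoint(x=x)  # track the variable under the name "x"

x.assign(2.)                            # assign a new value and save it
save_path = checkpoint.save('./ckpt/')

x.assign(11.)                           # change the variable after saving

checkpoint.restore(save_path)           # restore the value from the checkpoint
print(x)  # => 2.0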


To save and load models, tfe.Checkpoint stores the internal state of objects without requiring hidden variables. To record the state of a model, an optimizer, and a global step, pass them to a tfe.Checkpoint.
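

A minimal sketch, with an illustrative model and checkpoint directory:
###
import os

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = '/tmp/model_dir'  # illustrative path
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')
root = tfe.Checkpoint(optimizer=optimizer,
                      model=model,
                      optimizer_step=tf.train.get_or_create_global_step())

root.save(file_prefix=checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))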


tfe.metrics are stored as objects. Update a metric by passing new data to the callable, and retrieve the result using the metric's result method.
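

For example:
###
m = tfe.metrics.Mean("loss")
m(0)
m(5)
print(m.result())  # 2.5
m([8, 9])
print(m.result())  # 5.5, i.e. (0 + 5 + 8 + 9) / 4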


For compute-heavy models, such as ResNet50 training on a GPU, eager execution performance is comparable to graph execution. But this gap grows larger for models with less computation, and there is work to be done on optimizing hot code paths for models with lots of small operations.


While eager execution makes development and debugging more interactive, TensorFlow graph execution has advantages for distributed training, performance optimizations, and production deployment.


Use tf.data for input processing instead of queues; it's faster and easier.
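

A small sketch; tfe.Iterator wraps a dataset for Python-style iteration under eager execution (newer 1.x releases also allow iterating the dataset directly):
###
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
dataset = dataset.map(tf.square).shuffle(2).batch(2)

for batch in tfe.Iterator(dataset):
    print(batch)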


Once eager execution is enabled with tf.enable_eager_execution, it cannot be turned off. Start a new Python session to return to graph execution.


It's best to write code that works with both eager execution and graph execution. This gives you eager's interactive experimentation and debuggability along with the distributed performance benefits of graph execution.