TensorFlow Guide, Part 4: Graphs and Sessions

TensorFlow uses a dataflow graph to represent your computation in terms of the dependencies between individual operations. This leads to a low-level programming model in which you first define the dataflow graph, then create a TensorFlow session to run parts of the graph across a set of local and remote devices.

This guide will be most useful if you intend to use the low-level programming model directly. Higher-level APIs such as tf.estimator.Estimator and Keras hide the details of graphs and sessions from the end user, but this guide may also be useful if you want to understand how these APIs are implemented.


1,Why dataflow graphs?



Dataflow is a common programming model for parallel computing. In a dataflow graph, the nodes represent units of computation, and the edges represent the data consumed or produced by a computation. For example, in a TensorFlow graph, the tf.matmul operation would correspond to a single node with two incoming edges (the matrices to be multiplied) and one outgoing edge (the result of the multiplication).
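The node/edge structure described above can be inspected directly. The following is a small sketch assuming the TensorFlow 1.x graph API (available as `tensorflow.compat.v1` under TensorFlow 2):

```python
# Build a tiny graph and inspect the MatMul node's incoming/outgoing edges.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([[1.0, 2.0]])     # 1x2 matrix
y = tf.constant([[3.0], [4.0]])   # 2x1 matrix
z = tf.matmul(x, y)               # a single MatMul node in the graph

print(z.op.type)                  # 'MatMul'
print(len(z.op.inputs))           # 2 -- the two incoming edges (x and y)
print(len(z.op.outputs))          # 1 -- the single outgoing edge (the result)
```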

Dataflow has several advantages that TensorFlow leverages when executing your programs:

  1. Parallelism. By using explicit edges to represent dependencies
    between operations, it is easy for the system to identify operations
    that can execute in parallel.

  2. Distributed execution. By using explicit edges to represent the
    values that flow between operations, it is possible for TensorFlow
    to partition your program across multiple devices (CPUs, GPUs, and
    TPUs) attached to different machines. TensorFlow inserts the
    necessary communication and coordination between devices.

  3. Compilation. TensorFlow’s XLA compiler can use the information in
    your dataflow graph to generate faster code, for example, by fusing
    together adjacent operations.

  4. Portability. The dataflow graph is a language-independent
    representation of the code in your model. You can build a dataflow
    graph in Python, store it in a SavedModel, and restore it in a C++
    program for low-latency inference.
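The save side of the portability point can be sketched as follows. This is a minimal, hedged example assuming the TensorFlow 1.x API (`tensorflow.compat.v1` under TensorFlow 2); `simple_save` is one of several export paths, and the export directory name is arbitrary:

```python
# Export a graph as a language-independent SavedModel that a C++ program
# could later reload with the SavedModel loader API.
import os
import tempfile

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 2], name="x")
w = tf.Variable([[1.0], [2.0]])
y = tf.matmul(x, w, name="y")

export_dir = os.path.join(tempfile.mkdtemp(), "model")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # simple_save writes the graph and variables to export_dir.
    tf.saved_model.simple_save(sess, export_dir,
                               inputs={"x": x}, outputs={"y": y})
```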

2,What is a tf.Graph?


A tf.Graph contains two relevant kinds of information:

  1. Graph structure. The nodes and edges of the graph, indicating how
    individual operations are composed together, but not prescribing how
    they should be used. The graph structure is like assembly code:
    inspecting it can convey some useful information, but it does not
    contain all of the useful context that source code conveys.

  2. Graph collections. TensorFlow provides a general mechanism for
    storing collections of metadata in a tf.Graph. The
    tf.add_to_collection function enables you to associate a list of
    objects with a key (where tf.GraphKeys defines some of the standard
    keys), and tf.get_collection enables you to look up all objects
    associated with a key. Many parts of the TensorFlow library use this
    facility: for example, when you create a tf.Variable, it is added by
    default to collections representing “global variables” and
    “trainable variables”. When you later come to create a
    tf.train.Saver or tf.train.Optimizer, the variables in these
    collections are used as the default arguments.
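The collection mechanism described above can be sketched as follows, assuming the TensorFlow 1.x graph API (`tensorflow.compat.v1` under TensorFlow 2):

```python
# tf.Variable registers itself in the standard collections by default,
# and collections are keyed by arbitrary strings, so you can add your own.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

v = tf.Variable(0.0, name="counter")

global_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
print(v in global_vars, v in trainable_vars)   # True True

# A user-defined collection keyed by a plain string:
tf.add_to_collection("my_losses", tf.constant(1.0))
my_losses = tf.get_collection("my_losses")
```

This is why `tf.train.Saver()` with no arguments saves all global variables, and why optimizers minimize over the trainable-variables collection by default.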

3,Building a tf.Graph


Most TensorFlow programs start with a dataflow graph construction phase. In this phase, you invoke TensorFlow API functions that construct new tf.Operation (node) and tf.Tensor (edge) objects and add them to a tf.Graph instance. TensorFlow provides a default graph that is an implicit argument to all API functions in the same context. For example:

  1. Calling tf.constant(42.0) creates a single tf.Operation that
    produces the value 42.0, adds it to the default graph, and returns a
    tf.Tensor that represents the value of the constant.

  2. Calling tf.matmul(x, y) creates a single tf.Operation that
    multiplies the values of tf.Tensor objects x and y, adds it to the
    default graph, and returns a tf.Tensor that represents the result of
    the multiplication.

  3. Executing v = tf.Variable(0) adds to the graph a tf.Operation that
    will store a writeable tensor value that persists between
    tf.Session.run calls. The tf.Variable object wraps this operation,
    and can be used like a tensor, which will read the current value of
    the stored value. The tf.Variable object also has methods such as
    assign and assign_add that create tf.Operation objects that, when
    executed, update the stored value. (See Variables for more
    information about variables.)

  4. Calling tf.train.Optimizer.minimize will add operations and tensors
    to the default graph that calculate gradients, and return a
    tf.Operation that, when run, will apply those gradients to a set of
    variables.
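The four construction steps above can be sketched in one script, assuming the TensorFlow 1.x graph API (`tensorflow.compat.v1` under TensorFlow 2); the loss and learning rate are illustrative choices, not from the original text:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

c = tf.constant(42.0)                      # 1. one op producing the value 42.0
x = tf.constant([[1.0, 2.0]])
y = tf.constant([[3.0], [4.0]])
m = tf.matmul(x, y)                        # 2. one MatMul op: 1x2 @ 2x1 -> 1x1

v = tf.Variable(0.0)                       # 3. a writable value that persists
inc = v.assign_add(1.0)                    #    across tf.Session.run calls

loss = tf.square(v - 5.0)                  # an illustrative loss in v
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = opt.minimize(loss)              # 4. gradient ops + an apply op

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    c_val = sess.run(c)                    # 42.0
    m_val = sess.run(m)                    # [[11.]]
    sess.run(inc)
    v_val = sess.run(v)                    # 1.0
    sess.run(train_op)                     # one gradient-descent step on v
```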

Most programs rely solely on the default graph. However, see Dealing with multiple graphs for more advanced use cases. High-level APIs such as the tf.estimator.Estimator API manage the default graph on your behalf, and may, for example, create different graphs for training and evaluation.

Note: Calling most functions in the TensorFlow API merely adds operations and tensors to the default graph, but does not perform the actual computation; the computation runs only when you pass the resulting objects to tf.Session.run.
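This lazy behavior can be demonstrated with a minimal sketch, again assuming the TensorFlow 1.x graph API (`tensorflow.compat.v1` under TensorFlow 2):

```python
# Building ops only records them in the graph; nothing is computed until
# tf.Session.run is called.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

c = tf.constant(42.0)
d = c + 1.0            # adds an Add op to the graph; computes nothing yet

print(d)               # a symbolic Tensor, not the value 43.0

with tf.Session() as sess:
    result = sess.run(d)   # the actual computation happens here
    print(result)          # 43.0
```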
