TensorFlow: Building Graphs

This chapter is a selected translation of, and commentary on, the TensorFlow API documentation for graph construction.

Building Graphs

Core graph data structures

class tf.Graph

A TensorFlow computation, represented as a dataflow graph.
A Graph contains a set of Operation objects, which represent units of computation; and Tensor objects, which represent the units of data that flow between operations.
A default Graph is always registered, and accessible by calling tf.get_default_graph(). To add an operation to the default graph, simply call one of the functions that defines a new Operation:

Every TF (TensorFlow, hereafter TF) computation is represented as a dataflow graph. A graph contains a set of operation nodes (ops), which are the units of computation, and Tensor objects, which are the units of data flowing between those nodes. As soon as TF is loaded (e.g. via import tensorflow as tf), a default Graph is registered internally; it can be retrieved with tf.get_default_graph(). Simply calling one of the functions that defines a new Operation adds that node to this default graph:

c = tf.constant(4.0)
assert c.graph is tf.get_default_graph()

Another typical usage involves the Graph.as_default() context manager, which overrides the current default graph for the lifetime of the context:

Another typical usage involves explicitly introducing the Graph.as_default() context manager: calling with another_graph.as_default() opens a context in which another_graph replaces the previous default graph, becoming the default graph (the dataflow graph, hereafter simply "graph") for the lifetime of that scope.

g = tf.Graph()
with g.as_default():
  # Define operations and tensors in `g`.
  c = tf.constant(30.0)
  assert c.graph is g

Important note: This class is not thread-safe for graph construction. All operations should be created from a single thread, or external synchronization must be provided. Unless otherwise specified, all methods are not thread-safe.

Important note: the Graph class is not thread-safe during graph construction. All nodes (operations) must be created from a single thread, or external synchronization must be provided. Unless otherwise specified, none of the methods are thread-safe.
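
As a rough illustration of "external synchronization" (a minimal sketch only; the lock and the add_constant helper below are illustrative, not part of the TensorFlow API):

import threading
import tensorflow as tf

g = tf.Graph()
graph_lock = threading.Lock()   # external synchronization for graph construction

def add_constant(value):
    # The default graph is thread-local, so each thread re-enters `g` explicitly,
    # and the lock ensures only one thread mutates the graph at a time.
    with graph_lock, g.as_default():
        return tf.constant(value)

threads = [threading.Thread(target=add_constant, args=(float(i),)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()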

tf.Graph.as_default()

Returns a context manager that makes this Graph the default graph.

This method should be used if you want to create multiple graphs in the same process. For convenience, a global default graph is provided, and all ops will be added to this graph if you do not create a new graph explicitly. Use this method with the with keyword to specify that ops created within the scope of a block should be added to this graph.

The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function.

The following code examples are equivalent:

This method returns a context manager that makes this Graph the default graph. Use it when you want to create multiple graphs in the same process. For convenience, TF provides a global default graph from the start, and all ops are added to that graph unless you explicitly create a new one.

The default graph is a property of the current thread. If you create a new thread and want to use the default graph in it, you must explicitly open a scope with the with keyword and g.as_default() inside that thread's function, and build the ops within that scope.

# 1. Using Graph.as_default():
g = tf.Graph()
with g.as_default():
  c = tf.constant(5.0)
  assert c.graph is g

# 2. Constructing and making default:
with tf.Graph().as_default() as g:
  c = tf.constant(5.0)
  assert c.graph is g

tf.Graph.as_graph_def(from_version=None, add_shapes=False)

Returns a serialized GraphDef representation of this graph.

The serialized GraphDef can be imported into another Graph (using import_graph_def()) or used with the C++ Session API.

This method is thread-safe.

This method returns a serialized GraphDef representation of this graph. The serialized GraphDef can be imported into another graph (via import_graph_def()) or used with the C++ Session API.

This method is thread-safe.
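
A minimal sketch of serializing one graph and importing it into another (TF 1.x; the graph and tensor names below are illustrative):

g1 = tf.Graph()
with g1.as_default():
  c = tf.constant(1.0, name="c")

graph_def = g1.as_graph_def()      # serialized GraphDef protocol buffer

g2 = tf.Graph()
with g2.as_default():
  # Imported node names get an "import/" prefix by default.
  tf.import_graph_def(graph_def)
  imported_c = g2.get_tensor_by_name("import/c:0")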

tf.Graph.finalize()

Finalizes this graph, making it read-only.

After calling g.finalize(), no new operations can be added to g. This method is used to ensure that no operations are added to a graph when it is shared between multiple threads, for example when using a QueueRunner.

After calling this method, no further nodes (operations) can be added to the graph (its structure is locked). This guarantees that no extra nodes are added while the graph is shared across multiple threads, for example when using a QueueRunner (multi-threaded, queue-based file reading).

tf.Graph.finalized

True if this graph has been finalized.

Returns True if the graph has been finalized (locked).
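
A small sketch of the locking behaviour (TF 1.x; the names are illustrative):

g = tf.Graph()
with g.as_default():
  c = tf.constant(1.0)

g.finalize()
assert g.finalized          # the graph is now read-only

with g.as_default():
  try:
    d = tf.constant(2.0)    # adding an op to a finalized graph raises an error
  except RuntimeError as e:
    print(e)                # e.g. "Graph is finalized and cannot be modified."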

tf.Graph.control_dependencies(control_inputs)

Returns a context manager that specifies control dependencies.

Use with the with keyword to specify that all operations constructed within the context should have control dependencies on control_inputs. For example:

Returns a context manager that specifies control dependencies. Use it with the with keyword, passing in the dependency ops; the operations constructed inside the context will only run after the ops in control_inputs have executed.

with g.control_dependencies([a, b, c]):
  # `d` and `e` will only run after `a`, `b`, and `c` have executed.
  d = ...
  e = ...

Multiple calls to control_dependencies() can be nested, and in that case a new Operation will have control dependencies on the union of control_inputs from all active contexts.

Calls to this method can be nested; in that case an op constructed in the innermost context must wait for the control dependencies of all enclosing contexts to execute before it can run.

with g.control_dependencies([a, b]):
  # Ops constructed here run after `a` and `b`.
  with g.control_dependencies([c, d]):
    # Ops constructed here run after `a`, `b`, `c`, and `d`.

You can pass None to clear the control dependencies:

You can also pass None as the control-dependency argument at some level. In that case the ops in that context (level 2 below) have no control dependencies and run normally, and ops in the next nested level (level 3) depend only on the control inputs of their own level (c, d), no longer on those of any outer level.

with g.control_dependencies([a, b]):#1
  # Ops constructed here run after `a` and `b`.
  with g.control_dependencies(None):#2
    # Ops constructed here run normally, not waiting for either `a` or `b`.
    with g.control_dependencies([c, d]):#3
      # Ops constructed here run after `c` and `d`, also not waiting
      # for either `a` or `b`.

N.B. The control dependencies context applies only to ops that are constructed within the context. Merely using an op or tensor in the context does not add a control dependency. The following example illustrates this point:

The control inputs only apply to ops that are actually constructed inside the context; merely using an existing op or tensor inside the context does not add a control dependency.

# WRONG
def my_func(pred, tensor):
  t = tf.matmul(tensor, tensor)
  with tf.control_dependencies([pred]):
    # The matmul op is created outside the context, so no control
    # dependency will be added.
    return t

# RIGHT
def my_func(pred, tensor):
  with tf.control_dependencies([pred]):
    # The matmul op is created in the context, so a control dependency
    # will be added.
    return tf.matmul(tensor, tensor)
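
As a concrete, runnable variant of the same idea (TF 1.x; the variable v and the op names are illustrative), the pattern below forces an assignment to run before a dependent read:

v = tf.Variable(0)
assign_op = tf.assign(v, 1)

with tf.control_dependencies([assign_op]):
  # tf.identity creates a new op *inside* the context, so it carries the
  # control dependency; merely returning `v` here would not.
  read_after_assign = tf.identity(v)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(read_after_assign))   # ==> 1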

tf.Graph.device(device_name_or_function)
Returns a context manager that pins the ops created in it to a specified device.

Returns a context manager that specifies the default device to use.

The device_name_or_function argument may either be a device name string, a device function, or None:

  • If it is a device name string, all operations constructed in this context will be assigned to the device with that name, unless overridden by a nested device() context.
  • If it is a function, it will be treated as a function from Operation objects to device name strings, and invoked each time a new Operation is created. The Operation will be assigned to the device with the returned name.
  • If it is None, all device() invocations from the enclosing context will be ignored.

For information about the valid syntax of device name strings, see the documentation in DeviceNameUtils.

For example:

with g.device('/gpu:0'):
  # All operations constructed in this context will be placed
  # on GPU 0.
  with g.device(None):
    # All operations constructed in this context will have no
    # assigned device.

# Defines a function from `Operation` to device string.
def matmul_on_gpu(n):
  if n.type == "MatMul":
    return "/gpu:0"
  else:
    return "/cpu:0"

with g.device(matmul_on_gpu):
  # All operations of type "MatMul" constructed in this context
  # will be placed on GPU 0; all other operations will be placed
  # on CPU 0.

N.B. The device scope may be overridden by op wrappers or other library code. For example, a variable assignment op v.assign() must be colocated with the tf.Variable v, and incompatible device scopes will be ignored.

tf.Graph.name_scope(name)

Returns a context manager that creates hierarchical names for operations.

A graph maintains a stack of name scopes. A with name_scope(...): statement pushes a new name onto the stack for the lifetime of the context.

Returns a context manager that gives the operations created inside it hierarchical names; every node constructed in the context is named under the given scope.

The name argument will be interpreted as follows:

  • A string (not ending with '/') will create a new name scope, in which name is appended to the prefix of all operations created in the context. If name has been used before, it will be made unique by calling self.unique_name(name).
  • A scope previously captured from a with g.name_scope(...) as scope: statement will be treated as an "absolute" name scope, which makes it possible to re-enter existing scopes.
  • A value of None or the empty string will reset the current name scope to the top-level (empty) name scope.

For example (note how the with keyword is nested; the point is that the same op name can be reused, because identical names live in different name-scope contexts):

with tf.Graph().as_default() as g:
  c = tf.constant(5.0, name="c")
  assert c.op.name == "c"
  c_1 = tf.constant(6.0, name="c")
  assert c_1.op.name == "c_1"

  # Creates a scope called "nested"
  with g.name_scope("nested") as scope:
    nested_c = tf.constant(10.0, name="c")
    assert nested_c.op.name == "nested/c"

    # Creates a nested scope called "inner".
    with g.name_scope("inner"):
      nested_inner_c = tf.constant(20.0, name="c")
      assert nested_inner_c.op.name == "nested/inner/c"

    # Create a nested scope called "inner_1".
    with g.name_scope("inner"):
      nested_inner_1_c = tf.constant(30.0, name="c")
      assert nested_inner_1_c.op.name == "nested/inner_1/c"

      # Treats `scope` as an absolute name scope, and
      # switches to the "nested/" scope.
      with g.name_scope(scope):
        nested_d = tf.constant(40.0, name="d")
        assert nested_d.op.name == "nested/d"

        with g.name_scope(""):
          e = tf.constant(50.0, name="e")
          assert e.op.name == "e"

.... The remaining methods of the Graph class are not covered here.

class tf.Operation

Represents a graph node that performs computation on tensors.

An Operation is a node in a TensorFlow Graph that takes zero or more Tensor objects as input, and produces zero or more Tensor objects as output. Objects of type Operation are created by calling a Python op constructor (such as tf.matmul()) or Graph.create_op().

For example c = tf.matmul(a, b) creates an Operation of type "MatMul" that takes tensors a and b as input, and produces c as output.

After the graph has been launched in a session, an Operation can be executed by passing it to Session.run().

op.run() is a shortcut for calling tf.get_default_session().run(op).

Each Operation is a node in the graph: it takes zero or more Tensor objects as input and produces zero or more Tensor objects as output. Nodes are created by calling an op constructor; for example, a matrix multiplication in a graph is one node. After the graph has been launched in a session, the node can be executed with Session.run(op), or with op.run(), which is shorthand for tf.get_default_session().run(op).
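
A brief sketch (TF 1.x; the tensors are illustrative):

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)           # `c` is a Tensor; `c.op` is the "MatMul" Operation

assert c.op.type == "MatMul"

with tf.Session() as sess:
  sess.run(c.op)              # runs the op itself (its outputs are discarded)
  print(sess.run(c))          # ==> [[11.]]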

class tf.Tensor

Represents one of the outputs of an Operation.

A Tensor can be understood as a handle to one of the outputs of a node (just as everything in Python is an object, in TF every variable, constant, and placeholder is a Tensor).

Note: the Tensor class will be replaced by Output in the future. Currently these two are aliases for each other.

Tensor is a symbolic handle to one of the outputs of an Operation. It does not hold the values of that operation's output, but instead provides a means of computing those values in a TensorFlow Session.

The Tensor and Output classes are equivalent. A Tensor is a symbolic handle to one of the outputs of an operation; it does not hold the value of that output, but instead provides a means of computing the value in a TensorFlow Session.

This class has two primary purposes:

  1. Tensor can be passed as an input to another Operation. This builds a dataflow connection between operations, which enables TensorFlow to execute an entire Graph that represents a large, multi-step computation.

  2. After the graph has been launched in a session, the value of the Tensor can be computed by passing it to Session.run(). t.eval() is a shortcut for calling tf.get_default_session().run(t).

In the graph, a Tensor can be passed as an input to another node. This builds a dataflow connection between operations, which lets TF execute an entire multi-step computation in a session. Once the graph has been launched in a session, the value of a Tensor can be obtained with Session.run(tensor); t.eval() is equivalent to tf.get_default_session().run(t).
In the code below, c, d, and e are all Tensors:
# Build a dataflow graph.
c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d)

# Construct a `Session` to execute the graph.
sess = tf.Session()

# Execute the graph and store the value that `e` represents in `result`.
result = sess.run(e)

Some of the main properties and methods:
tf.Tensor.dtype

The DType of elements in this tensor.

tf.Tensor.name

The string name of this tensor.

tf.Tensor.value_index

The index of this tensor in the outputs of its Operation.

tf.Tensor.graph

The Graph that contains this tensor.

tf.Tensor.op

The Operation that produces this tensor as an output.

tf.Tensor.consumers()

Returns a list of Operations that consume this tensor.
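
A short sketch of these attributes (TF 1.x; the actual op names depend on what is already in the graph):

c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.matmul(c, c)

print(c.dtype)                              # ==> <dtype: 'float32'>
print(c.name)                               # ==> e.g. "Const:0"
print(c.value_index)                        # ==> 0 (first output of its op)
print(c.graph is tf.get_default_graph())    # ==> True
print(c.op.type)                            # ==> "Const"
print(c.consumers())                        # ==> ops that take `c` as input (here the MatMul)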

tf.Tensor.get_shape()

Returns the TensorShape that represents the shape of this tensor.

c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

print(c.get_shape())
==> TensorShape([Dimension(2), Dimension(3)])

d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])

print(d.get_shape())
==> TensorShape([Dimension(4), Dimension(2)])

# Raises a ValueError, because `c` and `d` do not have compatible
# inner dimensions.
e = tf.matmul(c, d)

f = tf.matmul(c, d, transpose_a=True, transpose_b=True)

print(f.get_shape())
==> TensorShape([Dimension(3), Dimension(4)])

Tensor types

class tf.DType

Represents the type of the elements in a Tensor.

The following DType objects are defined:

  • tf.float16: 16-bit half-precision floating-point.
  • tf.float32: 32-bit single-precision floating-point.
  • tf.float64: 64-bit double-precision floating-point.
  • tf.bfloat16: 16-bit truncated floating-point.
  • tf.complex64: 64-bit single-precision complex.
  • tf.complex128: 128-bit double-precision complex.
  • tf.int8: 8-bit signed integer.
  • tf.uint8: 8-bit unsigned integer.
  • tf.uint16: 16-bit unsigned integer.
  • tf.int16: 16-bit signed integer.
  • tf.int32: 32-bit signed integer.
  • tf.int64: 64-bit signed integer.
  • tf.bool: Boolean.
  • tf.string: String.
  • tf.qint8: Quantized 8-bit signed integer.
  • tf.quint8: Quantized 8-bit unsigned integer.
  • tf.qint16: Quantized 16-bit signed integer.
  • tf.quint16: Quantized 16-bit unsigned integer.
  • tf.qint32: Quantized 32-bit signed integer.

tf.as_dtype(type_value)

Converts the given type_value to a DType
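
For instance (a minimal sketch; numpy is only needed for the second conversion):

import numpy as np

assert tf.as_dtype("float32") == tf.float32     # from a string
assert tf.as_dtype(np.int64) == tf.int64        # from a numpy type

print(tf.float32.is_floating)                   # ==> True
print(tf.int32.min, tf.int32.max)               # integer range of the dtype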


Other commonly used methods are not covered here.
