tf.function and AutoGraph

This post shows how to use tf.function to get polymorphic functions and to build static graphs from Python functions. The examples show how separate tf.function objects avoid sharing traces, and how get_concrete_function and input_signature ensure that only one function graph is built.

A function

```python
import tensorflow as tf

def scaled_elu(z, scale=1.0, alpha=1.0):
    # z >= 0 ? scale * z : scale * alpha * tf.nn.elu(z)
    is_positive = tf.greater_equal(z, 0.0)
    return scale * tf.where(is_positive, z, alpha * tf.nn.elu(z))

scaled_elu_tf = tf.function(scaled_elu)
```

The tf.function version runs faster: TensorFlow can optimize the traced graph and execute it without per-op Python overhead.
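A quick way to check this (a sketch assuming the `scaled_elu` and `scaled_elu_tf` defined above; the exact speedup depends on hardware and input size):

```python
import timeit

z = tf.random.normal((1000, 1000))

# Warm up so the graph for this input signature is traced before timing.
scaled_elu_tf(z)

print('eager:', timeit.timeit(lambda: scaled_elu(z), number=100))
print('graph:', timeit.timeit(lambda: scaled_elu_tf(z), number=100))
```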

Function polymorphism

```python
@tf.function
def double(a):
    print('Tracing with:', a)
    return a + a

print('Result:', double(tf.constant(1)))
print()
print('Result:', double(tf.constant(1.1)))
print()
print('Result:', double(tf.constant('c')))
print()
```
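Each new input dtype above triggers a fresh trace, which is why 'Tracing with:' is printed before each result. Calling `double` again with an already-seen dtype reuses the cached graph, so only the result is printed:

```python
# No 'Tracing with:' output here: the int32 graph traced above is reused.
print('Result:', double(tf.constant(2)))
```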
Controlling which inputs the static graph is traced for

To control the tracing behavior:

- Create a new tf.function: separate tf.function objects are guaranteed not to share traces.
- Use the get_concrete_function method to get a specific trace.
- Specify an input_signature when calling tf.function to ensure that only one function graph is built.
```python
double_strings = double.get_concrete_function(tf.TensorSpec(shape=None, dtype=tf.string))
```
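The returned concrete function is a single traced graph and only accepts inputs matching its signature (a sketch using the `double` above):

```python
print(double_strings(tf.constant('a')))  # works: tf.string matches the trace
# double_strings(tf.constant(1))         # would raise: int32 does not match tf.string
```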

```python
class MyModel(tf.keras.Model):
    def __init__(self, keep_probability=0.2):
        super(MyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(4)
        self.dense2 = tf.keras.layers.Dense(5)
        self.keep_probability = keep_probability

    @tf.function
    def call(self, inputs, training=True):
        y = self.dense2(self.dense1(inputs))
        if training:
            return tf.nn.dropout(y, self.keep_probability)
        else:
            return y

model = MyModel()
x = tf.random.uniform((8, 3))  # example input; any 2-D float tensor works
model(x, training=True)   # executes a graph, with dropout
model(x, training=False)  # executes a graph, without dropout
```

For reference, here is the full `help(tf.function)` output:

Help on function function in module tensorflow.python.eager.def_function:

function(func=None, input_signature=None, autograph=True, experimental_autograph_options=None, experimental_relax_shapes=False)
Creates a callable TensorFlow graph from a Python function.

`function` constructs a callable that executes a TensorFlow graph
(`tf.Graph`) created by tracing the TensorFlow operations in `func`.
This allows the TensorFlow runtime to apply optimizations and exploit
parallelism in the computation defined by `func`.

_Example Usage_

```python
def f(x, y):
  return tf.reduce_mean(tf.multiply(x ** 2, 3) + y)

g = tf.function(f)

x = tf.constant([[2.0, 3.0]])
y = tf.constant([[3.0, -2.0]])

# `f` and `g` will return the same value, but `g` will be executed as a
# TensorFlow graph.
assert f(x, y).numpy() == g(x, y).numpy()

# Tensors and tf.Variables used by the Python function are captured in the
# graph.
@tf.function
def h():
  return f(x, y)

assert (h().numpy() == f(x, y).numpy()).all()

# Data-dependent control flow is also captured in the graph. Supported
# control flow statements include `if`, `for`, `while`, `break`, `continue`,
# `return`.
@tf.function
def g(x):
  if tf.reduce_sum(x) > 0:
    return x * x
  else:
    return -x // 2

# print and TensorFlow side effects are supported, but exercise caution when
# using Python side effects like mutating objects, saving to files, etc.
l = []

@tf.function
def g(x):
  for i in x:
    print(i)                              # Works
    tf.compat.v1.assign(v, i)                       # Works
    tf.compat.v1.py_func(lambda i: l.append(i))(i)  # Works
    l.append(i)                           # Caution! Doesn't work.
```

Note that unlike other TensorFlow operations, we don't convert python
numerical inputs to tensors. Moreover, a new graph is generated for each
distinct python numerical value, for example calling `g(2)` and `g(3)` will
generate two new graphs (while only one is generated if you call
`g(tf.constant(2))` and `g(tf.constant(3))`). Therefore, python numerical
inputs should be restricted to arguments that will have few distinct values,
such as hyperparameters like the number of layers in a neural network. This
allows TensorFlow to optimize each variant of the neural network.
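For example, a quick way to see this retracing behavior (a sketch assuming TF 2.x; the Python print fires once per trace):

```python
@tf.function
def g(x):
    print('tracing for', x)
    return tf.add(x, 1)

g(2)               # traces a graph specialized to the Python value 2
g(3)               # traces again for the Python value 3
g(tf.constant(2))  # traces once for int32 tensors...
g(tf.constant(3))  # ...and reuses that graph here (no print)
```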

_Referencing `tf.Variable`s_

The Python function `func` may reference stateful objects (such as
`tf.Variable`).
These are captured as implicit inputs to the callable returned by `function`.
For example:

```python
c = tf.Variable(0)

@tf.function
def f(x):
  c.assign_add(1)
  return x + tf.compat.v1.to_float(c)

assert int(c) == 0
assert f(1.0) == 2.0
assert int(c) == 1
assert f(1.0) == 3.0
assert int(c) == 2
```

`function` can be applied to methods of an object. For example:

```python
class Dense(object):
  def __init__(self):
    self.W = tf.Variable(tf.compat.v1.glorot_uniform_initializer()((10, 10)))
    self.b = tf.Variable(tf.zeros(10))

  @tf.function
  def compute(self, x):
    return tf.matmul(x, self.W) + self.b

d1 = Dense()
d2 = Dense()
x = tf.random.uniform((10, 10))
# d1 and d2 are using distinct variables
assert not (d1.compute(x).numpy() == d2.compute(x).numpy()).all()
```

_Usage with `tf.keras`_

The `call` methods of a `tf.keras.Model` subclass can be decorated with
`function` in order to apply graph execution optimizations on it.
For example:

```python
class MyModel(tf.keras.Model):
  def __init__(self, keep_probability=0.2):
    super(MyModel, self).__init__()
    self.dense1 = tf.keras.layers.Dense(4)
    self.dense2 = tf.keras.layers.Dense(5)
    self.keep_probability = keep_probability

  @tf.function
  def call(self, inputs, training=True):
    y = self.dense2(self.dense1(inputs))
    if training:
      return tf.nn.dropout(y, self.keep_probability)
    else:
      return y

model = MyModel()
model(x, training=True)  # executes a graph, with dropout
model(x, training=False) # executes a graph, without dropout
```

_Input Signatures_

`function` instantiates a separate graph for every unique set of input
shapes and datatypes. For example, the following code snippet will result
in three distinct graphs being traced, as each input has a different
shape.

```python
@tf.function
def f(x): return tf.add(x, 1.)

scalar = tf.constant(1.0)
vector = tf.constant([1.0, 1.0])
matrix = tf.constant([[3.0]])

f(scalar)
f(vector)
f(matrix)
```

An "input signature" can be optionally provided to `function` to control
the graphs traced. The input signature specifies the shape and type of each
`Tensor` argument to the function using a `tf.TensorSpec` object. For example,
the following code snippet ensures that a single graph is created where the
input `Tensor` is required to be a floating point tensor with no restrictions
on shape.

```python
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def f(x): return tf.add(x, 1.)
```

When an `input_signature` is specified, the callable will convert the inputs
to the specified TensorSpecs.
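So with the signature above, inputs that are not already float32 tensors are converted before the graph runs (a sketch using the `f` defined above):

```python
print(f([1, 2]))              # Python list converted to a float32 tensor -> [2., 3.]
print(f(tf.constant([1.0])))  # already matches the signature
```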

_Tracing and staging_

When `autograph` is `True`, all Python control flow that depends on `Tensor`
values is staged into a TensorFlow graph. When `autograph` is `False`, the
function is traced and control flow is not allowed to depend on data.

Note that `function` only stages TensorFlow operations, all Python code that
`func` executes and does not depend on data will shape the _construction_ of
the graph.
For example, consider the following:

```python
import numpy as np

def add_noise():
  return tf.eye(5) + np.random.randn(5, 5)

traced = tf.function(add_noise)
```

`add_noise()` will return a different output every time it is invoked.
However, `traced()` will return the same value every time it is called,
since a particular random value generated by the `np.random.randn` call will
be inserted in the traced/staged TensorFlow graph as a constant. In this
particular example, replacing `np.random.randn(5, 5)` with
`tf.random.normal((5, 5))` will result in the same behavior for `add_noise()`
and `traced()`.
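To make the difference concrete (a sketch using `add_noise` and `traced` from above; exact values are random):

```python
a = traced()
b = traced()
# The numpy noise was baked into the graph at trace time, so both calls agree.
assert (a.numpy() == b.numpy()).all()

# add_noise() itself draws fresh numpy noise on every eager call.
assert not (add_noise().numpy() == add_noise().numpy()).all()
```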

_Python Side-Effects_

A corollary of the previous discussion on tracing is the following: If a
Python function `func` has Python side-effects, then executing `func` multiple
times may not be semantically equivalent to executing `F = tf.function(func)`
multiple times; this difference is due to the fact that `function` only
captures the subgraph of TensorFlow operations that is constructed when `func`
is invoked to trace a graph.

The same is true if code with Python side effects is used inside control flow,
such as a loop. If your code uses side effects that are not intended to
control graph construction, wrap them inside `tf.compat.v1.py_func`.
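A common symptom (a minimal sketch): a Python `print` inside `func` fires only while tracing, while `tf.print` runs inside the graph on every call.

```python
@tf.function
def f(x):
    print('python side effect')  # runs at trace time only
    tf.print('tf side effect')   # runs on every call, inside the graph
    return x + 1

f(tf.constant(1))  # prints both lines (first call triggers tracing)
f(tf.constant(2))  # prints only 'tf side effect'
```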

_Retracing_

A single tf.function object might need to map to multiple computation graphs
under the hood. This should be visible only as performance (tracing graphs has
a nonzero computational and memory cost) but should not affect the correctness
of the program. A traced function should return the same result as it would
when run eagerly, assuming no unintended Python side-effects.

Calling a `tf.function` with tensor arguments of different dtypes should lead
to at least one computational graph per distinct set of dtypes. Alternatively,
always calling a `tf.function` with tensor arguments of the same shapes and
dtypes and the same non-tensor arguments should not lead to additional
retracings of your function.

Other than that, TensorFlow reserves the right to retrace functions as many
times as needed, to ensure that traced functions behave as they would when run
eagerly and to provide the best end-to-end performance. For example, the
behavior of how many traces TensorFlow will do when the function is repeatedly
called with different python scalars as arguments is left undefined to allow
for future optimizations.

To control the tracing behavior, use the following tools:
 - different `tf.function` objects are guaranteed to not share traces; and
 - specifying a signature or using concrete function objects returned from
   get_concrete_function() guarantees that only one function graph will be
   built.

Args:
  func: function to be compiled. If `func` is None, returns a decorator that
    can be invoked with a single argument - `func`. The end result is
    equivalent to providing all the arguments up front. In other words,
    `tf.function(input_signature=...)(func)` is equivalent to
    `tf.function(func, input_signature=...)`. The former can be used to
    decorate Python functions, for example:
      @tf.function(input_signature=...)
      def foo(...): ...
  input_signature: A possibly nested sequence of `tf.TensorSpec` objects
    specifying the shapes and dtypes of the Tensors that will be supplied to
    this function. If `None`, a separate function is instantiated for each
    inferred input signature.  If input_signature is specified, every input to
    `func` must be a `Tensor`, and `func` cannot accept `**kwargs`.
  autograph: Whether autograph should be applied on `func` before tracing a
    graph. This allows for dynamic control flow (Python if's, loops etc.)
    in the traced graph. See https://www.tensorflow.org/guide/autograph for
      more information.
  experimental_autograph_options: Experimental knobs (in the form of a tuple
    of tensorflow.autograph.Feature values) to control behavior when
    autograph=True.
  experimental_relax_shapes: When true, argument shapes may be relaxed to
    avoid unnecessary retracing.

Returns:
   If `func` is not None, returns a callable that will execute the compiled
   function (and return zero or more `tf.Tensor` objects).
   If `func` is None, returns a decorator that, when invoked with a single
   `func` argument, returns a callable equivalent to the case above.

Raises:
  TypeError: If `input_signature` is neither `None` nor a sequence of
    `TensorSpec` objects.