Differences between TensorFlow 1.x and TensorFlow 2.0

Source: Stanford cs231n

Historical background on TensorFlow 1.x


TensorFlow 1.x is primarily a framework for working with static computational graphs. Nodes in the computational graph are Tensors, which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on those Tensors to actually perform useful computation when the graph is run.

Before TensorFlow 2.0, we had to build and run the graph in two separate phases. There are plenty of tutorials online that explain this two-step process. For TF 1.x, the process generally looks like the following:

  1. Build a computational graph that describes the computation that you want to perform. This stage doesn’t actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more placeholder objects that represent inputs to the computational graph.
  2. Run the computational graph many times. Each time the graph is run (e.g. for one gradient descent step) you will specify which parts of the graph you want to compute, and pass a feed_dict dictionary that will give concrete values to any placeholders in the graph. A minimal sketch of this two-phase workflow is shown below.

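Below is a minimal sketch of the two-phase workflow, assuming TensorFlow 1.x (the names a, b, and total are purely illustrative, not part of the cs231n material):

import tensorflow as tf

# Phase 1: build the graph. Nothing is computed here; a, b, and total are
# symbolic nodes that merely describe the computation.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
total = a + b

# Phase 2: run the graph, feeding concrete values for the placeholders.
with tf.Session() as sess:
    print(sess.run(total, feed_dict={a: 2.0, b: 3.0}))  # prints 5.0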

The new paradigm in TensorFlow 2.0


Now, with TensorFlow 2.0, we can simply adopt a functional style that is more Pythonic and similar in spirit to PyTorch and direct NumPy operations, instead of the two-step paradigm with computation graphs. This makes it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.

The main difference between the TF 1.x and 2.0 approaches is that the 2.0 approach doesn't make use of tf.Session, tf.run, placeholder, or feed_dict. For more details on what's different between the two versions and how to convert between them, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guide

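As a minimal sketch of the eager style (assuming TensorFlow 2.x), the same kind of computation runs immediately, with no separate graph-building phase:

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
total = a + b         # executes immediately; no Session or feed_dict needed
print(total.numpy())  # prints 5.0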

A simple example: the flatten function

TF 1.x

# This block assumes TensorFlow 1.x (tf.placeholder and tf.Session do not
# exist in TF 2.x's default API).
import numpy as np
import tensorflow as tf

# The cs231n notebook defines `device` elsewhere; '/cpu:0' is assumed here.
# Change to '/device:GPU:0' to place the Tensors on a GPU instead.
device = '/cpu:0'

def flatten(x):
    """
    Input:
    - TensorFlow Tensor of shape (N, D1, ..., DM)

    Output:
    - TensorFlow Tensor of shape (N, D1 * ... * DM)
    """
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

def test_flatten():
    # Clear the current TensorFlow graph.
    tf.reset_default_graph()
    
    # Stage I: Define the TensorFlow graph describing our computation.
    # In this case the computation is trivial: we just want to flatten
    # a Tensor using the flatten function defined above.
    
    # Our computation will have a single input, x. We don't know its
    # value yet, so we define a placeholder which will hold the value
    # when the graph is run. We then pass this placeholder Tensor to
    # the flatten function; this gives us a new Tensor which will hold
    # a flattened view of x when the graph is run. The tf.device
    # context manager tells TensorFlow whether to place these Tensors
    # on CPU or GPU.
    with tf.device(device):
        x = tf.placeholder(tf.float32)
        x_flat = flatten(x)
    
    # At this point we have just built the graph describing our computation,
    # but we haven't actually computed anything yet. If we print x and x_flat
    # we see that they don't hold any data; they are just TensorFlow Tensors
    # representing values that will be computed when the graph is run.
    print('x: ', type(x), x)
    print('x_flat: ', type(x_flat), x_flat)
    print()
    
    # We need to use a TensorFlow Session object to actually run the graph.
    with tf.Session() as sess:
        # Construct concrete values of the input data x using numpy
        x_np = np.arange(24).reshape((2, 3, 4))
        print('x_np:\n', x_np, '\n')
    
        # Run our computational graph to compute a concrete output value.
        # The first argument to sess.run tells TensorFlow which Tensor
        # we want it to compute the value of; the feed_dict specifies
        # values to plug into all placeholder nodes in the graph. The
        # resulting value of x_flat is returned from sess.run as a
        # numpy array.
        x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
        print('x_flat_np:\n', x_flat_np, '\n')

        # We can reuse the same graph to perform the same computation
        # with different input data
        x_np = np.arange(12).reshape((2, 3, 2))
        print('x_np:\n', x_np, '\n')
        x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
        print('x_flat_np:\n', x_flat_np)
test_flatten()

TF 2.0

import numpy as np
import tensorflow as tf

def flatten(x):
    """
    Input:
    - TensorFlow Tensor of shape (N, D1, ..., DM)

    Output:
    - TensorFlow Tensor of shape (N, D1 * ... * DM)
    """
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

def test_flatten():
    # Construct concrete values of the input data x using numpy
    x_np = np.arange(24).reshape((2, 3, 4))
    print('x_np:\n', x_np, '\n')
    # Compute a concrete output value. In eager mode this runs immediately;
    # the result is a TensorFlow Tensor (use .numpy() to get a NumPy array).
    x_flat_np = flatten(x_np)
    print('x_flat_np:\n', x_flat_np, '\n')
test_flatten()
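As a side note, TF 2.0 can still trace a Python function into a graph when graph-style execution is desired. The sketch below (assuming TensorFlow 2.x; flatten_graph is just an illustrative name, not part of the cs231n material) wraps the same logic in tf.function:

import numpy as np
import tensorflow as tf

@tf.function  # traces the function into a TF graph on the first call
def flatten_graph(x):
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

x_np = np.arange(24).reshape((2, 3, 4))
print(flatten_graph(x_np).numpy())  # same result as the eager version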