Notes on TensorFlow's Computational Graph Mechanism and Common Functions

0 Computational graph mechanism


The program is as follows:

# basic computational graph
import numpy as np
import tensorflow as tf

np.random.seed(0)
N, D = 3, 4

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = tf.placeholder(tf.float32)

# Forward pass: c = sum(x * y + z), built symbolically in the graph
a = x * y
b = a + z
c = tf.reduce_sum(b)
# Ask TensorFlow to add gradient nodes for c with respect to each input
grad_x, grad_y, grad_z = tf.gradients(c, [x, y, z])

with tf.Session() as sess:
    values = {
        x: np.random.randn(N, D),
        y: np.random.randn(N, D),
        z: np.random.randn(N, D),
    }
    out = sess.run([c, grad_x, grad_y, grad_z], feed_dict=values)
    c_val, grad_x_val, grad_y_val, grad_z_val = out
    print("c_val = %f" % c_val)  # c is a float scalar, so %f rather than %d
    print("grad_x_val =")
    print(grad_x_val)

1 tf.layers.dense() fully-connected layer

dense(inputs, units, activation=None, use_bias=True, kernel_initializer=None, bias_initializer=<tensorflow.python.ops.init_ops.Zeros object>, kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, trainable=True, name=None, reuse=None)
    Functional interface for the densely-connected layer.

    This layer implements the operation:
    `outputs = activation(inputs * kernel + bias)`
    Where `activation` is the activation function passed as the `activation`
    argument (if not `None`), `kernel` is a weights matrix created by the layer,
    and `bias` is a bias vector created by the layer
    (only if `use_bias` is `True`).

    Note: if the `inputs` tensor has a rank greater than 2, then it is
    flattened prior to the initial matrix multiply by `kernel`.

    Arguments:
      inputs: Tensor input.
      units: Integer or Long, dimensionality of the output space.
      activation: Activation function (callable). Set it to None to maintain a
        linear activation.
      use_bias: Boolean, whether the layer uses a bias.
      kernel_initializer: Initializer function for the weight matrix.
      bias_initializer: Initializer function for the bias.
      kernel_regularizer: Regularizer function for the weight matrix.
      bias_regularizer: Regularizer function for the bias.
      activity_regularizer: Regularizer function for the output.
      trainable: Boolean, if `True` also add variables to the graph collection
        `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
      name: String, the name of the layer.
      reuse: Boolean, whether to reuse the weights of a previous layer
        by the same name.

    Returns:
      Output tensor.
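A minimal usage sketch (my own illustration, not from the docstring above; the layer sizes 784, 128, and 10 are arbitrary example values):

```
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])   # batch of flattened inputs
h = tf.layers.dense(x, units=128, activation=tf.nn.tanh, name="hidden")
logits = tf.layers.dense(h, units=10, name="logits")  # linear output layer

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # dense() created variables
    out = sess.run(logits, feed_dict={x: np.random.randn(2, 784)})
    print(out.shape)  # (2, 10)
```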

2 tf.nn.tanh()

tanh(x, name=None)
    Computes hyperbolic tangent of `x` element-wise.

    Args:
      x: A Tensor or SparseTensor with type `float`, `double`, `int32`,
        `complex64`, `int64`, or `qint32`.
      name: A name for the operation (optional).

    Returns:
      A Tensor or SparseTensor respectively with the same type as `x` if
      `x.dtype != qint32` otherwise the return type is `quint8`.
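A quick check of the element-wise behavior (my own example values):

```
import tensorflow as tf

t = tf.constant([-1.0, 0.0, 1.0])
with tf.Session() as sess:
    print(sess.run(tf.nn.tanh(t)))  # approx [-0.7616  0.  0.7616]
```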

3 tf.reshape()

reshape(tensor, shape, name=None)
    Reshapes a tensor.

    Given `tensor`, this operation returns a tensor that has the same values
    as `tensor` with shape `shape`.

    If one component of `shape` is the special value -1, the size of that dimension
    is computed so that the total size remains constant.  In particular, a `shape`
    of `[-1]` flattens into 1-D.  At most one component of `shape` can be -1.

    If `shape` is 1-D or higher, then the operation returns a tensor with shape
    `shape` filled with the values of `tensor`. In this case, the number of elements
    implied by `shape` must be the same as the number of elements in `tensor`.

    For example:

    ```
    # tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
    # tensor 't' has shape [9]
    reshape(t, [3, 3]) ==> [[1, 2, 3],
                            [4, 5, 6],
                            [7, 8, 9]]

    # tensor 't' is [[[1, 1], [2, 2]],
    #                [[3, 3], [4, 4]]]
    # tensor 't' has shape [2, 2, 2]
    reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
                            [3, 3, 4, 4]]

    # tensor 't' is [[[1, 1, 1],
    #                 [2, 2, 2]],
    #                [[3, 3, 3],
    #                 [4, 4, 4]],
    #                [[5, 5, 5],
    #                 [6, 6, 6]]]
    # tensor 't' has shape [3, 2, 3]
    # pass '[-1]' to flatten 't'
    reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
    ```
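A short runnable sketch of the -1 inference rule (my own example, not from the docstring):

```
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])  # shape [2, 3], 6 elements total
flat = tf.reshape(t, [-1])      # the single -1 is inferred as 6
col = tf.reshape(t, [-1, 1])    # -1 inferred as 6, giving shape [6, 1]
with tf.Session() as sess:
    print(sess.run(flat))       # [1 2 3 4 5 6]
    print(sess.run(col).shape)  # (6, 1)
```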

4 tf.layers.conv2d_transpose()

conv2d_transpose(inputs, filters, kernel_size, strides=(1, 1), padding='valid', data_format='channels_last', activation=None, use_bias=True, kernel_initializer=None, bias_initializer=<tensorflow.python.ops.init_ops.Zeros object>, kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, trainable=True, name=None, reuse=None)
    Functional interface for transposed 2D convolution layer.

    The need for transposed convolutions generally arises
    from the desire to use a transformation going in the opposite direction
    of a normal convolution, i.e., from something that has the shape of the
    output of some convolution to something that has the shape of its input
    while maintaining a connectivity pattern that is compatible with
    said convolution.

    Arguments:
      inputs: Input tensor.
      filters: Integer, the dimensionality of the output space (i.e. the number
        of filters in the convolution).
      kernel_size: A tuple or list of 2 positive integers specifying the spatial
        dimensions of the filters. Can be a single integer to specify the same
        value for all spatial dimensions.
      strides: A tuple or list of 2 positive integers specifying the strides
        of the convolution. Can be a single integer to specify the same value for
        all spatial dimensions.
      padding: one of `"valid"` or `"same"` (case-insensitive).
      data_format: A string, one of `channels_last` (default) or `channels_first`.
        The ordering of the dimensions in the inputs.
        `channels_last` corresponds to inputs with shape
        `(batch, height, width, channels)` while `channels_first` corresponds to
        inputs with shape `(batch, channels, height, width)`.
      activation: Activation function. Set it to `None` to maintain a
        linear activation.
      use_bias: Boolean, whether the layer uses a bias.
      kernel_initializer: An initializer for the convolution kernel.
      bias_initializer: An initializer for the bias vector. If `None`, then no
        bias will be applied.
      kernel_regularizer: Optional regularizer for the convolution kernel.
      bias_regularizer: Optional regularizer for the bias vector.
      activity_regularizer: Regularizer function for the output.
      trainable: Boolean, if `True` also add variables to the graph collection
        `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
      name: String, the name of the layer.
      reuse: Boolean, whether to reuse the weights of a previous layer
        by the same name.

    Returns:
      Output tensor.

The output tensor size of a transposed convolution is computed from the input size as follows (the original figure is lost; these are the standard relations): with padding='same', out = in * stride; with padding='valid', out = (in - 1) * stride + kernel_size.
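A sketch that checks these relations via TensorFlow's static shape inference (the input size 8, kernel size 4, and stride 2 are my own example values):

```
import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 8, 8, 16])  # NHWC input
y_same = tf.layers.conv2d_transpose(x, filters=8, kernel_size=4,
                                    strides=2, padding='same')
y_valid = tf.layers.conv2d_transpose(x, filters=8, kernel_size=4,
                                     strides=2, padding='valid')
print(y_same.shape)   # (1, 16, 16, 8): 8 * 2
print(y_valid.shape)  # (1, 18, 18, 8): (8 - 1) * 2 + 4
```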
