TensorFlow API Functions (1)

1. tf.nn.conv2d

tf.nn.conv2d(
    input,
    filter,
    strides,
    padding,
    use_cudnn_on_gpu=True,
    data_format='NHWC',
    dilations=[1, 1, 1, 1],
    name=None
)
  • input: the input image to convolve. It must be a 4-D Tensor with shape [batch, in_height, in_width, in_channels], i.e. [number of images per training batch, image height, image width, number of channels]. Its dtype must be a floating-point type such as float32 or float64.
  • filter: the convolution kernel in the CNN. It must be a Tensor with shape [filter_height, filter_width, in_channels, out_channels], i.e. [kernel height, kernel width, number of input channels, number of kernels], with the same dtype as input. Note that the third dimension, in_channels, is exactly the fourth dimension of input.
  • strides: the stride of the sliding window along each dimension of the input; a 1-D vector of length 4, typically [1, stride, stride, 1], since the convolution does not stride over the batch or channel dimensions.
  • padding: a string, either "SAME" or "VALID"; this value selects the padding scheme ("SAME" pads the input so that, at stride 1, the output keeps the input's spatial size; "VALID" uses no padding).
  • use_cudnn_on_gpu: bool, whether to use cuDNN acceleration; defaults to True.

The result is a Tensor, the familiar feature map, whose shape again has the form [batch, height, width, channels].
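
A minimal usage sketch, assuming TensorFlow 1.x; the 28x28 RGB input, 5x5 kernel, and batch size of 8 are illustrative choices, not values from the text:

import numpy as np
import tensorflow as tf

# Input: a batch of 28x28 RGB images, shape [batch, in_height, in_width, in_channels]
x = tf.placeholder(tf.float32, [None, 28, 28, 3])
# Kernel: 5x5, mapping 3 input channels to 32 output channels
w = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1))
# Stride 1 in every dimension; 'SAME' padding preserves the spatial size
conv = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    images = np.random.rand(8, 28, 28, 3).astype(np.float32)
    print(sess.run(conv, feed_dict={x: images}).shape)  # (8, 28, 28, 32)

With padding='VALID' the output would instead shrink to (8, 24, 24, 32), since 28 - 5 + 1 = 24.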


2. tf.nn.max_pool

tf.nn.max_pool(
    value,
    ksize,
    strides,
    padding,
    data_format='NHWC',
    name=None
)
  • value: the input to be pooled. A pooling layer usually follows a convolutional layer, so the input is typically a feature map, again with shape [batch, height, width, channels].
  • ksize: the size of the pooling window, a 4-D vector, typically [1, height, width, 1]; because we do not want to pool over the batch or channels dimensions, those two are set to 1.
  • strides: as in convolution, the stride of the window along each dimension, typically [1, stride, stride, 1].
  • padding: as in convolution, either 'VALID' or 'SAME'.

Returns a Tensor of the same dtype, with shape again of the form [batch, height, width, channels].
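
A minimal sketch, assuming TensorFlow 1.x and reusing the illustrative feature-map shape from the conv2d example above: a 2x2 window with stride 2 halves the spatial dimensions.

import numpy as np
import tensorflow as tf

# A feature map such as the conv2d output: [batch, height, width, channels]
fmap = tf.placeholder(tf.float32, [None, 28, 28, 32])
# 2x2 window, stride 2; the batch and channels dimensions are left untouched
pool = tf.nn.max_pool(fmap, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

with tf.Session() as sess:
    data = np.random.rand(8, 28, 28, 32).astype(np.float32)
    print(sess.run(pool, feed_dict={fmap: data}).shape)  # (8, 14, 14, 32)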


3. tf.nn.relu

tf.nn.relu(
    features,
    name=None
)
  • features: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64, qint8.
  • name: A name for the operation (optional).

tf.nn.relu() keeps values greater than 0 unchanged and sets values less than 0 to 0.

Returns:

A Tensor. Has the same type as features.
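
A minimal sketch (TensorFlow 1.x session style assumed) showing the element-wise thresholding:

import tensorflow as tf

z = tf.constant([-3.0, -1.0, 0.0, 2.0, 5.0])
a = tf.nn.relu(z)  # element-wise max(z, 0)

with tf.Session() as sess:
    print(sess.run(a))  # [0. 0. 0. 2. 5.]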


4. tf.contrib.layers.flatten

tf.contrib.layers.flatten(
    inputs,
    outputs_collections=None,
    scope=None
)


Flattens the input while maintaining the batch_size.

Assumes that the first dimension represents the batch.

  • inputs: A tensor of size [batch_size, ...].
  • outputs_collections: Collection to add the outputs.
  • scope: Optional scope for name_scope.

Returns:

A flattened tensor with shape [batch_size, k].

Given an input P, this function keeps the first (batch) dimension and unrolls each sample's sub-tensor into a one-dimensional row vector, returning a 2-D tensor of shape (batch_size, k). It is commonly used to preprocess feature maps before the fully connected layers of a convolutional network.
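
A minimal sketch, assuming TensorFlow 1.x with tf.contrib available; the [batch, 7, 7, 64] feature-map shape is an illustrative choice:

import tensorflow as tf

# e.g. a pooled feature map of shape [batch, 7, 7, 64]
p = tf.placeholder(tf.float32, [None, 7, 7, 64])
flat = tf.contrib.layers.flatten(p)
print(flat.get_shape())  # (?, 3136): each sample unrolled to 7 * 7 * 64 values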


5. tf.contrib.layers.fully_connected

tf.contrib.layers.fully_connected(
    inputs,
    num_outputs,
    activation_fn=tf.nn.relu,
    normalizer_fn=None,
    normalizer_params=None,
    weights_initializer=initializers.xavier_initializer(),
    weights_regularizer=None,
    biases_initializer=tf.zeros_initializer(),
    biases_regularizer=None,
    reuse=None,
    variables_collections=None,
    outputs_collections=None,
    trainable=True,
    scope=None
)


Adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.

  • inputs: A tensor of at least rank 2 and static value for the last dimension; i.e. [batch_size, depth] or [None, None, None, channels].
  • num_outputs: Integer or long, the number of output units in the layer.
  • activation_fn: Activation function. The default value is a ReLU function. Explicitly set it to None to skip it and maintain a linear activation.
  • normalizer_fn: Normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None, meaning no normalizer function.
  • normalizer_params: Normalization function parameters.
  • weights_initializer: An initializer for the weights.
  • weights_regularizer: Optional regularizer for the weights.
  • biases_initializer: An initializer for the biases. If None skip biases.
  • biases_regularizer: Optional regularizer for the biases.
  • reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  • variables_collections: Optional list of collections for all the variables or a dictionary containing a different list of collections per variable.
  • outputs_collections: Collection to add the outputs.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • scope: Optional scope for variable_scope.

Returns:

The tensor variable representing the result of the series of operations.
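
A minimal sketch, assuming TensorFlow 1.x with tf.contrib available; the layer sizes (3136 inputs, 128 hidden units, 10 outputs) are illustrative and continue the flatten example above:

import tensorflow as tf

flat = tf.placeholder(tf.float32, [None, 3136])
# Hidden layer: weights plus biases, then the default ReLU activation
hidden = tf.contrib.layers.fully_connected(flat, num_outputs=128)
# Output layer: activation_fn=None keeps the logits linear,
# e.g. to feed into a softmax cross-entropy loss
logits = tf.contrib.layers.fully_connected(hidden, num_outputs=10, activation_fn=None)
print(logits.get_shape())  # (?, 10)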
