resnet152 training: prediction and training with TensorFlow's pretrained resnet_v2_50, resnet_v2_101, and resnet_v2_152 models


The following uses resnet_v2_101 as an example:

Here, resnet_utils from nets has been merged into this single file.

resnet_v2.py

# coding:utf-8
# Import the required libraries
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import tensorflow as tf

slim = tf.contrib.slim

class Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):
  """A named tuple describing a ResNet block.

  Its parts are:
    scope: The scope of the `Block`.
    unit_fn: The ResNet unit function which takes as input a `Tensor` and
      returns another `Tensor` with the output of the ResNet unit.
    args: A list of length equal to the number of units in the `Block`. The list
      contains one (depth, depth_bottleneck, stride) tuple for each unit in the
      block to serve as argument to unit_fn.
  """

def subsample(inputs, factor, scope=None):
  """Subsamples the input along the spatial dimensions.

  Args:
    inputs: A `Tensor` of size [batch, height_in, width_in, channels].
    factor: The subsampling factor.
    scope: Optional variable_scope.

  Returns:
    output: A `Tensor` of size [batch, height_out, width_out, channels] with the
      input, either intact (if factor == 1) or subsampled (if factor > 1).
  """
  if factor == 1:
    return inputs
  else:
    return slim.max_pool2d(inputs, [1, 1], stride=factor, scope=scope)
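Because the pooling kernel is 1x1, `subsample` simply keeps every `factor`-th activation in each spatial dimension. A pure-Python sketch of the same effect on a nested list standing in for the height/width axes:

```python
def subsample_grid(grid, factor):
    """Keep every `factor`-th row and column; a 1x1 max-pool with
    stride=factor picks single elements in exactly this pattern."""
    if factor == 1:
        return grid
    return [row[::factor] for row in grid[::factor]]

# A 4x4 "feature map" numbered 0..15 row by row.
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
print(subsample_grid(grid, 2))  # [[0, 2], [8, 10]]
```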

def conv2d_same(inputs, num_outputs, kernel_size, stride, rate=1, scope=None):
  """Strided 2-D convolution with 'SAME' padding.

  When stride > 1, then we do explicit zero-padding, followed by conv2d with
  'VALID' padding.

  Note that

     net = conv2d_same(inputs, num_outputs, 3, stride=stride)

  is equivalent to

     net = slim.conv2d(inputs, num_outputs, 3, stride=1, padding='SAME')
     net = subsample(net, factor=stride)

  whereas

     net = slim.conv2d(inputs, num_outputs, 3, stride=stride, padding='SAME')

  is different when the input's height or width is even, which is why we add
  the current function. For more details, see ResnetUtilsTest.testConv2DSameEven().

  Args:
    inputs: A 4-D tensor of size [batch, height_in, width_in, channels].
    num_outputs: An integer, the number of output filters.
    kernel_size: An int with the kernel_size of the filters.
    stride: An integer, the output stride.
    rate: An integer, rate for atrous convolution.
    scope: Scope.

  Returns:
    output: A 4-D tensor of size [batch, height_out, width_out, channels] with
      the convolution output.
  """
  if stride == 1:
    return slim.conv2d(inputs, num_outputs, kernel_size, stride=1, rate=rate,
                       padding='SAME', scope=scope)
  else:
    kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
    pad_total = kernel_size_effective - 1
    pad_beg = pad_total // 2
    pad_end = pad_total - pad_beg
    inputs = tf.pad(inputs,
                    [[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
    return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride,
                       rate=rate, padding='VALID', scope=scope)
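The padding arithmetic in the `else` branch can be checked by hand. The helpers below (illustrative only) reproduce it and show that the explicitly padded VALID convolution produces the 'SAME'-style output size ceil(size_in / stride):

```python
def same_padding(kernel_size, rate):
    # Effective kernel size after dilating the kernel by `rate`.
    kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
    pad_total = kernel_size_effective - 1
    pad_beg = pad_total // 2
    pad_end = pad_total - pad_beg
    return pad_beg, pad_end

def valid_conv_out_size(size_in, kernel_size, stride, rate=1):
    pad_beg, pad_end = same_padding(kernel_size, rate)
    kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
    # Output size of a VALID convolution on the explicitly padded input.
    return (size_in + pad_beg + pad_end - kernel_size_effective) // stride + 1

print(same_padding(3, 1))              # (1, 1)
print(same_padding(3, 2))              # (2, 2) -- effective kernel size 5
print(valid_conv_out_size(224, 3, 2))  # 112 == ceil(224 / 2)
```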

@slim.add_arg_scope
def stack_blocks_dense(net, blocks, output_stride=None,
                       outputs_collections=None):
  """Stacks ResNet `Blocks` and controls output feature density.

  First, this function creates scopes for the ResNet in the form of
  'block_name/unit_1', 'block_name/unit_2', etc.

  Second, this function allows the user to explicitly control the ResNet
  output_stride, which is the ratio of the input to output spatial resolution.
  This is useful for dense prediction tasks such as semantic segmentation or
  object detection.

  Most ResNets consist of 4 ResNet blocks and subsample the activations by a
  factor of 2 when transitioning between consecutive ResNet blocks. This
  results in a nominal ResNet output_stride equal to 8. If we set the
  output_stride to half the nominal network stride (e.g., output_stride=4),
  then we compute responses twice.

  Control of the output feature density is implemented by atrous convolution.

  Args:
    net: A `Tensor` of size [batch, height, width, channels].
    blocks: A list of length equal to the number of ResNet `Blocks`. Each
      element is a ResNet `Block` object describing the units in the `Block`.
    output_stride: If `None`, then the output will be computed at the nominal
      network stride. If output_stride is not `None`, it specifies the requested
      ratio of input to output spatial resolution, which needs to be equal to
      the product of unit strides from the start up to some level of the ResNet.
      For example, if the ResNet employs units with strides 1, 2, 1, 3, 4, 1,
      then valid values for the output_stride are 1, 2, 6, 24 or None (which
      is equivalent to output_stride=24).
    outputs_collections: Collection to add the ResNet block outputs.

  Returns:
    net: Output tensor with stride equal to the specified output_stride.

  Raises:
    ValueError: If the target output_stride is not valid.
  """
  # The current_stride variable keeps track of the effective stride of the
  # activations. This allows us to invoke atrous convolution whenever applying
  # the next residual unit would result in the activations having stride larger
  # than the target output_stride.
  current_stride = 1

  # The atrous convolution rate parameter.
  rate = 1

  for block in blocks:
    with tf.variable_scope(block.scope, 'block', [net]) as sc:
      for i, unit in enumerate(block.args):
        if output_stride is not None and current_stride > output_stride:
          raise ValueError('The target output_stride cannot be reached.')

        with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
          unit_depth, unit_depth_bottleneck, unit_stride = unit

          # If we have reached the target output_stride, then we need to employ
          # atrous convolution with stride=1 and multiply the atrous rate by the
          # current unit's stride for use in subsequent layers.
          if output_stride is not None and current_stride == output_stride:
            net = block.unit_fn(net,
                                depth=unit_depth,
                                depth_bottleneck=unit_depth_bottleneck,
                                stride=1,
                                rate=rate)
            rate *= unit_stride
          else:
            net = block.unit_fn(net,
                                depth=unit_depth,
                                depth_bottleneck=unit_depth_bottleneck,
                                stride=unit_stride,
                                rate=1)
            current_stride *= unit_stride
      net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)

  if output_stride is not None and current_stride != output_stride:
    raise ValueError('The target output_stride cannot be reached.')

  return net
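The interplay between `current_stride` and `rate` can be simulated without TensorFlow. The sketch below is a simplified model (ignoring depths and scopes, names are my own) that applies the same bookkeeping to a list of unit strides:

```python
def simulate_output_stride(unit_strides, output_stride=None):
    """Mimic stack_blocks_dense's stride/rate bookkeeping for a flat
    list of unit strides. Returns (effective stride, final atrous rate)."""
    current_stride = 1
    rate = 1
    for stride in unit_strides:
        if output_stride is not None and current_stride > output_stride:
            raise ValueError('The target output_stride cannot be reached.')
        if output_stride is not None and current_stride == output_stride:
            # Target reached: the unit runs atrous at stride 1, and its
            # stride is folded into the rate for subsequent layers.
            rate *= stride
        else:
            current_stride *= stride
    if output_stride is not None and current_stride != output_stride:
        raise ValueError('The target output_stride cannot be reached.')
    return current_stride, rate

# Nominal network: unit strides 2, 2, 2 -> effective stride 8, rate stays 1.
print(simulate_output_stride([2, 2, 2]))                   # (8, 1)
# Requesting output_stride=4: the last unit switches to atrous convolution,
# keeping the resolution at 1/4 while the rate doubles to 2.
print(simulate_output_stride([2, 2, 2], output_stride=4))  # (4, 2)
```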
