Python moving_averages.assign_moving_average method: code examples

This article walks through TensorFlow's `moving_averages.assign_moving_average` method in Python, with several practical code examples covering batch normalization, moving-average updates, and adaptive max-norm estimation. The examples show how the method is used to update a variable's moving average during both training and inference.

This article collects typical usage examples of the Python method tensorflow.python.training.moving_averages.assign_moving_average. If you are unsure how to use moving_averages.assign_moving_average, or are looking for concrete examples of it in practice, the curated code samples below should help. You can also explore further usage of the containing module, tensorflow.python.training.moving_averages.

Below are 20 code examples of moving_averages.assign_moving_average, sorted by popularity by default.
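Before the examples, here is a minimal sketch of what the call does, assuming TensorFlow 1.x as in the examples below (the variable names are illustrative, not taken from any example): assign_moving_average(variable, value, decay) returns an op that assigns variable := decay * variable + (1 - decay) * value.

import tensorflow as tf
from tensorflow.python.training import moving_averages

# Illustrative names; ema_var tracks an exponential moving average of new_value.
ema_var = tf.Variable(0.0, trainable=False, name="ema_var")
new_value = tf.constant(1.0)

# With zero_debias=False the op simply computes
# ema_var := ema_var * decay + new_value * (1 - decay).
update_op = moving_averages.assign_moving_average(
    ema_var, new_value, decay=0.99, zero_debias=False)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update_op)
    print(sess.run(ema_var))  # 0.0 * 0.99 + 1.0 * 0.01 = 0.01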

Example 1: create_and_apply_batch_norm (6 upvotes)

# Required import: from tensorflow.python.training import moving_averages
# Alternatively: from tensorflow.python.training.moving_averages import assign_moving_average
def create_and_apply_batch_norm(self, inp, n_features, decay, tower_setup, scope_name="bn"):
  beta, gamma, moving_mean, moving_var = create_batch_norm_vars(n_features, tower_setup, scope_name)
  self.n_params += 2 * n_features
  if tower_setup.is_main_train_tower:
    assert tower_setup.is_training
  if tower_setup.is_training and not tower_setup.freeze_batchnorm:
    # Training: normalize with batch statistics and update the moving averages.
    xn, batch_mean, batch_var = tf.nn.fused_batch_norm(inp, gamma, beta, epsilon=Layer.BATCH_NORM_EPSILON,
                                                       is_training=True)
    if tower_setup.is_main_train_tower:
      update_op1 = moving_averages.assign_moving_average(
        moving_mean, batch_mean, decay, zero_debias=False, name='mean_ema_op')
      update_op2 = moving_averages.assign_moving_average(
        moving_var, batch_var, decay, zero_debias=False, name='var_ema_op')
      self.update_ops.append(update_op1)
      self.update_ops.append(update_op2)
    return xn
  else:
    # Inference (or frozen batchnorm): normalize with the stored moving statistics.
    xn = tf.nn.batch_normalization(inp, moving_mean, moving_var, beta, gamma, Layer.BATCH_NORM_EPSILON)
    return xn

Author: tobiasfshr, project: MOTSFusion, lines of code: 21
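The update ops appended to self.update_ops above still have to be executed during training. A minimal sketch of one common pattern, assuming TensorFlow 1.x; the variable, loss, and optimizer below are hypothetical stand-ins, not part of MOTSFusion:

import tensorflow as tf

# Hypothetical stand-ins for what the surrounding model would provide.
w = tf.Variable(tf.ones([4]))
loss = tf.reduce_mean(tf.square(w))
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # or e.g. the list filled by create_and_apply_batch_norm

# Grouping the EMA update ops with the optimizer step ensures the
# moving mean/variance are refreshed on every training iteration.
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)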

Example 2: moving_average_update (6 upvotes)

# Required import: from tensorflow.python.training import moving_averages
# Alternatively: from tensorflow.python.training.moving_averages import assign_moving_average
def moving_average_update(x, value, momentum):
    """Compute the moving average of a variable.

    # Arguments
        x: A `Variable`.
        value: A tensor with the same shape as `x`.
        momentum: The moving average momentum.

    # Returns
        An operation to update the variable.
    """
    return moving_averages.assign_moving_average(
        x, value, momentum, zero_debias=True)


# LINEAR ALGEBRA

Author: Relph1119, project: GraphicDesignPatternByPython, lines of code: 18
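Note that this Keras backend wrapper passes zero_debias=True, so the moving average is bias-corrected for its zero initialization. A small sketch of how it behaves, assuming TensorFlow 1.x and using made-up variable names:

import tensorflow as tf
from tensorflow.python.training import moving_averages

def moving_average_update(x, value, momentum):
    return moving_averages.assign_moving_average(x, value, momentum, zero_debias=True)

# Illustrative usage: shadow starts at zero and tracks batch_stat.
shadow = tf.Variable(tf.zeros([3]), trainable=False, name="shadow")
batch_stat = tf.constant([1.0, 2.0, 3.0])
update = moving_average_update(shadow, batch_stat, momentum=0.9)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update)
    # With zero debiasing, the very first update already yields roughly
    # batch_stat itself rather than 0.1 * batch_stat.
    print(sess.run(shadow))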

Example 3: _adaptive_max_norm (6 upvotes)

# Required import: from tensorflow.python.training import moving_averages
# Alternatively: from tensorflow.python.training.moving_averages import assign_moving_average
def _adaptive_max_norm(norm, std_factor, decay, global_step, epsilon, name):
  """Find max_norm given norm and previous average."""
  with vs.variable_scope(name, "AdaptiveMaxNorm", [norm]):
    log_norm = math_ops.log(norm + epsilon)

    def moving_average(name, value, decay):
      moving_average_variable = vs.get_variable(
          name, shape=value.get_shape(), dtype=value.dtype,
          initializer=init_ops.zeros_initializer, trainable=False)
      return moving_averages.assign_moving_average(
          moving_average_variable, value, decay, zero_debias=False)

    # quicker adaptation at the beginning
    if global_step is not None:
      n = math_ops.to_float(global_step)
      decay = math_ops.minimum(decay, n / (n + 1.))

    # update averages
    mean = moving_average("mean", log_norm, decay)
    sq_mean = moving_average("sq_mean", math_ops.square(log_norm), decay)

    variance = sq_mean - math_ops.square(mean)
    std = math_ops.sqrt(math_ops.maximum(epsilon, variance))
    max_norms = math_ops.exp(mean + std_factor * std)
    return max_norms, mean

Author: tobegit3hub, project: deep_image_model, lines of code: 27
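In the surrounding contrib optimizer code, the returned max_norms value drives adaptive gradient clipping. A rough, simplified sketch of that idea; the helper below is an assumption for illustration, not the actual tf.contrib implementation:

import tensorflow as tf

def clip_by_adaptive_max_norm(grads, max_norm):
  """Hypothetical helper: scale gradients so their global norm stays below max_norm."""
  norm = tf.global_norm([g for g in grads if g is not None])
  # Only shrink the gradients when the current norm exceeds the adaptive bound.
  factor = tf.minimum(1.0, max_norm / tf.maximum(norm, 1e-12))
  return [g * factor if g is not None else None for g in grads]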

Example 4: batch_normalization (6 upvotes)

# Required import: from tensorflow.python.training import moving_averages
# Alternatively: from tensorflow.python.training.moving_averages import assign_moving_average
def batch_normalization(incoming, is_training, beta=0.0, gamma=1.0, epsilon=1e-5, decay=0.9):
  shape = incoming.get_shape()
  dimensions_num = len(shape)
  axis = list(range(dimensions_num - 1))
  with tf.variable_scope('batchnorm'):
    beta = tf.Variable(initial_value=tf.ones(shape=[shape[-1]]) * beta, name='beta')
    gamma = tf.Variable(initial_value=tf.ones(shape=[shape[-1]]) * gamma, name='gamma')
    moving_mean = tf.Variable(initial_value=tf.zeros(shape=shape[-1:]), trainable=False, name='moving_mean')
    moving_variance = tf.Variable(initial_value=tf.zeros(shape=shape[-1:]), trainable=False, name='moving_variance')

    def update_mean_var():
      # Use batch statistics and refresh the moving averages as a side effect.
      mean, variance = tf.nn.moments(incoming, axis)
      update_moving_mean = moving_averages.assign_moving_average(moving_mean, mean, decay)
      update_moving_variance = moving_averages.assign_moving_average(moving_variance, variance, decay)
      with tf.control_dependencies([update_moving_mean, update_moving_variance]):
        return tf.identity(mean), tf.identity(variance)

    # During training use (and update) batch statistics; otherwise use the stored moving averages.
    mean, var = tf.cond(is_training, update_mean_var, lambda: (moving_mean, moving_variance))
    inference = tf.nn.batch_normalization(incoming, mean, var, beta, gamma, epsilon)
    inference.set_shape(shape)
    return inference

Author: maxim5, project: time-series-machine-learning, lines of code: 25
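A brief usage sketch for this layer, assuming TensorFlow 1.x; the input shape and feed values are made up. Because the moving-average updates are attached through control_dependencies inside the training branch of tf.cond, evaluating the output in training mode also refreshes the moving statistics:

import numpy as np
import tensorflow as tf

# Illustrative inputs; shapes and names are not from the original project.
inputs = tf.placeholder(tf.float32, shape=[None, 32])
is_training = tf.placeholder_with_default(True, shape=[])

normalized = batch_normalization(inputs, is_training)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training-mode evaluation: normalizes with batch statistics and, via the
    # control dependencies above, also runs the assign_moving_average ops.
    out = sess.run(normalized, feed_dict={inputs: np.random.randn(16, 32).astype(np.float32)})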
