TensorFlow learning: tf.layers.batch_normalization / tf.nn.batch_normalization / tf.contrib.layers.batch_norm

Ways to normalize images. The snippets below (TensorFlow 1.x) compare a hand-written normalization against tf.nn.batch_normalization, tf.layers.batch_normalization and tf.contrib.layers.batch_norm:

import tensorflow as tf
a = tf.constant([1., 2., 3., 4., 7., 5., 8., 4., 6.], shape=(1, 3, 3, 3))  # NHWC; tf.constant fills the remaining entries with the last value
a_mean, a_var = tf.nn.moments(a, axes=[1, 2], keep_dims=True)  # per-image, per-channel statistics
b = tf.rsqrt(a_var)       # 1 / sqrt(var), no epsilon
c = (a - a_mean) * b      # hand-written normalization
# c and d give the same result: each image is normalized with its own per-channel
# statistics (axes=[1, 2]). Only when batch_size is 1 do they also match e and f below.
d = tf.nn.batch_normalization(a, a_mean, a_var, offset=None, scale=None, variance_epsilon=0)

# e and f give the same result: statistics are computed per channel over the whole
# batch (axes=[0, 1, 2]), which is the default behaviour of both layer APIs
e = tf.layers.batch_normalization(a, training=True)
f = tf.contrib.layers.batch_norm(a, is_training=True)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
mean,var=sess.run([a_mean,a_var])
a_value,b_value,c_value,d_value,e_value,f_value=sess.run([a,b,c,d,e,f])

sess.close()
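To see why the results only coincide for batch_size 1, the sketch below (my own variable names, TensorFlow 1.x assumed) repeats the comparison with two images; the per-image statistics (axes=[1, 2]) and the per-channel batch statistics used by tf.layers.batch_normalization now disagree:

import tensorflow as tf
import numpy as np

x = tf.constant(np.random.rand(2, 3, 3, 3), dtype=tf.float32)  # batch of 2 images
x_mean, x_var = tf.nn.moments(x, axes=[1, 2], keep_dims=True)  # per-image, per-channel statistics
per_image = tf.nn.batch_normalization(x, x_mean, x_var, offset=None, scale=None, variance_epsilon=0)
per_batch = tf.layers.batch_normalization(x, training=True)    # per-channel statistics over the whole batch

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    v_image, v_batch = sess.run([per_image, per_batch])
    print(np.allclose(v_image, v_batch, atol=1e-2))            # False: the two normalizations differ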
The same comparison for a 3-D input of shape (1, 2, 3):

import tensorflow as tf
import numpy as np
a = np.array([[5., 8., 2.], [7., 9., 1.]])
a = np.expand_dims(a, axis=0)   # shape (1, 2, 3)

a = tf.constant(a, dtype=tf.float32)
a_mean, a_var = tf.nn.moments(a, axes=[0, 1], keep_dims=True)  # per-feature statistics over the last axis
b = tf.rsqrt(a_var)
c = (a - a_mean) * b                                           # hand-written normalization
d = tf.layers.batch_normalization(a, training=True)            # same statistics, but epsilon defaults to 1e-3
e = tf.nn.batch_normalization(a, a_mean, a_var, offset=None, scale=None, variance_epsilon=0)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
a_value,b_value,c_value,d_value,e_value=sess.run([a,b,c,d,e])
sess.close()
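A quick sanity check on the values above (assuming it runs in the same script, so c_value, d_value and e_value are still in scope; the tolerance only absorbs the default epsilon=1e-3 that tf.layers.batch_normalization adds to the variance):

print(np.allclose(c_value, e_value))             # True: same moments, both with epsilon=0
print(np.allclose(c_value, d_value, atol=1e-2))  # True up to the layer's epsilon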