TensorFlow's Variable and get_variable

    TensorFlow provides two main ways to create variables: tf.Variable() (capital V) and tf.get_variable().

    1. tf.Variable()

    (1) Each call creates a distinct variable, even when the same name is passed; under the hood TensorFlow generates a unique name for each one.

import tensorflow as tf

var1 = tf.Variable(name='var', initial_value=[2], dtype=tf.float32)
var2 = tf.Variable(name='var', initial_value=[2], dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(var1.name, sess.run(var1))
    print(var2.name, sess.run(var2))

# Output:
# var:0 [ 2.]
# var_1:0 [ 2.]

    (2) It is affected by tf.name_scope, i.e. the scope's name is prepended to the variable name.

with tf.name_scope('var_a_scope'):
    var1 = tf.Variable(name='var', initial_value=[2], dtype=tf.float32)
    var2 = tf.Variable(name='var', initial_value=[2], dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(var1.name, sess.run(var1))
    print(var2.name, sess.run(var2))

# Output:
# var_a_scope/var:0 [ 2.]
# var_a_scope/var_1:0 [ 2.]

    (3) Variable() takes its initialization directly at creation time

    You can give a value and dtype directly, as with initial_value=[2] above, or pass in a random-number generator function or another tensor. The main random-number generator functions in tf are:

    tf.random_normal (normal distribution), tf.truncated_normal (normal distribution truncated at two standard deviations), tf.random_uniform (uniform distribution), tf.random_gamma (gamma distribution).

    Constant-value generator functions:

    tf.zeros (all zeros), tf.ones (all ones), tf.fill (a given value), tf.constant (a given constant).

    The initial value of another variable can also be used as the initial value:

var2 = tf.Variable(var1.initialized_value())
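As a quick sketch of these initialization options (written against the tf.compat.v1 shim so it also runs on TF 2.x; on TF 1.x a plain `import tensorflow as tf` works the same):

```python
import tensorflow.compat.v1 as tf  # on TF 1.x: import tensorflow as tf
tf.disable_v2_behavior()

# Seed a variable from a random-number generator function...
w = tf.Variable(tf.random_normal([2, 3], stddev=1.0), name='w')
# ...or from a constant-value generator function
b = tf.Variable(tf.zeros([3]), name='b')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(w.name, sess.run(w).shape)  # w:0 (2, 3)
    print(b.name, sess.run(b))        # b:0 [0. 0. 0.]
```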

    2. tf.get_variable()

    (1) Only one variable is created per name. To share it you must set reuse=True; otherwise creating it a second time raises an error.

import tensorflow as tf

def func(x):
    # get_variable takes the name first, then the shape; the initialization
    # is passed via the `initializer` argument, not as a tensor
    weight = tf.get_variable("weight", [1], initializer=tf.random_normal_initializer())
    bias = tf.get_variable("bias", [1], initializer=tf.zeros_initializer())
    return tf.add(tf.multiply(weight, x), bias)

result1 = func(1)
result2 = func(2)  # second call tries to create "weight" again

# Raises: ValueError: Variable weight already exists, disallowed.

     Use reuse=True (omitted on first creation, declared when sharing afterwards):

with tf.variable_scope('foo'):
    v = tf.get_variable('v', [1])
with tf.variable_scope('foo', reuse=True):
    v1 = tf.get_variable('v')
print(v.name)   # foo/v:0
print(v1.name)  # foo/v:0
assert v == v1

       The sharing attribute of a scope can also be switched on dynamically:

import numpy as np
import tensorflow as tf

def my_image_filter(input_images):
    conv1_weights = tf.get_variable(name="conv1_weights", shape=[5, 5, 3, 3], dtype=tf.float32,
                                    initializer=tf.truncated_normal_initializer())
    conv1_biases = tf.get_variable(name='conv1_biases', shape=[3], dtype=tf.float32,
                                   initializer=tf.zeros_initializer())
    conv1 = tf.nn.conv2d(input_images, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv1 + conv1_biases)

image1 = np.random.random(3*5*5).reshape(1, 5, 5, 3).astype(np.float32)
image2 = np.random.random(3*5*5).reshape(1, 5, 5, 3).astype(np.float32)
with tf.variable_scope("image_filters") as scope:
    result1 = my_image_filter(image1)
    scope.reuse_variables()  # or: tf.get_variable_scope().reuse_variables()
    result2 = my_image_filter(image2)

    (2) It is not affected by with tf.name_scope (note: name_scope, not variable_scope; both tf.Variable and tf.get_variable are affected by tf.variable_scope)

with tf.name_scope('var_a_scope'):
    var1 = tf.get_variable('var1', [1])
print(var1.name)

# Output:
# var1:0

    (3) At creation, the initialization method is specified by passing an initializer. The options mirror those in point (3) of the previous section, except the function names carry an _initializer suffix and take no shape argument

conv1_weights = tf.get_variable(name="conv1_weights", shape=[5, 5, 3, 3], dtype=tf.float32,
                                initializer=tf.truncated_normal_initializer())
conv1_biases = tf.get_variable(name='conv1_biases', shape=[3], dtype=tf.float32,
                               initializer=tf.zeros_initializer())

    (4) Nested with tf.variable_scope('scope_name') blocks accumulate: each enclosing scope adds its prefix to every variable inside it, with the outermost (first-entered) scope in front.

import numpy as np
import tensorflow as tf

def my_image_filter(input_images):
    with tf.variable_scope('scope_a'):
        conv1_weights = tf.get_variable(name="conv1_weights", shape=[5, 5, 3, 3], dtype=tf.float32,
                                        initializer=tf.truncated_normal_initializer())
        conv1_biases = tf.get_variable(name='conv1_biases', shape=[3], dtype=tf.float32,
                                       initializer=tf.zeros_initializer())
        conv1 = tf.nn.conv2d(input_images, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
        print(conv1_weights.name)
        return tf.nn.relu(conv1 + conv1_biases)

image1 = np.random.random(3*5*5).reshape(1, 5, 5, 3).astype(np.float32)
with tf.variable_scope("image_filters") as scope:
    result1 = my_image_filter(image1)
    print(result1.name)

# Output:
# image_filters/scope_a/conv1_weights:0
# image_filters/scope_a/Relu:0

    Let's walk through an example.

    First, define a conv_relu node:

def conv_relu(input, kernel_shape, bias_shape):
    weights = tf.get_variable("weights", kernel_shape,
        initializer=tf.random_normal_initializer())
    biases = tf.get_variable("biases", bias_shape,
        initializer=tf.constant_initializer(0.0))
    conv = tf.nn.conv2d(input, weights,
        strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv + biases)

    Then, build a simple network out of that node (note: each call gets its own scope, so the two sets of variables are kept apart):

def my_image_filter(input_images):
    with tf.variable_scope("conv1"):
        # Variables created here will be named "conv1/weights", "conv1/biases".
        relu1 = conv_relu(input_images, [3, 3, 32, 32], [32])
    with tf.variable_scope("conv2"):
        # Variables created here will be named "conv2/weights", "conv2/biases".
        return conv_relu(relu1, [3, 3, 32, 32], [32])

    Finally, use the network for computation:

def compute_op():
    img1 = tf.Variable(tf.random_normal([1, 5, 5, 32]))
    img2 = tf.Variable(tf.random_normal([1, 5, 5, 32]))
    with tf.variable_scope('filter') as scope:
        result1 = my_image_filter(img1)
        scope.reuse_variables()
        result2 = my_image_filter(img2)

    Here reuse_variables() must be set before the second call; otherwise it raises "Variable conv1/weights already exists".
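Since TF 1.4, reuse=tf.AUTO_REUSE offers an alternative to toggling reuse manually: the variable is created on the first call and silently reused afterwards. A minimal sketch (the compat-shim import is only needed on TF 2.x):

```python
import tensorflow.compat.v1 as tf  # on TF 1.x: import tensorflow as tf
tf.disable_v2_behavior()

def linear(x):
    # AUTO_REUSE: creates 'shared/w' on the first call, reuses it on later calls
    with tf.variable_scope('shared', reuse=tf.AUTO_REUSE):
        w = tf.get_variable('w', [1], initializer=tf.ones_initializer())
    return w * x

y1 = linear(1.0)
y2 = linear(2.0)

# Both calls resolve to the same underlying variable
print([v.name for v in tf.global_variables()])  # ['shared/w:0']
```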

    
