ValueError: Variable train/tower_01/block_1/conv1_1/weights does not exist, or was not created with tf.get_variable()

The error:

ValueError: Variable train/tower_01/block_1/conv1_1/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?

Full traceback:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-40-e649874e1c91> in <module>()
      1 config=tf.ConfigProto(allow_soft_placement=True)
      2                       #log_device_placement=True) # substantial output b/c big model
----> 3 multi_vgg = Multi_VGG_Model(4, config=config)
      4 multi_vgg

<ipython-input-39-7e0bd3c4d61e> in __init__(s, num_towers, config)
     17                                                 staircase=True)
     18                 s.opt = tf.train.AdamOptimizer(lr)
---> 19                 s.tower_grads = [s._build_tower('tower_{:02d}'.format(i), i==0) for i in range(num_towers)]
     20                 s.avg_gradients = process_gradient_list(s.tower_grads)
     21                 s.train = s.opt.apply_gradients(s.avg_gradients)

<ipython-input-39-7e0bd3c4d61e> in <listcomp>(.0)
     17                                                 staircase=True)
     18                 s.opt = tf.train.AdamOptimizer(lr)
---> 19                 s.tower_grads = [s._build_tower('tower_{:02d}'.format(i), i==0) for i in range(num_towers)]
     20                 s.avg_gradients = process_gradient_list(s.tower_grads)
     21                 s.train = s.opt.apply_gradients(s.avg_gradients)

<ipython-input-39-7e0bd3c4d61e> in _build_tower(s, name, first_tower, device)
     56                 print(s.image_batch)
     57                 print('11111111111111111')
---> 58                 logits = vgg_net(s.image_batch, 0.5)
     59                 print(logits)
     60 

<ipython-input-37-ea9fbae6f8b8> in vgg_net(images, keep_prob)
     38     # First block: two conv3-64 layers
     39     with tf.name_scope('block_1'):
---> 40         conv1_1 = conv_op(images, filter_size=3, channel_out=64, step=1, name='conv1_1')
     41         conv1_2 = conv_op(conv1_1, filter_size=3, channel_out=64, step=1, name='conv1_2')
     42         pool1 = maxPool_op(conv1_2, filter_size=2, step=2, name='pooling_1')

<ipython-input-37-ea9fbae6f8b8> in conv_op(input_op, filter_size, channel_out, step, name)
     12         print(scope + 'weights')
     13         weights = tf.get_variable(shape=[filter_size, filter_size, channel_in, channel_out], dtype=tf.float32,
---> 14                                   initializer=xavier_initializer_conv2d(), name=scope + 'weights')
     15         biases = tf.Variable(tf.constant(value=0.0, shape=[channel_out], dtype=tf.float32),
     16                              trainable=True, name='biases')

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter, constraint, synchronization, aggregation)
   1494       constraint=constraint,
   1495       synchronization=synchronization,
-> 1496       aggregation=aggregation)
   1497 
   1498 

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter, constraint, synchronization, aggregation)
   1237           constraint=constraint,
   1238           synchronization=synchronization,
-> 1239           aggregation=aggregation)
   1240 
   1241   def _get_partitioned_variable(self,

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter, constraint, synchronization, aggregation)
    560           constraint=constraint,
    561           synchronization=synchronization,
--> 562           aggregation=aggregation)
    563 
    564   def _get_partitioned_variable(self,

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, constraint, synchronization, aggregation)
    512           constraint=constraint,
    513           synchronization=synchronization,
--> 514           aggregation=aggregation)
    515 
    516     synchronization, aggregation, trainable = (

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource, constraint, synchronization, aggregation)
    880       raise ValueError("Variable %s does not exist, or was not created with "
    881                        "tf.get_variable(). Did you mean to set "
--> 882                        "reuse=tf.AUTO_REUSE in VarScope?" % name)
    883 
    884     # Create the tensor to initialize the variable with default value.

ValueError: Variable train/tower_01/block_1/conv1_1/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
Solution approach: print the variable path.

print(scope + 'weights')

This shows that the path train/tower_01/block_1/conv1_1/weights does not exist. Trace back where that path comes from, and change it to a path that does exist.
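In a multi-tower model like this, the usual root cause is that later towers look for variables under a prefix that tower 0 never created them under: tf.name_scope changes op names but is ignored by tf.get_variable, so weights must be created inside tf.variable_scope, with the outer scope marked reuse=tf.AUTO_REUSE so that tower 1 reuses tower 0's variables instead of trying (and failing) to find new ones. A minimal sketch of that sharing pattern, written against the tf.compat.v1 API (the scope and layer names here are illustrative, not the author's exact code):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def conv_op(x, filter_size, channel_out, name):
    channel_in = int(x.shape[-1])
    # tf.variable_scope (not tf.name_scope): tf.get_variable only sees
    # variable scopes, so every tower resolves the same variable path.
    with tf.variable_scope(name):
        weights = tf.get_variable(
            'weights',
            shape=[filter_size, filter_size, channel_in, channel_out],
            dtype=tf.float32,
            initializer=tf.glorot_uniform_initializer())
        return tf.nn.conv2d(x, weights, strides=[1, 1, 1, 1], padding='SAME')

images = tf.placeholder(tf.float32, [None, 32, 32, 3])

# reuse=tf.AUTO_REUSE: tower 0 creates the variables, later towers reuse them.
with tf.variable_scope('train', reuse=tf.AUTO_REUSE):
    for i in range(2):
        # name_scope only prefixes the ops, so both towers share one weight.
        with tf.name_scope('tower_{:02d}'.format(i)):
            out = conv_op(images, filter_size=3, channel_out=64, name='conv1_1')

print([v.name for v in tf.trainable_variables()])
# -> ['train/conv1_1/weights:0']  (a single shared variable, not one per tower)
```

Because tf.get_variable ignores name scopes, both towers resolve to train/conv1_1/weights, and AUTO_REUSE turns the second lookup into a reuse instead of the ValueError above.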
