InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'x_test

CNN Bug Log

This bug tormented me for several days... I looked up a lot of fixes online, but none of them matched my problem. Today, as if struck by a flash of inspiration, I seem to have solved it.

My code: a homemade TFRecord-format dataset (training set and test set) used for regression fitting. (I'll post it when I have time. I searched for ages and couldn't find how to build TFRecords for regression; everything online is image recognition and classification. This was my first time learning it, so I fumbled around until it seemed to work. A rough sketch of the idea follows below.)
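Until I post the real thing, here is a sketch of the idea (the feature keys, shapes, and file name are illustrative placeholders, not my exact dataset code): write each float sample and its float label as FloatLists, then read them back with the queue pipeline that the training code below relies on.

import numpy as np
import tensorflow as tf

def write_tfrecord(data, labels, filename):
    # data: [N, 10, 2401, 3] float32, labels: [N, 49] float32 (shapes illustrative)
    writer = tf.python_io.TFRecordWriter(filename)
    for sample, label in zip(data, labels):
        example = tf.train.Example(features=tf.train.Features(feature={
            'data': tf.train.Feature(float_list=tf.train.FloatList(value=sample.flatten().tolist())),
            'label': tf.train.Feature(float_list=tf.train.FloatList(value=label.flatten().tolist())),
        }))
        writer.write(example.SerializeToString())
    writer.close()

def read_tfrecord(filename, batch_size):
    # queue-based reading, matching the Coordinator / start_queue_runners calls used below
    filename_queue = tf.train.string_input_producer([filename])
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)
    features = tf.parse_single_example(serialized, features={
        'data': tf.FixedLenFeature([10 * 2401 * 3], tf.float32),
        'label': tf.FixedLenFeature([49], tf.float32),
    })
    data = tf.reshape(features['data'], [10, 2401, 3])
    label = features['label']
    return tf.train.shuffle_batch([data, label], batch_size=batch_size,
                                  capacity=1000, min_after_dequeue=100)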

1. The bug:

def train():
    """ define training data """
    with tf.name_scope('input'): 
    # Naive me, my plan was: x: training-set data, y: training-set labels; x_test: test-set data, y_test: test-set labels
        x = tf.placeholder(tf.float32,[None,forward_parameters.image_size,forward_parameters.image_size2,forward_parameters.image_channels],'x_input')
        y = tf.placeholder(tf.float32,[None,forward_parameters.output_nodes],'y_input')
        global_step = tf.Variable(0,trainable=False)
        x_test = tf.placeholder(tf.float32, [None, forward_parameters.image_size, forward_parameters.image_size2,forward_parameters.image_channels], 'x_test_input')
        y_test = tf.placeholder(tf.float32, [None, forward_parameters.output_nodes], 'y_test_input')
        
    ......  # the CNN training code goes here
    
    with tf.name_scope('eval_loss'):
        eval_result_accuracy = eval(x_test,y_test)   # call eval here to evaluate on the test set; the forward pass uses reuse=True
        tf.summary.scalar('eval_loss', eval_result_accuracy)

    

The part that was driving me crazy is in the code below:

    # initialize the TensorFlow Saver (model persistence)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        merged = tf.summary.merge_all()
        writer = tf.summary.FileWriter("logs/",sess.graph)
        coord = tf.train.Coordinator() 
        threads = tf.train.start_queue_runners(coord=coord, sess=sess) 
        for i in range(forward_parameters.training_steps):
            start = (i * forward_parameters.batch_size) % forward_parameters.train_whole_sample_size
            end = min(start + forward_parameters.batch_size, forward_parameters.train_whole_sample_size)
            data,label = sess.run([data_train, label_train])
            test_data,test_label = sess.run([data_test,label_test])  # run the test-set data here
            data_reshape=np.reshape(data,(-1,10,2401,3))
            label_reshape = np.reshape(label, (-1, 49))
            test_data_reshape = np.reshape(test_data, (-1, 10, 2401, 3))  # reshape it
            test_label_reshape = np.reshape(test_label, (-1, 49))
            _,loss_value,step,accuracy_score=sess.run([train_op,loss,global_step,accuracy],feed_dict={x:data_reshape[start:end],y:label_reshape[start:end]})

            if i % 500 == 0:
                summary = sess.run(merged,feed_dict={x:data_reshape,y:label_reshape})
                writer.add_summary(summary,i)
                saver.save(sess,os.path.join(forward_parameters.model_save_path,forward_parameters.model_save_name),global_step)
                print("After %d training step(s), loss value is %f and accuracy is %f"%(step,loss_value,accuracy_score))
                
        # RIGHT HERE!!!! It kept throwing the error!!!  I figured I had already run and reshaped the test data, and I'm feeding it here, so why does it keep failing?
        test_acc = sess.run(eval_result_accuracy,feed_dict={x:test_data_reshape,y:test_label_reshape}) 
        #test_acc = eval_result_accuracy.eval(sess.run,feed_dict={x_test:test_data_reshape,y_test:test_label_reshape})
        print("After %d training step(s), eval result accuracy is %f"%(step,test_acc))
        coord.request_stop()
        coord.join(threads)
        sess.close()

2. The fix:

What was even more baffling is that the error pointed back at the placeholder definitions at the top. After much head-scratching (and losing a pound of hair), I removed x_test and y_test, changed the eval(x_test, y_test) call to eval(x, y), and changed the later feed to feed_dict={x: test_data_reshape, y: test_label_reshape}. The bug seems to be gone... In hindsight the reason is simple: eval(x_test, y_test) builds graph ops that depend on x_test and y_test, and tf.summary.merge_all() folds the eval_loss summary into merged, so any sess.run of merged or of eval_result_accuracy also has to feed x_test and y_test; feeding only x and y can never satisfy it. Reusing the same x and y for evaluation removes that requirement. It left me completely dazed. That was it? That was really it?
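Stripped of the CNN, the whole thing boils down to the toy example below (my own minimal reproduction, not the original code): every op you sess.run must have all the placeholders it depends on in feed_dict, and merge_all() quietly makes merged depend on the eval placeholder too.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 2], name='x_input')
x_test = tf.placeholder(tf.float32, [None, 2], name='x_test_input')

train_loss = tf.reduce_mean(tf.square(x))        # depends on x only
eval_loss = tf.reduce_mean(tf.square(x_test))    # depends on x_test only
tf.summary.scalar('eval_loss', eval_loss)
merged = tf.summary.merge_all()                  # merged now depends on x_test as well

with tf.Session() as sess:
    batch = np.ones((4, 2), np.float32)
    sess.run(train_loss, feed_dict={x: batch})              # OK
    # sess.run(merged, feed_dict={x: batch})                # InvalidArgumentError:
    #   You must feed a value for placeholder tensor 'x_test_input'
    sess.run(merged, feed_dict={x: batch, x_test: batch})   # OK once x_test is fed
    # My fix: build the eval op from x itself, so there is nothing extra to feed.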

Finally, here is my eval; it's very simple.

def eval(data,label):
    x = data
    test_y = label
    y = conv_forward(x,reuse=True)   # reuse the weights already built in train()
    #y = conv_forward(x, reuse=tf.AUTO_REUSE)
    accuracy = tf.reduce_mean(tf.square(test_y - y))   # mean squared error, used as the "accuracy" metric for this regression
    return accuracy
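One note for context: reuse=True only works because conv_forward builds its variables inside a tf.variable_scope. I haven't posted that function here, so the sketch below only illustrates the reuse mechanism, not my actual network.

def conv_forward(x, reuse=False):
    # Sketch only: the real network has more layers and uses the forward_parameters sizes.
    # The point is the variable_scope + reuse flag: train() builds the variables with
    # reuse=False, and eval() shares the same trained weights with reuse=True.
    with tf.variable_scope('conv_forward', reuse=reuse):
        conv = tf.layers.conv2d(x, filters=16, kernel_size=3, padding='same',
                                activation=tf.nn.relu, name='conv1')
        flat = tf.layers.flatten(conv)
        out = tf.layers.dense(flat, forward_parameters.output_nodes, name='fc_out')
    return out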