Converting TensorFlow's CIFAR-10 example into an RNN/LSTM network

I hit this error: TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a Int into a Tensor

Cause of the error:

This one stuck me for two days. Do not give an ordinary Python variable the same name as a tensor variable: the plain int batch_size shared a name with the batch_size placeholder, so when the feed dict was built, the int was used as the key where the placeholder was expected.

Ten thousand curses floated through my mind.
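A minimal sketch of the trap (the names here are illustrative, separate from the full script below):

import tensorflow as tf

batch_size = tf.placeholder(tf.int32, [])   # a tensor named batch_size
batch_size = 50                             # a plain int reuses the name!
# Any later feed_dict={batch_size: 50} now uses the int 50 as the key, and
# TensorFlow raises: Cannot interpret feed_dict key as Tensor.
# The fix: keep the names distinct, e.g. _batch_size for the placeholder.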
The code is adapted from this blogger's post:

https://blog.csdn.net/qq_35014850/article/details/82556506

The code is as follows:
import tensorflow as tf
import numpy as np
from time import time

epoch_list = []
accuracy_list = []
loss_list = []

lr = 1e-3

# The input feature at each time step is one image row; for CIFAR-10 a row
# has 32 pixels (the original MNIST comments said 28)
input_size = 32

# Sequence length: each prediction consumes 32 rows, one row per time step
timestep_size = 32

# Number of units in each hidden layer
hidden_size = 256

# Number of stacked LSTM layers
layer_num = 2

# Number of output classes (for regression this would be 1)
class_num = 10

keep_prob = tf.placeholder(tf.float32, [])

# Note the leading underscore: this must NOT share a name with the plain int
# batch_size used in the training loop below
_batch_size = tf.placeholder(tf.int32, [])

# (The MNIST original declared x = tf.placeholder(tf.float32, [None, 784]);
# for CIFAR-10 we feed 32x32 inputs directly)
x = tf.placeholder(tf.float32, shape=[None, 32, 32], name='x')
y = tf.placeholder(tf.float32, [None, class_num], name='y')

########################################################################## Build the LSTM model

# The MNIST original restored 784 flattened points into 28 x 28 images; here
# the input is already fed as 32 x 32, so the reshape below is a no-op kept
# for symmetry. The next few steps are the key to implementing the RNN/LSTM.
####################################################################

# Step 1: the RNN input shape is (batch_size, timestep_size, input_size)
x = tf.reshape(x, [-1, 32, 32])

stacked_rnn = []
for iiLyr in range(layer_num):
    stacked_rnn.append(tf.nn.rnn_cell.LSTMCell(num_units=hidden_size, state_is_tuple=True))
mlstm_cell = tf.nn.rnn_cell.MultiRNNCell(cells=stacked_rnn, state_is_tuple=True)

init_state = mlstm_cell.zero_state(_batch_size, dtype=tf.float32)
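This is exactly where the placeholder matters: zero_state builds the initial state with a run-time batch size, so the same graph can serve any batch. A quick check one could run once a session exists (a sketch, not part of the original post):

# len(init_state) == layer_num; each entry is an LSTMStateTuple whose c and h
# have shape (fed batch size, hidden_size)
# zeros = sess.run(init_state, feed_dict={_batch_size: 4})
# print(len(zeros), zeros[0].c.shape)   # expected: 2 (4, 256)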

# Step 6, method 1: call dynamic_rnn() to run the network we just built.
# When time_major == False, outputs.shape = [batch_size, timestep_size, hidden_size],
# so we can take h_state = outputs[:, -1, :] as the final output;
# state.shape = [layer_num, 2, batch_size, hidden_size],
# so alternatively h_state = state[-1][1] gives the same final output.
# Either way the final output has shape [batch_size, hidden_size].

outputs, state = tf.nn.dynamic_rnn(mlstm_cell, inputs=x, initial_state=init_state, time_major=False)

h_state = outputs[:, -1, :]  # or: h_state = state[-1][1]

# *************** To understand how the LSTM works, let's implement step 6 ourselves ***************
# The documentation shows that every RNNCell provides a call() method, which we
# can use to unroll the LSTM over time steps by hand.
# (If you run both method 1 and method 2 in the same graph, note that they
# create separate weight sets: dynamic_rnn's variables live under the 'rnn'
# scope, the manual unroll's under 'RNN'. Pick one.)

# Step 6, method 2: unroll the computation over time steps

outputs = list()
state = init_state
with tf.variable_scope('RNN'):
    for timestep in range(timestep_size):
        if timestep > 0:
            tf.get_variable_scope().reuse_variables()
        # state here holds the state of every LSTM layer
        (cell_output, state) = mlstm_cell(x[:, timestep, :], state)
        outputs.append(cell_output)
h_state = outputs[-1]

The LSTM output above is a [batch_size, hidden_size] tensor; to classify, we still need to attach a softmax layer.

First define the softmax layer's connection weights and bias. (The two placeholders below are leftovers from the source post and are never fed or used; the trainable Variables W and bias further down are what the model actually uses.)

out_W = tf.placeholder(tf.float32, [hidden_size, class_num], name='out_Weights')

out_bias = tf.placeholder(tf.float32, [class_num], name='out_bias')

Start training and testing.

W = tf.Variable(tf.truncated_normal([hidden_size, class_num], stddev=0.1), dtype=tf.float32)
bias = tf.Variable(tf.constant(0.1,shape=[class_num]), dtype=tf.float32)
y_pre = tf.nn.softmax(tf.matmul(h_state, W) + bias)

def get_train_batch(number, batch_size):
    # Xtrain_normalize / Ytrain_onehot come from the CIFAR-10 preprocessing
    # done earlier (not shown here)
    return np.array(Xtrain_normalize[number*batch_size:(number+1)*batch_size], dtype=float), \
           np.array(Ytrain_onehot[number*batch_size:(number+1)*batch_size])

Loss and evaluation functions.

cross_entropy = -tf.reduce_mean(y * tf.log(y_pre))
train_op = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
saver = tf.train.Saver()
correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess = tf.Session()
sess.run(tf.global_variables_initializer())
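One caveat worth noting: multiplying the softmax output into tf.log can underflow to log(0) and produce NaN losses. A more stable variant (an alternative sketch, not the code this post actually ran) computes the loss from logits:

# Alternative: feed raw logits to TensorFlow's fused op, which applies the
# log-softmax in a numerically stable way
logits = tf.matmul(h_state, W) + bias
stable_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
# train_op = tf.train.AdamOptimizer(lr).minimize(stable_loss)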

startTime = time()
for i in range(5001):
    batch_size = 50  # a plain int; the graph's placeholder is _batch_size
    batch_x, batch_y = get_train_batch(i, batch_size)
    # reshape/astype return new arrays, so the results must be assigned back
    batch_x = np.array(batch_x).reshape([-1, 32, 32]).astype(np.float32)
    batch_y = np.array(batch_y)
    # debug prints from the original run, left disabled:
    # print("1111", batch_x.shape)
    # print(batch_x, batch_y)
    if (i+1) % 100 == 0:
        # For the record, the line that caused the TypeError: the int
        # batch_size itself was used as a feed_dict key instead of the
        # placeholder:
        # sess.run(accuracy, feed_dict={x: batch_x, y: batch_y,
        #                               keep_prob: 1.0, batch_size: batch_size})
        # (keep_prob is declared but never used in this graph -- no
        # DropoutWrapper -- so it need not be fed)
        loss, train_accuracy = sess.run([cross_entropy, accuracy], feed_dict={
            x: batch_x, y: batch_y, _batch_size: batch_size})
        epoch_list.append(i+1)
        loss_list.append(loss)
        accuracy_list.append(train_accuracy)
        # number of completed epochs (MNIST original: mnist.train.epochs_completed)
        print("step %d, training accuracy %g" % ((i+1), train_accuracy))
    sess.run(train_op, feed_dict={x: batch_x, y: batch_y, _batch_size: batch_size})
saver.save(sess, './CIFAR10_2rnn/cifar10_model.ckpt')

duration = time() - startTime
print('train finished takes:', duration)
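The epoch_list, loss_list and accuracy_list filled in above are never plotted in the post; if you want to see the curves, here is a minimal matplotlib sketch (matplotlib is an extra dependency, not part of the original code):

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(epoch_list, loss_list)
ax1.set_xlabel('step'); ax1.set_ylabel('loss'); ax1.set_title('training loss')
ax2.plot(epoch_list, accuracy_list)
ax2.set_xlabel('step'); ax2.set_ylabel('accuracy'); ax2.set_title('training accuracy')
plt.show()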

Takeaway: every error ultimately comes from the limits of one's knowledge and from assuming instead of checking.
