Help: adding an accuracy output to this SAE code, and changing the MNIST train/test split

I found this SAE (stacked autoencoder) code online, but it only prints the loss, with no accuracy output, and I also don't know how to change the train/test split ratio for the MNIST dataset.
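On the split ratio: the TF1 MNIST loader reads fixed files (55,000 train / 5,000 validation / 10,000 test), and its only knob is the `validation_size` argument of `read_data_sets`, which just moves examples between train and validation. For an arbitrary train/test ratio you can pool everything and re-split it yourself. A minimal numpy sketch (the `resplit` helper and the 0.7 ratio are made up for illustration):

```python
import numpy as np

def resplit(images, labels, train_fraction=0.8, seed=0):
    """Shuffle a pooled dataset, then split it into train/test by ratio."""
    n = images.shape[0]
    order = np.random.RandomState(seed).permutation(n)
    images, labels = images[order], labels[order]
    n_train = int(n * train_fraction)
    return (images[:n_train], labels[:n_train],   # train split
            images[n_train:], labels[n_train:])   # test split

# With the real loader you would first pool its fixed splits, e.g.:
#   all_x = np.concatenate([mnist.train.images, mnist.validation.images,
#                           mnist.test.images])
#   all_y = np.concatenate([mnist.train.labels, mnist.validation.labels,
#                           mnist.test.labels])
# Toy demonstration with placeholder arrays:
x, y = np.zeros((10, 784)), np.zeros((10, 10))
tr_x, tr_y, te_x, te_y = resplit(x, y, train_fraction=0.7)
print(tr_x.shape, te_x.shape)   # (7, 784) (3, 784)
```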

If possible, I would also like to add one feature: put the images I want to recognize in a separate directory, and have the program print the recognized digit for each one.
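For the separate-directory feature, here is a hedged sketch of the glue code (the helper names `preprocess`/`predict_directory`, the `.png` filter, and the 28x28 assumption are all mine, not from the original). Each file must be loaded as grayscale, already sized 28x28 like MNIST (resizing, e.g. with Pillow, is left to the loader you pass in), flattened to 784 values, and fed through a trained classifier:

```python
import os
import numpy as np

def preprocess(img):
    """Turn one grayscale image array (28x28, dark digit on a white
    background, values in 0-1 or 0-255) into the flat 784-vector
    MNIST format (bright digit on black, values in [0, 1])."""
    img = np.asarray(img, dtype=np.float32)
    if img.ndim == 3:                   # RGB(A) file: average the color channels
        img = img[..., :3].mean(axis=2)
    if img.max() > 1.0:                 # 0-255 range: rescale to [0, 1]
        img = img / 255.0
    return (1.0 - img).reshape(1, 784)  # invert: MNIST digits are white on black

def predict_directory(directory, load_image, predict_fn):
    """Classify every .png in `directory`.

    load_image: path -> image array (e.g. matplotlib.pyplot.imread)
    predict_fn: (1, 784) array -> (1, 10) class probabilities
    Returns {filename: predicted digit}.
    """
    results = {}
    for name in sorted(os.listdir(directory)):
        if name.lower().endswith(".png"):
            vec = preprocess(load_image(os.path.join(directory, name)))
            results[name] = int(np.argmax(predict_fn(vec), axis=1)[0])
    return results

# In the script below, predict_fn would be something like
#   lambda v: sess.run(pred, feed_dict={X: v})
# where `pred` is a softmax classification head (not present in the code yet).
```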

Environment: Python 3.6 on Windows.

Here is the code I found:

from __future__ import division, print_function, absolute_import

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt


from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)


learning_rate = 0.01    # learning rate
training_epochs = 20    # number of training epochs
batch_size = 256
display_step = 1
examples_to_show = 10


n_hidden_1 = 256  # neurons in hidden layer 1
n_hidden_2 = 128  # neurons in hidden layer 2
n_input = 784     # 28x28 pixels per MNIST image
n_output = 10     # ten digit classes

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_output])  # labels; never actually fed in this script


weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
    'softmax_w': tf.Variable(tf.random_normal([n_hidden_2, n_output])),  # note: this and 'softmax_b' are never used below
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([n_input])),
    'softmax_b': tf.Variable(tf.random_normal([n_output])),
}

# Building the encoder
def encoder(x):
    # Encoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))


    # Encoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2


# Building the decoder
def decoder(x):
    # Decoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    # Decoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2

# Construct model 

encoder_op = encoder(X)  # bug fix: was encoder(x); lowercase x is undefined here (NameError)
decoder_op = decoder(encoder_op)

# Prediction 
y_pred = decoder_op
# Targets (Labels) are the input data. 
y_true = X

# Define loss and optimizer, minimize the squared error 
cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))  
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)


# Initializing the variables  
init = tf.global_variables_initializer()

sess = tf.InteractiveSession() 
sess.run(init)

total_batch = int(mnist.train.num_examples/batch_size) 
# Training cycle  
for epoch in range(training_epochs):
    # Loop over all batches 
    for i in range(total_batch):
        batch_xs, _ = mnist.train.next_batch(batch_size)  # labels unused in the autoencoder phase
        # Run optimization op (backprop) and cost op (to get loss value) 
        _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
    # Display logs per epoch step
    if epoch % display_step == 0: 
        print("Epoch:", '%04d' % (epoch+1),
              "cost=", "{:.9f}".format(c))

print("Optimization Finished!") 

# Applying encode and decode over test set 
encode_decode = sess.run(  
    y_pred, feed_dict={X: mnist.test.images[:examples_to_show]})
# Compare original images with their reconstructions 
f, a = plt.subplots(2, 10, figsize=(10, 2)) 
for i in range(examples_to_show):
    a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28))) 
    a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))
plt.show()  # block so the window stays open when run as a script
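A note on why there is no accuracy to print: the script above only pretrains the autoencoder. `softmax_w`/`softmax_b` are declared but never used, the labels are ignored, and the printed `cost` is the reconstruction error, not a classification loss. To get digit predictions and an accuracy figure you need a second, supervised phase: build `pred = tf.nn.softmax(tf.matmul(encoder_op, weights['softmax_w']) + biases['softmax_b'])`, train it against `Y` with cross-entropy, then evaluate with `tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred, 1), tf.argmax(Y, 1)), tf.float32))`. What that head computes, sketched in plain numpy with random stand-in weights (illustrative only, not trained):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classify(x, W1, b1, W2, b2, Ws, bs):
    """Encoder forward pass (same shapes as the script) + softmax head."""
    h1 = sigmoid(x @ W1 + b1)              # 784 -> 256
    h2 = sigmoid(h1 @ W2 + b2)             # 256 -> 128
    return softmax(h2 @ Ws + bs)           # 128 -> 10 class probabilities

def accuracy(probs, one_hot_labels):
    """Fraction of samples whose argmax prediction matches the label."""
    return float(np.mean(np.argmax(probs, 1) == np.argmax(one_hot_labels, 1)))

rng = np.random.RandomState(0)
W1, b1 = 0.05 * rng.randn(784, 256), np.zeros(256)
W2, b2 = 0.05 * rng.randn(256, 128), np.zeros(128)
Ws, bs = 0.05 * rng.randn(128, 10), np.zeros(10)
probs = classify(rng.rand(4, 784), W1, b1, W2, b2, Ws, bs)
print(probs.shape)                          # (4, 10); each row sums to 1
```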

Any help from the more experienced folks here would be much appreciated, thank you!
