TensorFlow Implementation Mechanics: Deep Learning Networks

1. A TensorFlow program is typically split into two parts.

The first part creates the computation graph; this is called the construction phase.

This phase typically builds the graph that represents the machine learning model, together with the operations needed to train it.

The second part is the execution phase.

The execution phase usually runs a loop that repeats the training step: each iteration feeds a mini-batch of data and gradually improves the model parameters.
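As a minimal illustration of the two phases, consider this toy graph (my own sketch, separate from the MNIST example below):

import tensorflow as tf

# Phase one (construction): these lines only add nodes to a graph;
# nothing is computed yet.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

# Phase two (execution): a session actually evaluates the requested node.
with tf.Session() as sess:
    print(sess.run(c))  # 6.0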

The code below builds both a fully connected network and a convolutional network. Only the phase-one graph construction differs; everything else is identical, so you can comment one block out and the other in to test each.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np

# Phase one: graph construction
# # (1) Fully connected network
# n_inputs = 28*28
# n_hidden1 = 300
# n_hidden2 = 100
# n_outputs = 10
#
# X = tf.placeholder(tf.float32, shape=(None, n_inputs), name='X')
# y = tf.placeholder(tf.int64, shape=(None,), name='y')
#
# ### Build the network layers
# w1 = tf.Variable(tf.truncated_normal([n_inputs, n_hidden1], stddev=0.1))
# b1 = tf.Variable(tf.zeros([n_hidden1]))
# w2 = tf.Variable(tf.truncated_normal([n_hidden1, n_hidden2], stddev=0.1))
# b2 = tf.Variable(tf.zeros([n_hidden2]))
# w3 = tf.Variable(tf.truncated_normal([n_hidden2, n_outputs], stddev=0.1))
# b3 = tf.Variable(tf.zeros([n_outputs]))
#
# y1 = tf.nn.tanh(tf.matmul(X, w1) + b1)  # first hidden layer: weights, bias, tanh output
# y2 = tf.nn.tanh(tf.matmul(y1, w2) + b2)  # second hidden layer: weights, bias, tanh output
# logits = tf.matmul(y2, w3) + b3


# (2) Convolutional network
n_inputs = 28*28
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name='X')
y = tf.placeholder(tf.int64, shape=(None,), name='y')
x_image = tf.reshape(X, [-1, 28, 28, 1])

### Build the network layers
W_conv1 = tf.Variable(tf.truncated_normal(shape=[5, 5, 1, 32], stddev=0.1))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]))
W_conv2 = tf.Variable(tf.truncated_normal(shape=[5, 5, 32, 64], stddev=0.1))
b_conv2 = tf.Variable(tf.constant(0.1, shape=[64]))
W_fc1 = tf.Variable(tf.truncated_normal(shape=[7 * 7 * 64, 1024], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[1024]))
W_fc2 = tf.Variable(tf.truncated_normal(shape=[1024, 10], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(0.1, shape=[10]))

h_conv1 = tf.nn.relu(tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1)
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
h_conv2 = tf.nn.relu(tf.nn.conv2d(h_pool1, W_conv2, strides=[1, 1, 1, 1], padding='SAME') + b_conv2)
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
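# Shape check: 'SAME' padding keeps the 28x28 spatial size through each conv,
# and each 2x2 max-pool with stride 2 halves it: 28x28 -> 14x14 -> 7x7.
# Hence the flattened feature vector below has 7 * 7 * 64 elements.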
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# # Optional: add a dropout layer to reduce overfitting. If enabled, feed keep_prob
# # in each feed_dict and compute logits from h_fc1_drop instead of h_fc1.
# keep_prob = tf.placeholder(tf.float32)
# h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
logits = tf.matmul(h_fc1, W_fc2) + b_fc2



### Build the cross-entropy loss and average it over the samples in the batch
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
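# What this loss computes per example, as a NumPy sketch for intuition
# (my own illustration, not part of the original script):
#   z = logits_row - np.max(logits_row)      # shift for numerical stability
#   probs = np.exp(z) / np.sum(np.exp(z))    # softmax
#   loss_i = -np.log(probs[label])           # negative log-prob of the true class
# e.g. logits [2.0, 1.0, 0.1] with label 0 give a loss of about 0.417.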

##################################### Training
# Build the optimizer
learning_rate = 0.01

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)  # one training step: update variables to reduce the loss
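# minimize() is shorthand for compute_gradients() followed by apply_gradients();
# the gradients come from TensorFlow's automatic differentiation of the graph.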

# Build the evaluation logic
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
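# An essentially equivalent formulation (a sketch; in_top_k and argmax differ
# only when the top logit is tied):
# correct = tf.equal(tf.argmax(logits, axis=1), y)
# accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))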


########################################
init = tf.global_variables_initializer()  # creates an op that initializes all variables;
                                          # nothing runs until it is executed inside a session

saver = tf.train.Saver()        # for saving the model's variable values

# Phase two: execution
mnist = input_data.read_data_sets("MNIST_data_bak/")
n_epochs = 400
batch_size = 50
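# With this loader the training split holds 55,000 images, so each epoch below
# runs 55000 // 50 = 1100 mini-batch updates.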

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        # equivalent to: acc_train = sess.run(accuracy, feed_dict={X: X_batch, y: y_batch})
        acc_test = accuracy.eval(feed_dict={X: mnist.test.images,
                                            y: mnist.test.labels})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)

    save_path = saver.save(sess, "./my_dnn_model_final.ckpt")


# Using the trained model for prediction
"""
with tf.Session() as sess:
    saver.restore(sess, "./my_dnn_model_final.ckpt")
    X_new_scaled = [...]
    Z = logits.eval(feed_dict={X: X_new_scaled})
    y_pred = np.argmax(Z, axis=1)  # pick the class with the highest logit
"""
