CS20SI TensorFlow for Deep Learning Course Notes (3)

  1. A logistic regression example in TensorFlow
    The overall workflow consists of the following steps:
    • Read in the data
    • Define placeholders for the features and labels
    • Define the weight and bias variables
    • Build the model
    • Define the loss function
    • Define the optimization algorithm
    • Initialize the variables and compute the number of batches per epoch
    • Train the model, accumulating the loss
    • Evaluate on the test set and report the accuracy

The full code is shown below. Note that although the starter code targets plain logistic regression, this version extends the model to a two-hidden-layer MLP with dropout.

"""
Starter code for a logistic regression model to solve the OCR task
with MNIST in TensorFlow (extended below to a small MLP with dropout)
MNIST dataset: yann.lecun.com/exdb/mnist/
"""

import time

import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Define parameters for the model
learning_rate = 0.01
batch_size = 128
n_epochs = 20

# Step 1: Read in data
# using TF Learn's built-in function to load MNIST data to the folder MNIST_data/
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
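
# network sizes: 784 input pixels (28 * 28 images), two hidden layers,
# and 10 output classes (the digits 0-9)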
in_units = 784
h1_units = 512
h2_units = 380
out_units = 10

# Step 2: create placeholders for features and labels
# each image in the MNIST data is of shape 28*28 = 784
# therefore, each image is represented with a 1x784 tensor
# there are 10 classes, corresponding to digits 0 - 9.
X = tf.placeholder(tf.float32, [None, in_units], name='X_placeholder')
Y = tf.placeholder(tf.float32, [None, out_units], name='Y_placeholder')
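# keep_prob is the dropout keep probability: fed as 0.75 during training
# and 1.0 at test time so that dropout is disabled for evaluation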
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

# Step 3: create the weight and bias variables and build the model,
# here a two-hidden-layer MLP with dropout after the first hidden layer

W_h1 = tf.Variable(tf.random_normal([in_units, h1_units]), name='weights_hidden_layer_1')
b_h1 = tf.Variable(tf.zeros([h1_units]), name='bias_hidden_layer_1')
h1 = tf.nn.relu(tf.matmul(X, W_h1) + b_h1)
h1_dropout = tf.nn.dropout(h1, keep_prob)

W_h2 = tf.Variable(tf.random_normal([h1_units, h2_units]), name='weights_hidden_layer_2')
b_h2 = tf.Variable(tf.zeros([h2_units]), name='bias_hidden_layer_2')
h2 = tf.nn.relu(tf.matmul(h1_dropout, W_h2) + b_h2)

W_out = tf.Variable(tf.random_normal([h2_units, out_units]), name='weights_out_layer')
b_out = tf.Variable(tf.zeros([out_units]), name='bias_out_layer')
y = tf.matmul(h2, W_out) + b_out

# Step 4: define loss function
# use cross entropy loss of the real labels with the softmax of logits
# use the method:
# tf.nn.softmax_cross_entropy_with_logits(logits=..., labels=...)
# then use tf.reduce_mean to get the mean loss of the batch
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=Y, name='loss'))
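# note: softmax_cross_entropy_with_logits applies the softmax internally,
# so y must be fed in as raw, unnormalized logits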

# Step 5: define training op
# using the Adam optimizer to minimize loss
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

with tf.Session() as sess:
    # to visualize using TensorBoard
    writer = tf.summary.FileWriter('./graphs', sess.graph)

    start_time = time.time()
    sess.run(tf.global_variables_initializer())
    n_batches = int(mnist.train.num_examples / batch_size)
    for i in range(n_epochs):  # train the model n_epochs times
        total_loss = 0

        for _ in range(n_batches):
            X_batch, Y_batch = mnist.train.next_batch(batch_size)

            # run the optimizer and fetch the batch loss; keep_prob < 1 enables dropout
            _, loss_batch = sess.run([optimizer, loss], feed_dict={X: X_batch, Y: Y_batch, keep_prob: 0.75})

            total_loss += loss_batch
        print('Average loss epoch {0}: {1}'.format(i, total_loss / n_batches))

    print('Total time: {0} seconds'.format(time.time() - start_time))

    print('Optimization Finished!')

    # test the model
    # note: do NOT run the optimizer here, or the model would keep training
    # on the test set; keep_prob is 1.0 so dropout is disabled for evaluation
    n_batches = int(mnist.test.num_examples / batch_size)
    total_correct_preds = 0
    for i in range(n_batches):
        X_batch, Y_batch = mnist.test.next_batch(batch_size)
        logits_batch = sess.run(y, feed_dict={X: X_batch, Y: Y_batch, keep_prob: 1.0})
        # argmax of the logits equals argmax of the softmax, so the softmax can
        # be skipped; counting matches in numpy avoids adding new graph ops on
        # every loop iteration
        preds = np.argmax(logits_batch, axis=1)
        total_correct_preds += np.count_nonzero(preds == np.argmax(Y_batch, axis=1))

    print('Accuracy {0}%'.format(100.0 * total_correct_preds / mnist.test.num_examples))
writer.close()
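
Since the script writes the graph definition to ./graphs with tf.summary.FileWriter, the computation graph can be inspected by launching TensorBoard after a run and opening the address it prints (usually http://localhost:6006):

tensorboard --logdir=./graphs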

  2. Optimizers in TensorFlow
    • tf.train.GradientDescentOptimizer
    • tf.train.AdadeltaOptimizer
    • tf.train.AdagradOptimizer
    • tf.train.AdagradDAOptimizer
    • tf.train.MomentumOptimizer
    • tf.train.AdamOptimizer
    • tf.train.FtrlOptimizer
    • tf.train.ProximalGradientDescentOptimizer
    • tf.train.ProximalAdagradOptimizer
    • tf.train.RMSPropOptimizer
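
All of these optimizers expose the same minimize() interface, so switching the training algorithm in the script above is a one-line change. A minimal sketch (the learning-rate and momentum values here are illustrative choices, not tuned for this model):

# drop-in replacement for the training op defined in Step 5 above;
# every tf.train optimizer exposes the same minimize() interface
optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9).minimize(loss)
# or, for example, RMSProp:
# optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(loss)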
