TensorFlow learning, part 2: a neural-network example on MNIST

The source and a detailed walkthrough are in the Chinese-community tutorial: the MNIST advanced tutorial on the TensorFlow Chinese community site.

A few points I want to talk about:

1. How the tensor sizes change. I think this is very important, since it is the thread that ties the operations together. I'm a beginner rambling along, so please point out anything I get wrong.

My initial understanding was: the input image is 28×28; a 5×5 kernel with stride 1 and no padding gives 24×24; pooling gives 12×12; a second convolution gives 8×8; and after pooling, 4×4.

But the code clearly says 7×7.

It turns out the problem is here:

tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
It's the padding setting:

SAME means that the output feature map has the same spatial dimensions as the input feature map. Zero padding is introduced to make the shapes match as needed, equally on every side of the input map.
VALID means no padding.
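
To make the size arithmetic concrete, here is a small sketch of my own (plain Python, not from the tutorial) that reproduces TensorFlow's documented output-size rules: SAME gives ceil(in / stride), VALID gives ceil((in - kernel + 1) / stride):

def conv_output_size(in_size, kernel, stride, padding):
    # Spatial output size of a conv/pool op, following TensorFlow's rules
    if padding == 'SAME':
        return (in_size + stride - 1) // stride        # ceil(in / stride)
    if padding == 'VALID':
        return (in_size - kernel + stride) // stride   # ceil((in - kernel + 1) / stride)
    raise ValueError("unknown padding: %s" % padding)

# SAME padding, as the tutorial's code uses: 28 -> 28 -> 14 -> 14 -> 7
size = 28
for _ in range(2):
    size = conv_output_size(size, kernel=5, stride=1, padding='SAME')  # convolution
    size = conv_output_size(size, kernel=2, stride=2, padding='SAME')  # 2x2 max-pool
print(size)  # 7, matching the 7*7*64 flatten in the code below

# VALID padding, my original (wrong) assumption: 28 -> 24 -> 12 -> 8 -> 4
size = 28
for _ in range(2):
    size = conv_output_size(size, kernel=5, stride=1, padding='VALID')
    size = conv_output_size(size, kernel=2, stride=2, padding='VALID')
print(size)  # 4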

2. The Dropout layer. This operation had me a bit confused; all I know so far is the rough idea: it randomly drops units with some probability to prevent overfitting. Leaving a placeholder here to fill in once I've studied it properly.
It is generally used during training and turned off during testing, which is why the code reads:
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
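A quick illustration of what it actually does (my own sketch, not from the tutorial): tf.nn.dropout zeroes each element with probability 1 - keep_prob and scales the survivors by 1/keep_prob, so the expected activation is unchanged and the test-time pass needs no extra rescaling:

import tensorflow as tf

x = tf.ones([1, 8])
keep_prob = tf.placeholder(tf.float32)
dropped = tf.nn.dropout(x, keep_prob)

with tf.Session() as sess:
    # Training mode: roughly half the units are zeroed, survivors become 1/0.5 = 2.0
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))
    # Test mode: keep_prob = 1.0 keeps everything, so the inputs pass through unchanged
    print(sess.run(dropped, feed_dict={keep_prob: 1.0}))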
3. With nothing better to do, I wanted to see how much data my 8 GB of GPU memory could handle. Not much, as it turns out (cry-laughing). It kept reporting out-of-memory at the very last step.

Then it hit me that the error only appeared at the test step: training is batched, but testing feeds every image in at once. So I gave the test set a batch too, and the out-of-memory error was gone:
for j in range(2):
    batch_t = mnist.test.next_batch(1000)
    print("test accuracy %g" % accuracy.eval(feed_dict={x: batch_t[0], y_: batch_t[1], keep_prob: 1.0}))

OK, here is the code:
#! /usr/bin/env python
# _*_ coding: utf-8 _*_

import tensorflow as tf

# Download and load the dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("workspace/mnist/data", one_hot=True)
# Start a TensorFlow InteractiveSession, capping GPU memory usage at 90%
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.90)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
# Placeholders for the input images and the one-hot labels
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
### Build a multi-layer convolutional network
# Weight initialization
def weight_variable(shape):
    # Small truncated-normal noise breaks symmetry between units
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Slightly positive bias helps avoid dead ReLU neurons
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Convolution and pooling
def conv2d(x, w):
    # Stride 1 with SAME padding keeps the spatial size unchanged
    return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling halves the spatial size
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# First convolutional layer: 28x28x1 -> 28x28x32 -> (pool) 14x14x32
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])  # -1 lets that dimension be inferred automatically; see the API docs
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
# Second convolutional layer: 14x14x32 -> 14x14x64 -> (pool) 7x7x64
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# Densely connected layer: flatten the 7*7*64 features into 1024 units
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# Dropout: keep_prob is fed at run time (0.5 for training, 1.0 for testing)
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Output layer: softmax over the 10 digit classes
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
# Train and evaluate the model
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))  # tf.cast converts the bool tensor to float
sess.run(tf.global_variables_initializer())
for i in range(10000):
    batch = mnist.train.next_batch(100)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={
            x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
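
To double-check the size progression from point 1, you can also print the static shapes right after the graph is built (my own addition; get_shape() reports a TF 1.x tensor's static shape, with ? as the batch dimension):

print(h_conv1.get_shape())  # (?, 28, 28, 32) -- SAME padding preserves 28x28
print(h_pool1.get_shape())  # (?, 14, 14, 32)
print(h_conv2.get_shape())  # (?, 14, 14, 64)
print(h_pool2.get_shape())  # (?, 7, 7, 64), hence the 7*7*64 flatten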
The result of the run:

[training log screenshot]