Using TensorFlow layers

Hardware: NVIDIA GTX 1080

Software: Windows 7, Python 3.6.5, tensorflow-gpu 1.4.0

I. Basics

1. layers: a higher-level wrapper around the tf.nn ops; each call creates and tracks its own weights, so networks take less code to build (see the sketch after this list).

2. softmax_cross_entropy_with_logits = softmax + cross-entropy, fused into one numerically stable op (also illustrated in the sketch below).
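
To make both points concrete, here is a minimal sketch (separate from the MNIST code in section II; the toy placeholders and the random feed data are illustrative assumptions, not part of the example below). Part (1) shows that one tf.layers.dense call replaces the manual weight/bias bookkeeping you would write with low-level ops; part (2) checks that tf.nn.softmax_cross_entropy_with_logits produces the same values as an explicit softmax followed by cross-entropy.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

#(1) tf.layers wraps tf.nn: one call creates the weights, the bias and the matmul
dense_out = tf.layers.dense(x, 10)

w = tf.Variable(tf.truncated_normal([784, 10], stddev = 0.1))
b = tf.Variable(tf.zeros([10]))
nn_out = tf.matmul(x, w) + b    #the same layer written with low-level ops

#(2) the fused op equals softmax followed by cross-entropy (but is more stable)
fused = tf.nn.softmax_cross_entropy_with_logits(logits = nn_out, labels = y)
manual = -tf.reduce_sum(y * tf.log(tf.nn.softmax(nn_out)), axis = 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x: np.random.rand(4, 784).astype(np.float32),
            y: np.eye(10)[np.random.randint(0, 10, 4)].astype(np.float32)}
    print(sess.run([fused, manual], feed))    #the two arrays match up to float error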

II. Code walkthrough (MNIST example)

import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot = True)
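#mnist.train has 55,000 images and mnist.test has 10,000; one_hot = True gives 10-dim label vectors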

xs = tf.placeholder(tf.float32, [None, 28*28])
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

x_image = tf.reshape(xs, [-1, 28, 28, 1])

#conv1: tf.layers.conv2d(inputs, filters, kernel_size, strides=(1, 1), ...)
conv1_layer = tf.layers.conv2d(x_image, 32, 5, padding = 'SAME', activation = tf.nn.relu)
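#output shape: [batch, 28, 28, 32] ('SAME' padding keeps the spatial size)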

#pool1: tf.layers.max_pooling2d(inputs, pool_size, strides, ...)
pool1_layer = tf.layers.max_pooling2d(conv1_layer, [2,2], [2,2], padding = 'SAME')
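#output shape: [batch, 14, 14, 32]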

#conv2
conv2_layer = tf.layers.conv2d(pool1_layer, 64, 5, padding = 'SAME', activation = tf.nn.relu)
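#output shape: [batch, 14, 14, 64]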

#pool2
pool2_layer = tf.layers.max_pooling2d(conv2_layer, [2,2], [2,2], padding = 'SAME')
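#output shape: [batch, 7, 7, 64]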

#flatten
flat_layer = tf.layers.flatten(pool2_layer)
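#output shape: [batch, 7*7*64] = [batch, 3136]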

#fc1: tf.layers.dense(inputs, units, ...)
fc1_layer = tf.layers.dense(flat_layer, 128, activation=tf.nn.relu)

#dropout: tf.layers.dropout takes the drop rate (not the keep probability) and is a
#no-op unless training=True, so convert keep_prob and enable it explicitly
dropout_layer = tf.layers.dropout(fc1_layer, rate = 1 - keep_prob, training = True)

#fc2
fc2_layer = tf.layers.dense(dropout_layer, 10)

#loss: pass the raw logits (fc2_layer), not probabilities; labels are one-hot
#softmax_cross_entropy_with_logits = softmax + cross-entropy in a single op
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = fc2_layer, labels = ys))
optimizer = tf.train.AdamOptimizer(0.001)
train_step = optimizer.minimize(cross_entropy)

#accuracy: build these ops once, outside the training loop, so the graph does not
#grow at every evaluation; softmax is monotonic, so argmax over fc2_layer equals
#argmax over softmax(fc2_layer) and the extra softmax is unnecessary here
bool_pred = tf.equal(tf.argmax(fc2_layer, 1), tf.argmax(ys, 1))
acc = tf.reduce_mean(tf.cast(bool_pred, tf.float32))

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for step in range(2000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict = {xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
        if step % 100 == 0:
            v_xs = mnist.test.images
            v_ys = mnist.test.labels
            output = sess.run(acc, feed_dict = {xs: v_xs, ys: v_ys, keep_prob: 1})
            print(output)

III. Results

Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
0.1531
0.9581
0.9702
0.9751
0.9824
0.9842
0.9869
0.9866
0.987
0.9885
0.9873
0.9898
0.9903
0.9916
0.987
0.9905
0.9902
0.9911
0.9916
0.99

 

For any questions, add me on QQ: 2258205918 (name: samylee)!

WeChat: samylee_csdn
