1. A First Look at MNIST: A Softmax Linear Classifier

The dataset is hosted on a foreign site that may require a proxy to reach; it can be downloaded here: http://yann.lecun.com/exdb/mnist/

Note: do not unzip the downloaded files; the demo reads the .gz archives directly.

MNIST is a dataset of handwritten digits from 0 to 9. It is usually a machine learning beginner's first experiment, the equivalent of a "Hello World" program.
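
Before diving into the full demo, a quick sanity check makes the numbers 784 and 10 in the code below concrete. This is only a minimal sketch (assuming the same TensorFlow 1.x input_data helper and data directory as the demo): read_data_sets() reads the four .gz archives in the directory directly (downloading them first if missing), flattens each 28x28 image into a 784-dimensional vector, and encodes each label as a 10-dimensional one-hot vector.

from tensorflow.examples.tutorials.mnist import input_data

# sketch only: same data directory as the demo below
mnist = input_data.read_data_sets('/home/lawenliu/tensorflow/mnist_data', one_hot=True)
print(mnist.train.images.shape)   # (55000, 784): 55000 images, each 28*28 = 784 pixels
print(mnist.train.labels.shape)   # (55000, 10): one-hot labels for digits 0~9
print(mnist.test.images.shape)    # (10000, 784): test set used for the final accuracy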

You can refer to the JikeXueYuan walkthrough: http://wiki.jikexueyuan.com/project/tensorflow-zh/tutorials/mnist_beginners.html

You can also refer to the TensorFlow examples on GitHub: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/mnist

 

import sys 
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

default_dir = '/home/lawenliu/tensorflow/mnist_data'

def main(data_dir):
    # 1. read data from mnist directory
    mnist = input_data.read_data_sets(data_dir, one_hot=True)
    # 2. get formula as: y = Wx + b
    # normally we use placeholder to import data
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.matmul(x, W) + b 

    # 3. y is the predicted value we compute; y_ will hold the expected (ground-truth) labels
    y_ = tf.placeholder(tf.float32, [None, 10])

    # 4. we use cross entropy as loss function
    #cross_entropy = -tf.reduce_sum(y_*tf.log(y))
    # the naive cross entropy above is numerically unstable, so we use softmax_cross_entropy_with_logits here (see the numpy sketch after this listing)
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
    # 5. use gradient descent to find the best W and b
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

    # 6. start session for training
    sess = tf.InteractiveSession()
    # 7. initialize global variables
    tf.global_variables_initializer().run()
    # 8. start training: run 1000 training steps
    for _ in range(1000):
        # fetch a batch of 100 images each step
        # batch_xs is the image matrix, batch_ys is the one-hot label matrix
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # use batch_xs and batch_ys to fill the placeholders
        sess.run(train_step, feed_dict={x: batch_xs, y_:batch_ys})

    # 9. evaluate prediction accuracy on the test set
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) 
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels}))

if __name__ == '__main__':
    # disable warning log
    old_v = tf.logging.get_verbosity()
    tf.logging.set_verbosity(tf.logging.ERROR)

    # use the default directory if no argument is provided
    data_dir = default_dir
    if len(sys.argv) > 1:
        data_dir = sys.argv[1]

    main(data_dir)

    # recover logging setting
    tf.logging.set_verbosity(old_v)
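
To see why step 4 switches from the hand-written cross entropy to tf.nn.softmax_cross_entropy_with_logits, here is a small numpy sketch (my own illustration, not part of the demo). The function takes the raw logits y = Wx + b, applies softmax internally, and subtracts the per-row maximum before exponentiating (the log-sum-exp trick), so exp() never overflows; the naive -sum(y_*log(softmax(y))) formula can produce nan for large logits.

import numpy as np

def stable_softmax_cross_entropy(logits, labels):
    # subtract the row-wise max before exponentiating (log-sum-exp trick)
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.sum(np.exp(shifted), axis=1, keepdims=True))
    # cross entropy per example: -sum(labels * log(softmax(logits)))
    return -np.sum(labels * log_softmax, axis=1)

logits = np.array([[1000.0, 0.0, -1000.0]])   # deliberately extreme logits
labels = np.array([[0.0, 1.0, 0.0]])
# naive formula: exp(1000) overflows and the result is nan
print(-np.sum(labels * np.log(np.exp(logits) / np.sum(np.exp(logits)))))
print(stable_softmax_cross_entropy(logits, labels))   # [1000.], computed without overflow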

The output is as follows:

/data/libs/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Extracting /home/wenchuang.liu/tensorflow/mnist_data/train-images-idx3-ubyte.gz
Extracting /home/wenchuang.liu/tensorflow/mnist_data/train-labels-idx1-ubyte.gz
Extracting /home/wenchuang.liu/tensorflow/mnist_data/t10k-images-idx3-ubyte.gz
Extracting /home/wenchuang.liu/tensorflow/mnist_data/t10k-labels-idx1-ubyte.gz
2018-12-12 16:19:39.112774: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-12-12 16:19:39.308965: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-12-12 16:19:39.310681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: Tesla P100-PCIE-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.3285
pciBusID: 0000:00:08.0
totalMemory: 15.90GiB freeMemory: 15.61GiB
2018-12-12 16:19:39.486124: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-12-12 16:19:39.487825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties: 
name: Tesla P100-PCIE-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.3285
pciBusID: 0000:00:09.0
totalMemory: 15.90GiB freeMemory: 950.88MiB
2018-12-12 16:19:39.487911: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2018-12-12 16:19:40.605796: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-12 16:19:40.605870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 1 
2018-12-12 16:19:40.605879: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N N 
2018-12-12 16:19:40.605892: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1:   N N 
2018-12-12 16:19:40.606479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15123 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-12-12 16:19:40.607499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 661 MB memory) -> physical GPU (device: 1, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:09.0, compute capability: 6.0)
0.8724
