Setting up and running TensorFlow on Windows 7 with a GTX 1060

Machine configuration

  • OS: Windows 7 Ultimate 64-bit SP1
  • CPU: AMD A8-7600 Radeon R7 (10 Compute Cores, 4C+6G), quad-core
  • Motherboard: Maxsun MS-A88FX FS
  • RAM: 16 GB
  • GPU: Nvidia GeForce GTX 1060 (3 GB)

Setting up TensorFlow

  1. Install the Python 3 version of Anaconda (on Windows, TensorFlow currently supports only Python 3; the wheel below is built for Python 3.5). After installation, open IPython to confirm it runs.
  2. Install TensorFlow:
pip install --ignore-installed https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-0.12.0-cp35-cp35m-win_amd64.whl
  3. Install CUDA 8.0 with the default options; do not let the installer upgrade the display driver.
  4. Install cuDNN 5.1: unzip it and copy its contents into the matching subdirectories of the CUDA install directory (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0): bin into bin, lib into lib, include into include.
  5. Start Python and run import tensorflow as tf. If no error is reported, the installation succeeded; if the library fails to load, see this link. A quick GPU sanity check is sketched below.
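
As an extra check (my addition, not one of the original steps), you can enable device placement logging to confirm TensorFlow actually runs ops on the GPU; a minimal sketch:

import tensorflow as tf

# log_device_placement=True makes the session print the device each op is
# assigned to; the output should mention gpu:0 if CUDA/cuDNN were found.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0], name='a')
    b = tf.constant([3.0, 4.0], name='b')
    print(sess.run(a + b))
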
  6. Write a logistic regression test script for the MNIST handwritten digit dataset:
import os
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import time

# Load the MNIST handwritten digit dataset with TensorFlow's bundled helper
mnist = input_data.read_data_sets('./data/mnist', one_hot=True) 

# Inspect the shape of the training images
print(mnist.train.images.shape)

# Inspect the shape of the training labels
print(mnist.train.labels.shape)

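# Each image is 28x28 = 784 pixels flattened into a vector; the labels are
# one-hot vectors over the 10 digit classes.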
batch_size = 128
X = tf.placeholder(tf.float32, [batch_size, 784], name='X_placeholder') 
Y = tf.placeholder(tf.int32, [batch_size, 10], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[784, 10], stddev=0.01), name='weights')
b = tf.Variable(tf.zeros([1, 10]), name="bias")
logits = tf.matmul(X, w) + b
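# logits now holds unnormalized class scores of shape [batch_size, 10]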

# Cross-entropy loss: softmax_cross_entropy_with_logits applies softmax and
# cross-entropy in a single, numerically stable op
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
# Average the per-example losses over the batch
loss = tf.reduce_mean(entropy)

learning_rate = 0.01
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
# Total number of training epochs
n_epochs = 30

with tf.Session() as sess:
    # Write the graph so it can be inspected in TensorBoard
    writer = tf.summary.FileWriter(r'G:\logs\logistic_reg', sess.graph)

    start_time = time.time()
    sess.run(tf.global_variables_initializer()) 
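    # Number of mini-batches per epoch (a final partial batch is dropped)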
    n_batches = int(mnist.train.num_examples/batch_size)
    for i in range(n_epochs): # loop over the training set n_epochs times
        total_loss = 0

        for _ in range(n_batches):
            X_batch, Y_batch = mnist.train.next_batch(batch_size)
            _, loss_batch = sess.run([optimizer, loss], feed_dict={X: X_batch, Y:Y_batch}) 
            total_loss += loss_batch
        print('Average loss epoch {0}: {1}'.format(i, total_loss/n_batches))

    print('Total time: {0} seconds'.format(time.time() - start_time))

    print('Optimization Finished!')

    # Evaluate the model on the test set

    preds = tf.nn.softmax(logits)
    correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))
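    # Note: this "accuracy" op counts correct predictions in a batch; the
    # counts are summed below and divided by the test set size.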

    n_batches = int(mnist.test.num_examples/batch_size)
    total_correct_preds = 0

    for i in range(n_batches):
        X_batch, Y_batch = mnist.test.next_batch(batch_size)
        accuracy_batch = sess.run([accuracy], feed_dict={X: X_batch, Y:Y_batch}) 
        total_correct_preds += accuracy_batch[0]

    print('Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples))

    writer.close()

Program output:

Average loss epoch 0: 0.386188891549141
Average loss epoch 1: 0.2928179784185125
Average loss epoch 2: 0.2852246012616824
Average loss epoch 3: 0.27798929725424115
Average loss epoch 4: 0.2735775725130157
Average loss epoch 5: 0.27435725642528846
Average loss epoch 6: 0.27225190203089816
Average loss epoch 7: 0.26754918753545043
Average loss epoch 8: 0.2679311280900782
Average loss epoch 9: 0.26619768052390125
Average loss epoch 10: 0.2657872581711182
Average loss epoch 11: 0.26270394603828173
Average loss epoch 12: 0.2634125579070378
Average loss epoch 13: 0.26032830872041085
Average loss epoch 14: 0.26209098145817267
Average loss epoch 15: 0.25789975548610267
Average loss epoch 16: 0.25587266562007244
Average loss epoch 17: 0.260971031405709
Average loss epoch 18: 0.25762249581463686
Average loss epoch 19: 0.2562635491986375
Average loss epoch 20: 0.2569686156678033
Average loss epoch 21: 0.25794098736383975
Average loss epoch 22: 0.2525084098249604
Average loss epoch 23: 0.2554589692147184
Average loss epoch 24: 0.25341148514708717
Average loss epoch 25: 0.2505091481379696
Average loss epoch 26: 0.2527797804984735
Average loss epoch 27: 0.25004024197518965
Average loss epoch 28: 0.2527508559552106
Average loss epoch 29: 0.25222783740653304
Total time: 90.89096117019653 seconds
Optimization Finished!
Accuracy 0.9145
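
The script above never saves the trained weights. If you want to reuse them instead of retraining, a minimal sketch using tf.train.Saver (a hypothetical addition; the directory and file names are mine, not from the original post):

import os
import tensorflow as tf

# Standalone sketch: save a variable checkpoint with tf.train.Saver.
w = tf.Variable(tf.zeros([784, 10]), name='weights')
saver = tf.train.Saver()

ckpt_dir = './checkpoints'  # assumed location
if not os.path.isdir(ckpt_dir):
    os.makedirs(ckpt_dir)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... in the real script, the training loop would run here ...
    save_path = saver.save(sess, os.path.join(ckpt_dir, 'logistic_reg.ckpt'))
    print('Model saved to {0}'.format(save_path))

# Later, rebuild the same graph and restore instead of initializing:
# saver.restore(sess, save_path)
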
  7. Running the script also writes the computation graph to the custom directory G:\logs\logistic_reg. In cmd, switch to drive G: first, then start TensorBoard with the command below (TensorBoard scans subdirectories of --logdir, so pointing it at logs picks up the logistic_reg run):
tensorboard --logdir=logs

Then open the address (IP and port) it prints in Google Chrome.
(Screenshot: TensorBoard graph view)

References

  1. http://www.cnblogs.com/zlslch/p/6964983.html
  2. http://blog.csdn.net/infovisthinker/article/details/54705826