Up and Running with TensorFlow (Chapter 9)

This article walks through creating and managing computation graphs in TensorFlow: implementing linear regression, gradient descent optimization, feeding data, saving and restoring models, and using TensorBoard. It examines the lifecycle of node values, variable sharing, modular code, and training models in a distributed setting; demonstrates both manual and automatic gradient computation; compares different optimizers; and shows how to feed data through placeholders and visualize the graph and training curves in TensorBoard.

TensorFlow lets you define a computation graph and then split it into chunks that run in parallel on different GPUs.

It also supports distributed training, spreading the computation across hundreds of servers so that huge neural networks can be trained.
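For instance, individual operations can be pinned to a particular device with tf.device(). A minimal sketch (the device name "/gpu:0" is an assumption and requires a GPU-enabled build):

import tensorflow as tf

# Place these ops on the first GPU; allow_soft_placement lets TensorFlow
# fall back to the CPU if no such device exists.
with tf.device("/gpu:0"):
    a = tf.constant([1.0, 2.0], name="a")
    b = tf.constant([3.0, 4.0], name="b")
    c = a * b

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(c))  # [3. 8.]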

 

Creating Your First Graph and Running It in a Session

In fact, the following code does not perform any computation: it only creates a computation graph.

import tensorflow as tf
import numpy as np

# Helper used throughout this article (as in the book's notebooks): clear the
# default graph and fix the random seeds so that runs are reproducible.
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

reset_graph()

x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
f = x*x*y + y + 2   # <tf.Tensor 'add_1:0' shape=() dtype=int32>

To evaluate this graph you need to open a session. A session places the graph's operations onto devices such as CPUs or GPUs and runs them. Finally, you must call sess.close() to free the resources:

sess = tf.Session()
sess.run(x.initializer)
sess.run(y.initializer)
result = sess.run(f)
print(result)

 

Calling sess.run() every time is clearly too cumbersome; there is a simpler way. Inside a with block, the session is set as the default session, and its resources are released automatically at the end of the block.

with tf.Session() as sess:
    x.initializer.run()
    y.initializer.run()
    result = f.eval()

x.initializer.run() is equivalent to calling tf.get_default_session().run(x.initializer)
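Likewise, f.eval() is equivalent to calling tf.get_default_session().run(f). A minimal sketch showing the equivalent forms side by side (reusing x, y, and f from above):

with tf.Session() as sess:
    x.initializer.run()                     # same as sess.run(x.initializer)
    y.initializer.run()
    print(f.eval())                         # same as sess.run(f)
    print(tf.get_default_session().run(f))  # identical result: 42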

There is an even simpler way to initialize all the variables at once.

init = tf.global_variables_initializer()

with tf.Session() as sess:
    init.run()
    result = f.eval()

 

Unlike a regular Session, an InteractiveSession automatically sets itself as the default session when it is created, so no with block is needed; however, you must close it manually at the end to free its resources.

sess = tf.InteractiveSession()
init.run()
result = f.eval()
print(result)
sess.close()

 

Managing Graphs

Any node you create is automatically added to the default graph.

x1 = tf.Variable(1)
x1.graph is tf.get_default_graph()  # True

 

Sometimes you may want to manage multiple independent graphs. You can do this by creating a new Graph and temporarily making it the default graph inside a with block.

graph = tf.Graph()
with graph.as_default():
    x2 = tf.Variable(2)

x2.graph is graph  # True
x2.graph is tf.get_default_graph()  # False

 

In experimentation you will often run the same command more than once, which leaves your default graph full of duplicate nodes. There are two ways to deal with this (a short demonstration follows below):

  1. Restart Spyder (or whatever Python kernel you are using).
  2. Reset the default graph:

tf.reset_default_graph()
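For instance, creating "the same" variable twice actually adds two distinct nodes to the graph; a minimal sketch of the duplication (the variable name "a" is arbitrary):

tf.reset_default_graph()
a = tf.Variable(1, name="a")
a = tf.Variable(1, name="a")  # a second, independent node
print([v.name for v in tf.global_variables()])  # ['a:0', 'a_1:0']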

Lifecycle of a Node Value

When you evaluate a node, TensorFlow automatically determines the set of nodes that it depends on and evaluates those nodes first. Consider the following code:

w = tf.constant(3)
x = w + 2
y = x + 5
z = x * 3

with tf.Session() as sess:
    print(y.eval())  # 10
    print(z.eval())  # 15

 

This code first evaluates y. TensorFlow detects that y depends on x, which depends on w, so the session first evaluates w, then x, and finally y. When z is evaluated afterwards, w and x are evaluated again from scratch rather than reusing the previous results; in other words, this code evaluates w and x twice. To evaluate them only once, ask the session for y and z in a single graph run:

with tf.Session() as sess:
    y_val, z_val = sess.run([y, z])
    print(y_val)  # 10
    print(z_val)  # 15

All node values are dropped between graph runs, except variable values, which are maintained by the session for as long as it is open.

Sessions do not share any state, even when they run the same graph: each session keeps its own copy of every variable. In distributed TensorFlow, variable state is stored on the servers rather than in the sessions, so multiple sessions can share the same variables.
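A minimal sketch of this isolation (w and inc are illustrative names):

w = tf.Variable(10)
inc = tf.assign(w, w + 1)
init = tf.global_variables_initializer()

sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(init)
sess2.run(init)
sess1.run(inc)       # increments sess1's copy of w only
print(sess1.run(w))  # 11
print(sess2.run(w))  # 10 -- sess2 has its own, untouched copy
sess1.close()
sess2.close()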

 

Linear Regression with TensorFlow

Unlike computing the result directly with NumPy, TensorFlow can run the following code on a GPU. Calling fetch_california_housing() directly may fail: if the download does not succeed, run the code below on a Linux machine first to obtain the cal_housing_py3.pkz file, place it in data_home='H:\paper\DeepLearning\Tensorflow\hand on with tensorflow\california_housing', and it will then run successfully. Here the NumPy arrays are converted into TensorFlow nodes.

import numpy as np
from sklearn.datasets import fetch_california_housing

reset_graph()

housing = fetch_california_housing(
    data_home=r'H:\paper\DeepLearning\Tensorflow\hand on with tensorflow\california_housing')
# If the automatic download fails, fetch the data manually and point data_home at it.
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]

X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)

with tf.Session() as sess:
    theta_value = theta.eval()

Computing the same result with NumPy:

X = housing_data_plus_bias
y = housing.target.reshape(-1, 1)
theta_numpy = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)

print(theta_numpy)

Computing it with Scikit-Learn:

from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing.data, housing.target.reshape(-1, 1))

print(np.r_[lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T])

 

 

Implementing Gradient Descent

Manually Computing the Gradients

  1. The random_uniform() function creates a node in the graph that produces a tensor containing random values, given a shape and a value range, much like NumPy's rand().
  2. assign() creates a node that assigns a new value to a variable. In the example below it implements the batch gradient descent step θ(next step) = θ − η∇θMSE(θ) (see the training_op in the sketch after the code block).
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]


reset_graph()

n_epochs = 1000
learning_rate = 0.01

X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
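The graph is completed with the MSE cost, its hand-derived gradient, and an assign() node for the training step. A minimal sketch (the node names error, mse, gradients, and training_op follow the book's conventions but are assumptions here):

error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
# Hand-derived gradient of the MSE with respect to theta: (2/m) * X^T * error
gradients = 2/m * tf.matmul(tf.transpose(X), error)
# assign() performs the update theta(next step) = theta - eta * gradient
training_op = tf.assign(theta, theta - learning_rate * gradients)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)
    best_theta = theta.eval()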