Tensors
TensorFlow programs use a tensor data structure to represent all data -- only tensors are passed between operations in the computation graph. You can think of a TensorFlow tensor as an n-dimensional array or list. A tensor has a static type, a rank, and a shape. To learn more about how TensorFlow handles these concepts, see the Rank, Shape, and Type reference.
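For instance, you can inspect a tensor's rank, shape, and type directly from its Python handle. A minimal sketch, assuming the TensorFlow 1.x API used throughout this section (the name m is illustrative):

import tensorflow as tf

# A rank-2 tensor (a 2x3 matrix) with static type float32.
m = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])
print(m.get_shape())        # (2, 3)
print(m.get_shape().ndims)  # 2 -- the rank
print(m.dtype)              # <dtype: 'float32'>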
Variables
Variables maintain state across executions of the graph. The following example shows a variable serving as a simple counter. See Variables for more details.
# Create a Variable that will be initialized to the scalar value 0.
state = tf.Variable(0, name="counter")
# Create an Op to add one to `state`.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)
# Variables must be initialized by running an `init` Op after having
# launched the graph. We first have to add the `init` Op to the graph.
init_op = tf.global_variables_initializer()
# Launch the graph and run the ops.
with tf.Session() as sess:
    # Run the 'init' op
    sess.run(init_op)
    # Print the initial value of 'state'
    print(sess.run(state))
    # Run the op that updates 'state' and print 'state'.
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))
# output:
# 0
# 1
# 2
# 3
The assign() operation in this code is a part of the expression graph, just like the add() operation, so it does not actually perform the assignment until run() executes the expression.
You typically represent the parameters of a statistical model as a set of Variables. For example, you would store the weights for a neural network as a tensor in a Variable. During training you update this tensor by running a training graph repeatedly.
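As a minimal sketch of that pattern (assuming TF 1.x; the names weights and update_op are illustrative, and the update rule here is a stand-in for a real optimizer step):

# A 2x3 weight matrix stored in a Variable, initialized randomly.
weights = tf.Variable(tf.random_normal([2, 3]), name="weights")
# A toy update: shrink the weights each step, standing in for the
# gradient update a real training op would apply.
update_op = tf.assign(weights, weights * 0.9)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(update_op)  # each run updates the stored tensor in place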
Fetches
To fetch the outputs of operations, execute the graph with a run() call on the Session object and pass in the tensors to retrieve. In the previous example we fetched the single node state, but you can also fetch multiple tensors:
input1 = tf.constant([3.0])
input2 = tf.constant([2.0])
input3 = tf.constant([5.0])
intermed = tf.add(input2, input3)
mul = tf.multiply(input1, intermed)
with tf.Session() as sess:
    result = sess.run([mul, intermed])
    print(result)
# output:
# [array([ 21.], dtype=float32), array([ 7.], dtype=float32)]
All the ops needed to produce the values of the requested tensors are run once (not once per requested tensor).
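You can observe this with a stateful op: when several fetched tensors share a dependency on one assignment, the assignment executes a single time per run() call. A small sketch, assuming TF 1.x (the names v, inc, and doubled are illustrative):

v = tf.Variable(0)
inc = tf.assign_add(v, 1)  # stateful op: increments v by one
doubled = inc * 2          # depends on the same increment
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([inc, doubled]))  # [1, 2] -- the increment ran once
    print(sess.run(v))               # 1, not 2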
Feeds
The examples above introduce tensors into the computation graph by storing them in Constants and Variables. TensorFlow also provides a feed mechanism for patching a tensor directly into any operation in the graph.
A feed temporarily replaces the output of an operation with a tensor value. You supply feed data as an argument to a run() call. The feed is only used for the run call to which it is passed. The most common use case involves designating specific operations to be "feed" operations by using tf.placeholder() to create them:
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)
with tf.Session() as sess:
    print(sess.run([output], feed_dict={input1: [7.], input2: [2.]}))
# output:
# [array([ 14.], dtype=float32)]
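Placeholders are the usual targets, but a feed can patch any feedable tensor in the graph, not just a placeholder. For example, you can temporarily override a constant for a single run() call (a minimal sketch, assuming TF 1.x):

a = tf.constant(3.0)
b = a * 2.0
with tf.Session() as sess:
    print(sess.run(b))                      # 6.0
    print(sess.run(b, feed_dict={a: 5.0}))  # 10.0 -- the feed replaces a
    print(sess.run(b))                      # 6.0 again; feeds last one call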
A placeholder() operation generates an error if you do not supply a feed for it. See the MNIST fully-connected feed tutorial (source code) for a larger-scale example of feeds.
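A placeholder can also declare a partial static shape, which TensorFlow checks against every feed. A small sketch, assuming TF 1.x (x and y are illustrative names):

x = tf.placeholder(tf.float32, shape=[None, 2])  # None: any batch size
y = tf.reduce_sum(x, axis=1)
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]}))  # [3. 7.]
    # Running sess.run(y) with no feed would raise an error.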
Complete example
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, in_size, out_size, activation_function=None):
    # Add one more layer and return the output of this layer.
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# Make up some real data: a noisy quadratic.
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# Define placeholders for the inputs to the network.
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# Add a hidden layer, then an output layer.
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)

# The loss between the prediction and the real data.
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), axis=1))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Run the flow.
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion()
plt.show()

for i in range(10000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        try:
            # Remove the previously drawn prediction line, if any.
            ax.lines.remove(lines[0])
        except Exception:
            pass
        prediction_value = sess.run(prediction, feed_dict={xs: x_data})
        lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
        plt.pause(0.1)