Tensor: In TensorFlow, the tensor data structure represents all data; tensors are what flow between operations in the computation graph. A tensor is an n-dimensional array or list, and every tensor has a static type, a rank, and a shape.
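For example, a constant already carries its static type, rank, and shape (a minimal sketch against the TF 1.x API; the name a is just for illustration):
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # an n-dim array; here rank 2
print(a.dtype)        # float32, the static type
print(a.get_shape())  # (2, 2), the shape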
Variables: maintain state across executions of the graph. The parameters of a statistical model are typically represented as a set of variables.
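The counter from the TensorFlow basic-usage guide shows this state-keeping nicely (sketched from memory, so treat it as illustrative):
state = tf.Variable(0, name="counter")                    # state persists across runs
update = tf.assign(state, tf.add(state, tf.constant(1)))  # op that increments the counter
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())  # variables must be initialized before use
    for _ in range(3):
        sess.run(update)
    print(sess.run(state))  # 3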
Fetch: retrieve the output of operations.
Feed: temporarily substitutes a tensor for the output of any operation in the graph; you can patch any operation in the graph by directly inserting a tensor.
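Both show up in one small sketch (input1 and input2 are illustrative names): passing a list to sess.run fetches several outputs in one execution, and feed_dict patches the placeholder values in:
input1 = tf.placeholder("float")
input2 = tf.placeholder("float")
add = tf.add(input1, input2)
mul = input1 * input2
with tf.Session() as sess:
    result = sess.run([add, mul], feed_dict={input1: [7.0], input2: [2.0]})  # fetch two ops at once
    print(result)  # [9.0] and [14.0]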
Let's get familiar with these operations by working through MNIST:
# Placeholders for the input images (784 = 28*28 pixels) and the one-hot labels
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
# Model parameters, initialized to zeros
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
sess = tf.InteractiveSession()  # makes itself the default session, so op.run()/tensor.eval() work below
sess.run(tf.initialize_all_variables())
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
for i in range(1000):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
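With the graph above, evaluating on the test set is just one more fetch, feeding in the test images and labels (this assumes mnist was loaded with input_data.read_data_sets; the official tutorial reports around 91% for this model):
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))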
That is all the TensorFlow machinery needed for softmax regression, which is quite concise. Next, let's try implementing the same task with a convolutional network.
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)  # slightly positive to avoid dead ReLUs
    return tf.Variable(initial)
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):  # "2*2" is not a valid Python identifier
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
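These two helpers also explain the 7*7*64 that appears below: SAME padding with stride 1 keeps the 28x28 spatial size through each convolution, and every 2x2 max pool halves it, so two pooling layers give 28 -> 14 -> 7. A quick static shape check (x_demo and w_demo are throwaway names, not part of the model):
x_demo = tf.placeholder("float", shape=[None, 28, 28, 1])
w_demo = weight_variable([5, 5, 1, 32])
print(conv2d(x_demo, w_demo).get_shape())                # (?, 28, 28, 32)
print(max_pool_2x2(conv2d(x_demo, w_demo)).get_shape())  # (?, 14, 14, 32)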
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])  # back to a 28x28 image with 1 channel
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)  # first convolutional layer; the second is built the same way
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7*7*64, 1024])  # fully connected layer
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])  # flatten the second pooling layer's output
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)  # dropout layer. Don't we need to set the dropout value here? No: keep_prob is a placeholder, so the keep probability is fed in via feed_dict at run time (see the training sketch at the end)
W_fc2 = weight_variable([1024, 10])  # tf.Variable([1024, 10]) would store the literal list, not a 1024x10 weight matrix
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
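Training follows the same pattern as the softmax example; the only new twist is that keep_prob now appears in every feed_dict, which answers the dropout question above. A sketch along the lines of the official tutorial (the AdamOptimizer and the 1e-4 learning rate are its choices, not something forced by the model):
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())  # Adam creates its own variables, so initialize after minimize()
for i in range(1000):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})  # drop half the units while training
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))  # keep everything at test time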