1. Feeding a list into the computation graph
#-*- coding:utf-8 -*-
import tensorflow as tf
import numpy as np
sess = tf.Session()
x_vals = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
x_data = tf.placeholder(tf.float32)
m_const = tf.constant(3.0)
my_product = tf.multiply(x_data, m_const)
for x_val in x_vals:
    print(sess.run(my_product, feed_dict={x_data: x_val}))
#write out log for tensorboard
writer = tf.summary.FileWriter("log", tf.get_default_graph())
writer.close()
#3.0
#9.0
#15.0
#21.0
#27.0
View the generated computation graph with TensorBoard:
tensorboard --logdir=path/logdir
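The per-element multiply that the loop above runs through the session can be checked outside the graph; a minimal NumPy sketch of the same computation:

```python
import numpy as np

x_vals = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
m_const = 3.0

# Multiply each fed value by the constant, as the graph does per feed_dict.
products = [float(x * m_const) for x in x_vals]
print(products)  # [3.0, 9.0, 15.0, 21.0, 27.0]
```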
2. Changing the shape of data after feeding it in
#-*- coding:utf-8 -*-
import tensorflow as tf
import numpy as np
sess = tf.Session()
my_array = np.array([[1.0, 3.0, 5.0, 7.0, 9.0],
                     [-2.0, 0.0, 2.0, 4.0, 6.0],
                     [-6.0, -3.0, 0.0, 3.0, 6.0]])
x_vals = np.array([my_array, my_array + 1])
x_data = tf.placeholder(tf.float32, shape=(3, 5))
m1 = tf.constant([[1.0],[0.0],[-1.0],[2.0],[4.0]])
m2 = tf.constant([[2.0]])
a1 = tf.constant([[10.0]])
prod1 = tf.matmul(x_data, m1)
prod2 = tf.matmul(prod1, m2)
add1 = tf.add(prod2, a1)
for x_val in x_vals:
    print(sess.run(add1, feed_dict={x_data: x_val}))
writer = tf.summary.FileWriter("log002", tf.get_default_graph())
writer.close()
#[[102.]
# [ 66.]
# [ 58.]]
#[[114.]
# [ 78.]
# [ 70.]]
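The matrix chain can be verified without a session; a NumPy sketch of `(x @ m1) @ m2 + a1` for the two fed arrays:

```python
import numpy as np

my_array = np.array([[1.0, 3.0, 5.0, 7.0, 9.0],
                     [-2.0, 0.0, 2.0, 4.0, 6.0],
                     [-6.0, -3.0, 0.0, 3.0, 6.0]])
m1 = np.array([[1.0], [0.0], [-1.0], [2.0], [4.0]])
m2 = np.array([[2.0]])
a1 = np.array([[10.0]])

for x_val in (my_array, my_array + 1):
    # (3x5 @ 5x1) @ 1x1, then a broadcast bias -> 3x1
    print((x_val @ m1) @ m2 + a1)
```

Since every element of `m1` sums to 6, adding 1 to `my_array` raises each row's product by 6, which (after the multiply by 2) shifts every output by 12, matching the two result blocks above.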
View the generated computation graph with TensorBoard.
3. TensorFlow's image functions operate on four-dimensional tensors; the four dimensions are: batch size, height, width, and color channels.
Out = (W - F + 2P) / S + 1, where W is the input size, F is the filter size, P is the padding, and S is the stride.
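As a quick arithmetic check of the formula (using zero padding here, so P = 0), the 4×4 input and 2×2 filter with stride 2 used in the code below give an output size of 2:

```python
def conv_output_size(W, F, P, S):
    """Out = (W - F + 2P) / S + 1 for one spatial dimension."""
    return (W - F + 2 * P) // S + 1

print(conv_output_size(4, 2, 0, 2))  # 2
print(conv_output_size(5, 3, 1, 1))  # 5 ("same"-sized output)
```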
#-*- coding:utf-8 -*-
import tensorflow as tf
import numpy as np
sess = tf.Session()
x_shape = [1, 4, 4, 1]
x_val = np.random.uniform(size=x_shape)
x_data = tf.placeholder(tf.float32, shape=x_shape)
my_filter = tf.constant(0.25, shape=[2, 2, 1, 1])
my_strides = [1, 2, 2, 1]
mov_avg_layer = tf.nn.conv2d(x_data, my_filter, my_strides, padding='SAME', name='Moving_Avg_Window')
def custom_layer(input_matrix):
    input_matrix_squeezed = tf.squeeze(input_matrix)
    A = tf.constant([[1.0, 2.0], [-1.0, 3.0]])
    b = tf.constant(1.0, shape=[2, 2])
    temp1 = tf.matmul(A, input_matrix_squeezed)
    temp = tf.add(temp1, b)
    return tf.sigmoid(temp)

with tf.name_scope('Custom_Layer') as scope:
    custom_layer1 = custom_layer(mov_avg_layer)
print(sess.run(custom_layer1, feed_dict={x_data:x_val}))
writer = tf.summary.FileWriter('log003', tf.get_default_graph())
writer.close()
#[[0.91430384 0.9169417 ]
# [0.8906263 0.88187516]]
View the generated computation graph with TensorBoard (the expanded graph is shown on the right).
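To see what the whole pipeline computes, here is a NumPy sketch of the 2×2 moving-average window (the conv2d with a constant 0.25 filter, stride 2) followed by the custom layer sigmoid(A @ x + b); the seed and helper are illustrative, not from the original code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(4, 4))  # one channel of the 1x4x4x1 input

# 2x2 moving average with stride 2: each output is the mean of a 2x2 block.
pooled = np.array([[x[i:i + 2, j:j + 2].mean() for j in (0, 2)]
                   for i in (0, 2)])

# Custom layer: sigmoid(A @ squeeze(x) + b) on the 2x2 pooled result.
A = np.array([[1.0, 2.0], [-1.0, 3.0]])
b = np.full((2, 2), 1.0)
out = 1.0 / (1.0 + np.exp(-(A @ pooled + b)))
print(out.shape)  # (2, 2)
```

The exact numbers depend on the random input, but the output is always a 2×2 matrix with every entry squashed into (0, 1) by the sigmoid, consistent with the printed result above.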
=====================================================================
Note that TensorFlow 2.0+ differs from early versions: the tf.layers module appears to have been removed (it no longer shows up in the API). This looks like deliberate streamlining, since there is little need for many functions with duplicated functionality.
The layer-related methods provided by tf.nn are (compare with the former tf.layers):
- avg_pool1d(…): 1-D average pooling
- avg_pool2d(…): 2-D average pooling
- avg_pool3d(…): 3-D average pooling
- batch_normalization(…): batch normalization
- conv1d(…): 1-D convolution
- conv1d_transpose(…): 1-D transposed convolution
- conv2d(…): 2-D convolution
- conv2d_transpose(…): 2-D transposed convolution
- conv3d(…): 3-D convolution
- conv3d_transpose(…): 3-D transposed convolution
- dropout(…): dropout
- max_pool1d(…): 1-D max pooling
- max_pool2d(…): 2-D max pooling
- max_pool3d(…): 3-D max pooling
- separable_conv2d(…): 2-D depthwise separable convolution
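As an illustration of what one of these, tf.nn.max_pool2d, computes on a single channel, here is a simplified NumPy sketch (assuming a 2×2 window, stride 2, and valid padding; the helper name is ours):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D array (valid padding)."""
    h, w = x.shape
    return np.array([[x[i:i + 2, j:j + 2].max() for j in range(0, w, 2)]
                     for i in range(0, h, 2)])

x = np.array([[1.0, 2.0, 5.0, 6.0],
              [3.0, 4.0, 7.0, 8.0],
              [9.0, 10.0, 13.0, 14.0],
              [11.0, 12.0, 15.0, 16.0]])
print(max_pool_2x2(x))
# [[ 4.  8.]
#  [12. 16.]]
```

Each output element is the maximum of one non-overlapping 2×2 block, which is exactly the reduction the real tf.nn.max_pool2d applies per channel.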