(2) Neural Networks: TensorFlow Function Analysis

Environment

  • Five functions per article
  • TensorFlow 1.12.0
  • Ubuntu 18.04
  • numpy 1.15.4

0 Data Types

[Data Type List]

| No. | Category | Description |
| --- | --- | --- |
| 1 | Floating point | tf.float32, tf.float64 |
| 2 | Integer | tf.int8, tf.int16, tf.int32, tf.int64, tf.uint8 |
| 3 | Boolean | tf.bool |
| 4 | Complex | tf.complex64, tf.complex128 |
| 5 | dtype | Type declaration parameter, e.g. dtype=tf.float32 |
import tensorflow as tf
import numpy as np
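
A minimal sketch (the values are illustrative) of declaring tensors with an explicit dtype and inspecting it, under the TF 1.12 environment listed above:

import tensorflow as tf

# declare tensors with explicit dtypes
a = tf.constant([1, 2, 3], dtype=tf.int32)               # integer tensor
b = tf.constant([1.0, 2.0, 3.0], dtype=tf.float32)       # floating-point tensor
mask = tf.constant([True, False, True], dtype=tf.bool)   # boolean tensor

print(a.dtype)     # <dtype: 'int32'>
print(b.dtype)     # <dtype: 'float32'>
print(mask.dtype)  # <dtype: 'bool'>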

1 tf.argmax(input, axis=None, name=None, dimension=None)

Function: returns the index of the maximum value along a row or a column.
[Parameter List]

| No. | Parameter | Description |
| --- | --- | --- |
| 1 | input | Input tensor |
| 2 | axis | 0: index of the maximum of each column; 1: index of the maximum of each row |
| 3 | name | Operation name |
| 4 | dimension | Same meaning as axis (0 for columns, 1 for rows); axis takes precedence by default |
  • Demo
import tensorflow as tf
# 4x4 variable initialized from a truncated normal distribution (stddev 0.1)
v1 = tf.get_variable("v1", shape=[4, 4], initializer=tf.truncated_normal_initializer(stddev=0.1))
# index of the maximum in each column (compare along axis 0)
outColumn = tf.argmax(v1, axis=0)
# index of the maximum in each row (axis=-1 is the last axis, i.e. axis 1 here)
outRow = tf.argmax(v1, axis=-1)
with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	print("v1 value: {}".format(sess.run(v1)))
	print("Output for column compare: {}".format(sess.run(outColumn)))
	print("Output for row compare: {}".format(sess.run(outRow)))
  • Result
v1 value: [[ 0.00715172  0.02058311 -0.16516998  0.05045537]
 [-0.00027448 -0.02315778  0.03688991 -0.03508867]
 [ 0.04394269 -0.06669991  0.0056747  -0.17623347]
 [ 0.09381374 -0.1411826   0.04640585 -0.09191368]]
Output for column compare: [3 0 3 0]
Output for row compare: [3 2 0 0]
  • Analysis
    (1) Generate random data v1 with shape 4x4:

| Row\Col | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| 0 | 0.00715172 | 0.02058311 | -0.16516998 | 0.05045537 |
| 1 | -0.00027448 | -0.02315778 | 0.03688991 | -0.03508867 |
| 2 | 0.04394269 | -0.06669991 | 0.0056747 | -0.17623347 |
| 3 | 0.09381374 | -0.1411826 | 0.04640585 | -0.09191368 |

    (2) Column-wise comparison (axis=0): 3 0 3 0, i.e. the row index of the maximum in each column;
    (3) Row-wise comparison (axis=-1): 3 2 0 0, i.e. the column index of the maximum in each row.
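
As a quick cross-check (a minimal sketch; the small matrix below is made up), numpy's argmax follows the same axis convention:

import numpy as np

m = np.array([[1.0, 5.0],
              [3.0, 2.0]])
print(np.argmax(m, axis=0))  # [1 0]: row index of the max in each column
print(np.argmax(m, axis=1))  # [1 0]: column index of the max in each row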

2 tf.cast(x, dtype, name=None)

Function: converts a tensor to a different data type.
[Parameter List]

| No. | Parameter | Description |
| --- | --- | --- |
| 1 | x | Input tensor |
| 2 | dtype | Target data type |
| 3 | name | Operation name |
  • Demo
import tensorflow as tf
v1 = tf.Variable([1.0, 2.0], dtype=tf.float32)
out = tf.cast(v1, tf.int32)
print("Source data type: {}".format(v1.dtype))
print("Trans data type: {}".format(out.dtype))
  • Result
Source data type: <dtype: 'float32_ref'>
Trans data type: <dtype: 'int32'>
  • Analysis
    The float32 tensor is converted to int32.
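
A minimal sketch (using the same TF 1.x session style as above) showing that casting float to int drops the fractional part rather than rounding:

import tensorflow as tf

v = tf.constant([1.2, 2.7, -3.9], dtype=tf.float32)
out = tf.cast(v, tf.int32)  # fractional part is truncated toward zero
with tf.Session() as sess:
    print(sess.run(out))    # expected: [ 1  2 -3]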

3 tf.reshape(tensor, shape, name=None)

Function: changes the shape of a tensor.

3.1 Parameters

[Parameter List]

| No. | Parameter | Description |
| --- | --- | --- |
| 1 | tensor | Input tensor |
| 2 | shape | Target shape |
| 3 | name | Operation name |

3.2 Demo

3.2.1 Specifying explicit dimensions

import tensorflow as tf

# random data with shape 2x4
v1 = tf.get_variable("v1", shape=[2, 4], initializer=tf.truncated_normal_initializer(stddev=0.1))
# target shape: 1x8
v2 = tf.reshape(v1, [1, 8])
with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	print("Source data: {}".format(sess.run(v1)))
	print("Source data shape: {}".format(v1.shape))
	print("Reshape data shape: {}".format(v2.shape))
  • Result
Source data: [[-0.13566928 -0.00425626  0.03604111  0.18139043]
 [-0.06290253  0.04224857 -0.00818029 -0.00809968]]
Source data shape: (2, 4)
Reshape data shape: (1, 8)
  • Analysis
    The 2x4 tensor is reshaped into a 1x8 tensor.

3.2.2 Using -1 in place of an explicit dimension

import tensorflow as tf

def reshape_negative():
    # 18 elements reshaped to [2, -1]: -1 is inferred as 9
    data = tf.constant([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6], dtype=tf.float32)
    reshape = tf.reshape(data, [2, -1])
    with tf.Session() as sess:
        data, reshape = sess.run([data, reshape])
        print("shape of data: {}".format(data.shape))
        print("shape of reshape data: {}".format(reshape.shape))

def reshape_negative_1():
    # 18 elements reshaped to [-1, 6]: -1 is inferred as 3
    data = tf.constant([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6], dtype=tf.float32)
    reshape = tf.reshape(data, [-1, 6])
    with tf.Session() as sess:
        data, reshape = sess.run([data, reshape])
        print("shape of data: {}".format(data.shape))
        print("shape of reshape data: {}".format(reshape.shape))

def reshape_negative_2():
    # 18 elements reshaped to [2, -1, 3]: -1 is inferred as 3
    data = tf.constant([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6], dtype=tf.float32)
    reshape = tf.reshape(data, [2, -1, 3])
    with tf.Session() as sess:
        data, reshape = sess.run([data, reshape])
        print("shape of data: {}".format(data.shape))
        print("shape of reshape data: {}".format(reshape.shape))

reshape_negative()
reshape_negative_1()
reshape_negative_2()
  • Result
shape of data: (18,)
shape of reshape data: (2, 9)
shape of data: (18,)
shape of reshape data: (3, 6)
shape of data: (18,)
shape of reshape data: (2, 3, 3)
  • Analysis
    When shape contains a -1, that dimension is inferred from the remaining element count. The source data has 18 elements, so with [2, -1] the -1 is inferred as 9; the three cases are broken down in the table below:

| No. | Source shape | shape argument | Result shape |
| --- | --- | --- | --- |
| 1 | (18,) | [2, -1] | 2x9, -1 is inferred as 9 |
| 2 | (18,) | [-1, 6] | 3x6, -1 is inferred as 3 |
| 3 | (18,) | [2, -1, 3] | 2x3x3, -1 is inferred as 3 |
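
As a side note (a minimal sketch, not part of the original demos): at most one dimension may be -1, and the total element count must be divisible by the product of the other dimensions.

import tensorflow as tf

data = tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float32)  # 6 elements
inferred = tf.reshape(data, [3, -1])  # -1 is inferred as 2
with tf.Session() as sess:
    print(sess.run(inferred).shape)   # (3, 2)
# tf.reshape(data, [4, -1]) would fail: 6 is not divisible by 4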

4 tf.equal(x, y)

Function: compares two tensors (matrices or vectors) x and y element-wise; each position returns True if the elements are equal, otherwise False.

  • Demo
import tensorflow as tf

v1 = tf.constant([1.0, 2.0, 3.0])
v2 = tf.constant([1.0, 2.0, 4.0])
out = tf.equal(v1, v2)  # element-wise comparison
with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	print("Compare result: {}".format(sess.run(out)))
  • Result
Compare result: [ True  True False]
  • Analysis
    Equal elements yield True; unequal elements yield False.
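
A common use is computing classification accuracy by combining tf.equal with tf.argmax and tf.cast from the earlier sections; a minimal sketch with made-up logits and labels:

import tensorflow as tf

# hypothetical logits for 3 samples over 2 classes, and the true labels
logits = tf.constant([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = tf.constant([1, 0, 0])

pred = tf.argmax(logits, axis=1)                     # predicted class per sample (int64)
correct = tf.equal(pred, tf.cast(labels, tf.int64))  # element-wise comparison
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    print(sess.run(accuracy))  # 2 of 3 predictions match -> ~0.6667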

5 tf.control_dependencies(control_inputs)

Function: controls the execution order within the computation graph.
Note: the operations in control_inputs must finish before the statements defined inside the block can run.

  • Demo
import tensorflow as tf
# train_step and variables_averages_op are assumed to be defined earlier,
# e.g. an optimizer step and a moving-average update op
with tf.control_dependencies([train_step, variables_averages_op]):
	# no-op that only completes after both dependencies have run
	train_op = tf.no_op(name='train')
  • Analysis
    TensorFlow runs only the ops that the fetched node depends on, not the statements in the order they were written. During training the data is processed batch by batch (BATCH_SIZE samples at a time), so the previously defined variables (e.g. the weights and their moving averages) have to be refreshed on every step. tf.control_dependencies guarantees that the ops in control_inputs have finished, i.e. those variables have been refreshed, before the ops defined inside the block execute, which keeps the computation order correct.
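
A self-contained sketch (with made-up variable names) of the same pattern: fetching the no-op forces the dependency to run first.

import tensorflow as tf

counter = tf.Variable(0, dtype=tf.int32)
increment = tf.assign_add(counter, 1)   # op that refreshes the variable

with tf.control_dependencies([increment]):
    # no-op that only completes after `increment` has run,
    # mirroring the train_op pattern in the demo above
    train_op = tf.no_op(name='train')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)         # running train_op triggers the increment first
    print(sess.run(counter))   # 1
    sess.run(train_op)
    print(sess.run(counter))   # 2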

6 tf.nn.conv2d()

Function: 2-D convolution; both the input and the filter (sliding window) are 4-D tensors.

tf.nn.conv2d(
    input,
    filter,
    strides,
    padding,
    use_cudnn_on_gpu=True,
    data_format='NHWC',
    dilations=[1, 1, 1, 1],
    name=None
)

[Parameter List]

| No. | Parameter | Description |
| --- | --- | --- |
| 1 | input | Input tensor of type half, bfloat16, float32 or float64; a 4-D tensor whose layout is given by data_format (see below) |
| 2 | filter | Filter (sliding window) tensor, 4-D: [filter_height, filter_width, in_channels, out_channels] |
| 3 | strides | List of 4 ints; the order of the elements follows data_format |
| 4 | padding | String: SAME pads the input automatically, VALID applies no padding |
| 5 | use_cudnn_on_gpu | Optional bool, default True: use cuDNN on the GPU |
| 6 | data_format | Optional string, default 'NHWC' ([batch, height, width, channels]); the alternative 'NCHW' means [batch, channels, height, width] |
| 7 | dilations | List of 4 ints, default [1, 1, 1, 1] |
| 8 | name | Optional operation name |
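
A minimal runnable sketch (shapes chosen only for illustration) showing how the padding mode affects the output shape:

import tensorflow as tf

# NHWC input: batch=1, height=5, width=5, channels=1
x = tf.random_normal([1, 5, 5, 1])
# filter: 3x3 window, 1 input channel, 2 output channels
w = tf.random_normal([3, 3, 1, 2])

same = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')    # output 1x5x5x2
valid = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='VALID')  # output 1x3x3x2

with tf.Session() as sess:
    print(sess.run(same).shape)   # (1, 5, 5, 2)
    print(sess.run(valid).shape)  # (1, 3, 3, 2)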



Updates in progress.