5. Mathematical Operations
Operation types
- element-wise
- + - * / // %
- matrix-wise
- @, matmul
- dim-wise
- reduce_mean / reduce_max / reduce_min / reduce_sum
I. Element-wise operations
+ - * / % //
In [47]: a=tf.ones([2,2])
In [49]: b=tf.fill([2,2],2.)
In [50]: a+b,a-b,a*b,a/b # note: the dtypes of both operands must match
Out[50]:
(<tf.Tensor: id=139, shape=(2, 2), dtype=float32, numpy=
array([[3., 3.],
[3., 3.]], dtype=float32)>,
<tf.Tensor: id=140, shape=(2, 2), dtype=float32, numpy=
array([[-1., -1.],
[-1., -1.]], dtype=float32)>,
<tf.Tensor: id=141, shape=(2, 2), dtype=float32, numpy=
array([[2., 2.],
[2., 2.]], dtype=float32)>,
<tf.Tensor: id=142, shape=(2, 2), dtype=float32, numpy=
array([[0.5, 0.5],
[0.5, 0.5]], dtype=float32)>)
In [51]: b//a,b%a
Out[51]:
(<tf.Tensor: id=143, shape=(2, 2), dtype=float32, numpy=
array([[2., 2.],
[2., 2.]], dtype=float32)>,
<tf.Tensor: id=144, shape=(2, 2), dtype=float32, numpy=
array([[0., 0.],
[0., 0.]], dtype=float32)>)
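The dtype note above matters in practice: element-wise operators require both operands to have the same dtype, and mixing an integer tensor with a float tensor raises an error instead of converting silently. A minimal sketch using tf.cast:

```python
import tensorflow as tf

a = tf.ones([2, 2])         # dtype float32
b = tf.fill([2, 2], 2)      # dtype int32 (integer fill value)

# a + b would raise an error because float32 and int32 don't match,
# so cast the integer tensor to float32 first.
c = a + tf.cast(b, tf.float32)   # [[3., 3.], [3., 3.]]
```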
tf.math.log uses the natural base e
In [57]: a=tf.fill([2,2],3.)
In [58]: tf.math.log(a)
Out[58]:
<tf.Tensor: id=160, shape=(2, 2), dtype=float32, numpy=
array([[1.0986123, 1.0986123],
[1.0986123, 1.0986123]], dtype=float32)>
There is no log API for other bases (no log2 or log10);
to take a base-2 or base-10 log, use the change-of-base formula:
In [62]: tf.math.log(8.)/tf.math.log(2.) # base-2 log; the arguments must be float
Out[62]: <tf.Tensor: id=168, shape=(), dtype=float32, numpy=3.0>
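The same change-of-base trick works for base 10 (this sketch just applies the formula above, it is not a dedicated API):

```python
import tensorflow as tf

# log10(1000) = ln(1000) / ln(10), approximately 3.0 in float32
x = tf.math.log(1000.) / tf.math.log(10.)
```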
tf.exp(x)
Computes the element-wise exponential y = e^x
In [64]: a
Out[64]:
<tf.Tensor: id=159, shape=(2, 2), dtype=float32, numpy=
array([[3., 3.],
[3., 3.]], dtype=float32)>
In [65]: tf.exp(a)
Out[65]:
<tf.Tensor: id=172, shape=(2, 2), dtype=float32, numpy=
array([[20.085537, 20.085537],
[20.085537, 20.085537]], dtype=float32)>
tf.pow(x, y) raises to a power && tf.sqrt(x) takes the square root
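A minimal sketch of both; note that the ** operator is equivalent to tf.pow for tensors:

```python
import tensorflow as tf

b = tf.fill([2, 2], 2.)

p = tf.pow(b, 3)    # element-wise cube: [[8., 8.], [8., 8.]]
p2 = b ** 3         # same result via the ** operator
s = tf.sqrt(b)      # element-wise square root: sqrt(2) in every cell
```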
II. Matrix-wise operations
@ and matmul for matrix multiplication
[b, 3, 4] @ [b, 4, 5] = [b, 3, 5]: the multiplication runs in parallel over the batch, multiplying b [3, 4] matrices with b [4, 5] matrices
In [76]: a=tf.ones([2,3])
In [77]: b=tf.fill([3,4],2.)
In [78]: a@b
Out[78]:
<tf.Tensor: id=195, shape=(2, 4), dtype=float32, numpy=
array([[6., 6., 6., 6.],
[6., 6., 6., 6.]], dtype=float32)>
In [79]: tf.matmul(a,b)
Out[79]:
<tf.Tensor: id=196, shape=(2, 4), dtype=float32, numpy=
array([[6., 6., 6., 6.],
[6., 6., 6., 6.]], dtype=float32)>
In [72]: m=tf.ones([4,2,3])
In [73]: n=tf.fill([4,3,5],2.)
In [74]: m@n
Out[74]:
<tf.Tensor: id=187, shape=(4, 2, 5), dtype=float32, numpy=
array([[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]],
[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]],
[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]],
[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]]], dtype=float32)>
In [75]: tf.matmul(m,n)
Out[75]:
<tf.Tensor: id=188, shape=(4, 2, 5), dtype=float32, numpy=
array([[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]],
[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]],
[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]],
[[6., 6., 6., 6., 6.],
[6., 6., 6., 6., 6.]]], dtype=float32)>
III. Dim-wise operations
reduce_mean / reduce_max / reduce_min / reduce_sum
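A minimal sketch of the reduce ops: with no axis they reduce over all elements; with an axis they collapse that dimension, and keepdims=True retains it with size 1:

```python
import tensorflow as tf

a = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])              # shape [2, 3]

m = tf.reduce_mean(a)                        # over all elements -> 3.5
s = tf.reduce_sum(a, axis=0)                 # collapse rows -> [5., 7., 9.]
mx = tf.reduce_max(a, axis=1)                # per-row max -> [3., 6.]
mn = tf.reduce_min(a, axis=1, keepdims=True) # keepdims -> shape [2, 1]
```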
IV. With broadcasting
To multiply [4, 2, 3] by [3, 5],
first broadcast [3, 5] to [4, 3, 5], then multiply:
In [80]: a=tf.random.normal([4,2,3])
In [81]: b=tf.random.normal([3,5])
In [82]: a.shape,b.shape
Out[82]: (TensorShape([4, 2, 3]), TensorShape([3, 5]))
In [83]: bb=tf.broadcast_to(b,[4,3,5])
In [85]: a@bb
Out[85]:
<tf.Tensor: id=212, shape=(4, 2, 5), dtype=float32, numpy=
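The batched product above can be verified slice by slice: each batch element of a @ bb is just an ordinary [2, 3] @ [3, 5] matmul against the original b. A minimal sketch of that check:

```python
import tensorflow as tf

tf.random.set_seed(0)
a = tf.random.normal([4, 2, 3])
b = tf.random.normal([3, 5])

bb = tf.broadcast_to(b, [4, 3, 5])  # replicate b along a new batch dim
out = a @ bb                        # shape [4, 2, 5]

# Each batch slice equals an ordinary [2, 3] @ [3, 5] product.
per_batch = tf.stack([a[i] @ b for i in range(4)])
```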
V. Recap
Implement: y = x@W + b
In [86]: x=tf.ones([4,2])
In [88]: W=tf.ones([2,1])
In [89]: b=tf.constant(0.1)
In [90]: x@W+b
Out[90]:
<tf.Tensor: id=221, shape=(4, 1), dtype=float32, numpy=
array([[2.1],
[2.1],
[2.1],
[2.1]], dtype=float32)>
Implement: out = relu(x@W + b)
In [94]: tf.nn.relu(x@W+b)
Out[94]:
<tf.Tensor: id=227, shape=(4, 1), dtype=float32, numpy=
array([[2.1],
[2.1],
[2.1],
[2.1]], dtype=float32)>
tf.nn.relu()
tf.nn.relu() sets any input value below 0 to 0 and passes values above 0 through unchanged
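The clipping behavior is easiest to see on a tensor that mixes signs; a minimal sketch:

```python
import tensorflow as tf

x = tf.constant([-2., -0.5, 0., 1., 3.])
y = tf.nn.relu(x)   # negatives become 0: [0., 0., 0., 1., 3.]
```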