A few commonly used methods:
1. tf.matmul

matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)
Multiplies matrix a by matrix b, producing a * b.

The inputs must be matrices (or tensors of rank > 2, representing batches of matrices), with matching inner dimensions, possibly after transposition.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.
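As an illustration of the transpose_a flag (a minimal sketch using NumPy, whose matmul follows the same mathematical semantics, rather than a live TensorFlow session):

```python
import numpy as np

# a has shape (3, 2); transposing it on the fly gives shape (2, 3),
# whose inner dimension 3 then matches b's first dimension.
a = np.array([[1., 2.], [3., 4.], [5., 6.]])   # shape (3, 2)
b = np.array([[1., 0.], [0., 1.], [1., 1.]])   # shape (3, 2)

# Equivalent of tf.matmul(a, b, transpose_a=True): (2, 3) x (3, 2) -> (2, 2)
product = np.matmul(a.T, b)
print(product)
```

Without transpose_a=True the shapes (3, 2) x (3, 2) would not match, and the multiplication would raise an error.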
2. tf.nn

The activation ops provide different types of nonlinearities for use in neural networks. These include smooth nonlinearities (sigmoid, tanh, elu, softplus, and softsign), continuous but not everywhere differentiable functions (relu, relu6, crelu and relu_x), and random regularization (dropout).
It provides several different nonlinear activation functions.
3. tf.nn.relu

relu(features, name=None)

Computes the ReLU activation function: f(x) = max(x, 0).
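The element-wise rule above can be sketched in a few lines (a NumPy illustration of the formula, not the TensorFlow op itself):

```python
import numpy as np

def relu(features):
    # f(x) = max(x, 0), applied element-wise: negatives are clamped to 0
    return np.maximum(features, 0)

result = relu(np.array([-2., -0.5, 0., 1., 3.]))
print(result)  # negative entries become 0, non-negative entries pass through
```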
4. tf.sigmoid

sigmoid(x, name=None)

Computes the sigmoid of x element-wise. Specifically, y = 1 / (1 + exp(-x)).
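A minimal NumPy sketch of the same formula, showing that sigmoid(0) = 0.5 and that the function is symmetric around that point:

```python
import numpy as np

def sigmoid(x):
    # y = 1 / (1 + exp(-x)), applied element-wise
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-5.0, 0.0, 5.0])
y = sigmoid(x)
print(y)  # values squashed into the open interval (0, 1)
```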
5. tf.tanh

tanh(x, name=None)

Computes the hyperbolic tangent of x element-wise: f(x) = (1 - e^(-2x)) / (1 + e^(-2x)).
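To check that this formula really is the hyperbolic tangent, a NumPy sketch can compare it against np.tanh directly:

```python
import numpy as np

def tanh_formula(x):
    # f(x) = (1 - e^(-2x)) / (1 + e^(-2x)), applied element-wise
    return (1.0 - np.exp(-2.0 * x)) / (1.0 + np.exp(-2.0 * x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
matches = np.allclose(tanh_formula(x), np.tanh(x))
print(matches)  # the formula agrees with the built-in tanh
```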