1. Most TensorFlow APIs are usable with eager execution.
2. Most layers take the number of output dimensions/channels as their first argument.
- The number of input dimensions is often unnecessary, as it can be inferred
the first time the layer is used, but it can be specified manually, which is
useful in some complex models.
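For example, a Dense layer only needs its output size up front; a minimal sketch:

```python
import tensorflow as tf

# The layer only needs the number of output units up front.
layer = tf.keras.layers.Dense(10)

# The input dimension (3 here) is inferred on first use.
y = layer(tf.zeros([5, 3]))  # output shape: (5, 10)

# Alternatively, specify the input shape manually by building the layer.
layer2 = tf.keras.layers.Dense(10)
layer2.build(input_shape=(None, 3))
```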
4. You can inspect all variables in a layer using layer.variables, and the
trainable ones using layer.trainable_variables. For a fully-connected layer
these include layer.kernel and layer.bias.
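A short sketch of inspecting these attributes (the shapes shown assume a 4-feature input):

```python
import tensorflow as tf

dense = tf.keras.layers.Dense(10)
dense(tf.zeros([2, 4]))  # call once so the variables are created

# All variables: the kernel (weight matrix) and the bias vector.
print(len(dense.variables))            # 2
print(len(dense.trainable_variables))  # 2

# They are also exposed as named attributes.
print(dense.kernel.shape)  # (4, 10)
print(dense.bias.shape)    # (10,)
```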
5. The best way to implement your own layer is to extend the
tf.keras.layers.Layer class and implement:
   1. __init__, where you can do all input-independent initialization
   2. build, where you know the shapes of the input tensors and can do the
      rest of the initialization
   3. call, where you do the forward computation.
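The three methods can be sketched as a minimal custom layer (MyDenseLayer is an illustrative name; it omits a bias for brevity):

```python
import tensorflow as tf

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        # Input-independent initialization.
        super().__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        # The input shape is known here, so the kernel can be created.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(int(input_shape[-1]), self.num_outputs))

    def call(self, inputs):
        # Forward computation.
        return tf.matmul(inputs, self.kernel)

my_layer = MyDenseLayer(10)
out = my_layer(tf.zeros([5, 4]))  # build runs on this first call, then call
```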
6. Note that you do not have to wait until build is called to create your
variables; you can also create them in __init__. However, the advantage of
creating them in build is that it enables late variable creation based on the
shape of the inputs the layer will operate on. Creating variables in __init__,
on the other hand, means that the shapes required to create the variables must
be specified explicitly.
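For contrast, a hypothetical layer that creates its variable in __init__ must be told the input dimension explicitly (EagerDense is an illustrative name):

```python
import tensorflow as tf

class EagerDense(tf.keras.layers.Layer):
    def __init__(self, input_dim, num_outputs):
        super().__init__()
        # The full shape must be given here; nothing can be inferred yet.
        self.kernel = self.add_weight(
            name="kernel", shape=(input_dim, num_outputs))

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

eager_layer = EagerDense(input_dim=4, num_outputs=10)
eager_out = eager_layer(tf.zeros([2, 4]))  # shape (2, 10)
```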
- A keras.Model adds training, evaluation, and saving APIs: Model.fit,
Model.evaluate, Model.save.
- In addition to tracking variables, a keras.Model also tracks its internal
layers, making them easier to inspect.
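A minimal sketch of composing layers inside a keras.Model subclass (TwoLayerModel is an illustrative name):

```python
import tensorflow as tf

class TwoLayerModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Layers assigned as attributes are tracked automatically.
        self.dense1 = tf.keras.layers.Dense(8, activation="relu")
        self.dense2 = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense2(self.dense1(inputs))

model = TwoLayerModel()
model(tf.zeros([2, 4]))  # first call builds every layer

print(len(model.layers))     # 2: both Dense layers are tracked
print(len(model.variables))  # 4: kernel and bias of each Dense layer
```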