TensorFlow 2.0 Tutorial: Building Your Own Network Layers with Keras
The TensorFlow 2.0 tutorial series is continuously updated at: https://blog.csdn.net/qq_31456593/article/details/88606284
TensorFlow 2.0 Tutorial: Keras Quick Start
TensorFlow 2.0 Tutorial: The Keras Functional API
TensorFlow 2.0 Tutorial: Training Models with Keras
TensorFlow 2.0 Tutorial: Building Your Own Network Layers with Keras
TensorFlow 2.0 Tutorial: Saving and Serializing Keras Models
TensorFlow 2.0 Tutorial: Eager Mode
TensorFlow 2.0 Tutorial: Variables
TensorFlow 2.0 Tutorial: AutoGraph
TensorFlow 2.0 Deep Learning in Practice
TensorFlow 2.0 Tutorial: Image Classification
TensorFlow 2.0 Tutorial: Text Classification
TensorFlow 2.0 Tutorial: Overfitting and Underfitting
For the complete TensorFlow 2.0 tutorial code, see the Chinese tutorial repository tensorflow2_tutorials_chinese (stars welcome).
1. Building a simple network layer
from __future__ import absolute_import, division, print_function
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

tf.keras.backend.clear_session()

# Defining a layer means specifying its weights and the computation from inputs to outputs
class MyLayer(layers.Layer):
    def __init__(self, input_dim=32, unit=32):
        super(MyLayer, self).__init__()
        w_init = tf.random_normal_initializer()
        self.weight = tf.Variable(initial_value=w_init(
            shape=(input_dim, unit), dtype=tf.float32), trainable=True)
        b_init = tf.zeros_initializer()
        self.bias = tf.Variable(initial_value=b_init(
            shape=(unit,), dtype=tf.float32), trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.weight) + self.bias

x = tf.ones((3, 5))
my_layer = MyLayer(5, 4)
out = my_layer(x)
print(out)
tf.Tensor(
[[0.06709253 0.06818779 0.09926171 0.0179923 ]
[0.06709253 0.06818779 0.09926171 0.0179923 ]
[0.06709253 0.06818779 0.09926171 0.0179923 ]], shape=(3, 4), dtype=float32)
When a layer is built as above, it automatically tracks the weights w and b. Alternatively, we can create the weights directly with the add_weight method:
class MyLayer(layers.Layer):
    def __init__(self, input_dim=32, unit=32):
        super(MyLayer, self).__init__()
        self.weight = self.add_weight(shape=(input_dim, unit),
                                      initializer=keras.initializers.RandomNormal(),
                                      trainable=True)
        self.bias = self.add_weight(shape=(unit,),
                                    initializer=keras.initializers.Zeros(),
                                    trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.weight) + self.bias

x = tf.ones((3, 5))
my_layer = MyLayer(5, 4)
out = my_layer(x)
print(out)
tf.Tensor(
[[-0.10401802 -0.05459599 -0.08195674 0.13151655]
[-0.10401802 -0.05459599 -0.08195674 0.13151655]
[-0.10401802 -0.05459599 -0.08195674 0.13151655]], shape=(3, 4), dtype=float32)
Non-trainable weights can also be set:
class AddLayer(layers.Layer):
    def __init__(self, input_dim=32):
        super(AddLayer, self).__init__()
        self.sum = self.add_weight(shape=(input_dim,),
                                   initializer=keras.initializers.Zeros(),
                                   trainable=False)  # trainable=False marks the weight as non-trainable
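A non-trainable weight is still part of the layer's state, but it is excluded from gradient updates. A self-contained sketch of such an accumulator layer (the `call` body here is an assumption, following the accumulator pattern that AddLayer suggests):

```python
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

# Self-contained AddLayer: a non-trainable running-sum accumulator.
class AddLayer(layers.Layer):
    def __init__(self, input_dim=32):
        super(AddLayer, self).__init__()
        self.sum = self.add_weight(shape=(input_dim,),
                                   initializer=keras.initializers.Zeros(),
                                   trainable=False)  # excluded from gradient updates

    def call(self, inputs):
        # Accumulate the per-feature sum of the batch into the non-trainable variable.
        self.sum.assign_add(tf.reduce_sum(inputs, axis=0))
        return self.sum

x = tf.ones((2, 3))
layer = AddLayer(3)
print(layer(x).numpy())  # [2. 2. 2.]
print(layer(x).numpy())  # [4. 4. 4.] - the state persists between calls
print(len(layer.trainable_weights))      # 0
print(len(layer.non_trainable_weights))  # 1
```

Because `trainable=False`, an optimizer will leave `sum` untouched; the layer updates it explicitly via `assign_add` inside `call`.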