Input Layer: InputLayer
```
import tensorflow as tf
from tensorflow.keras import layers

class InputLayer(layers.Layer):
    def __init__(self, units=784):
        super(InputLayer, self).__init__()
        self.units = units

    def call(self, inputs):
        return inputs
```
The InputLayer class has no real functionality; it simply passes its input straight through as output. Keras recommends subclassing the Layer class and implementing the `__init__()` function along with the parent class's `build()` and `call()` functions. `__init__()` holds the layer's configuration (for example, the number of units); `build()` creates the layer's state (its weights), and is called once the shape of the input is known; `call()` performs the layer's computation on the input tensors.
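To see this lifecycle in action, here is an illustrative snippet (not part of the original code) using the built-in `layers.Dense`, which follows exactly the pattern described above: construction stores configuration only, and the first call triggers `build()` before `call()` runs.

```
dense = layers.Dense(500)
print(dense.built)                     # False: __init__ only stored the configuration
x = tf.random.normal(shape=(32, 784))  # a dummy batch of 32 flattened inputs
y = dense(x)                           # first call runs build() to create the weights,
                                       # then call() to compute the output
print(dense.built)                     # True
print(y.shape)                         # (32, 500)
```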
Hidden Layer: HiddenLayer
```
class HiddenLayer(layers.Layer):
    # Constructor
    def __init__(self, units):
        super(HiddenLayer, self).__init__()
        self.units = units

    # Override the build() function
    def build(self, input_shape):
        # Initialize the hidden layer's weights
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_shape[-1], self.units), dtype='float32'),
            trainable=True)
        # Initialize the hidden layer's bias
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(
            initial_value=b_init(shape=(self.units,), dtype='float32'),
            trainable=True)
```
You can also add the weights and biases with the Layer class's add_weight() function, whose signature is shown below (tf.Variable and add_weight are interchangeable here; pick whichever you prefer):
```
add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=tf.VariableSynchronization.AUTO, aggregation=tf.compat.v1.VariableAggregation.NONE, **kwargs)
```
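For comparison, here is a sketch of the same `build()` written with `add_weight()` instead of raw `tf.Variable` objects (an illustrative equivalent of the version above, with hypothetical variable names `'w'` and `'b'`):

```
    # The same build() using add_weight(); Keras tracks these variables automatically.
    def build(self, input_shape):
        self.w = self.add_weight(
            name='w',
            shape=(input_shape[-1], self.units),
            initializer=tf.random_normal_initializer(),
            trainable=True)
        self.b = self.add_weight(
            name='b',
            shape=(self.units,),
            initializer=tf.zeros_initializer(),
            trainable=True)
```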
```
    # Override the call() function
    def call(self, inputs):
        output = tf.matmul(inputs, self.w) + self.b
        regularizer_l2 = tf.keras.regularizers.l2(0.01)
        # The Layer class's add_loss() function collects this layer's loss terms
        self.add_loss(regularizer_l2(self.w))
        return output
```
If you need to compute an L2 regularization penalty on `self.w` and include it as part of the loss to be minimized, pass `regularizer_l2(self.w)` to the Layer class's `add_loss()` function.
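Losses registered this way can be inspected through the layer's `losses` property. The snippet below is an illustrative check (assuming the HiddenLayer defined above); the list is repopulated on each forward pass:

```
hidden = HiddenLayer(units=500)
x = tf.random.normal(shape=(32, 784))
_ = hidden(x)            # call() runs add_loss(), registering the L2 penalty
print(hidden.losses)     # a list with one scalar tensor: the L2 term for self.w
```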
Output Layer: OutputLayer
```
class OutputLayer(layers.Layer):
    # Constructor
    def __init__(self, units):
        super(OutputLayer, self).__init__()
        self.units = units

    # Override the build() function
    def build(self, input_shape):
        # Initialize the output layer's weights
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_shape[-1], self.units), dtype='float32'),
            trainable=True)
        # Initialize the output layer's bias
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(
            initial_value=b_init(shape=(self.units,), dtype='float32'),
            trainable=True)
```
```
    # Override the call() function
    def call(self, inputs):
        output = tf.matmul(inputs, self.w) + self.b
        regularizer_l2 = tf.keras.regularizers.l2(0.01)
        # The Layer class's add_loss() function collects this layer's loss terms
        self.add_loss(regularizer_l2(self.w))
        return output
```
Model Class: MLPBlock
Create an MLPBlock class that inherits from the Model class; the functions to override are `__init__()` and `call()`.
```
class MLPBlock(tf.keras.Model):
    def __init__(self):
        super(MLPBlock, self).__init__()
        self.input_layer = InputLayer(784)
        self.hidden_layer = HiddenLayer(500)
        self.output_layer = OutputLayer(10)
        self.softmax = layers.Softmax()

    def call(self, inputs):
        x = self.input_layer(inputs)
        x = self.hidden_layer(x)
        x = tf.nn.relu(x)
        x = self.output_layer(x)
        x = self.softmax(x)
        return x
```
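To see the pieces working together, here is a minimal training sketch (illustrative only; the SGD optimizer, the loss function, and the random tensors standing in for a real dataset such as MNIST are all assumptions). Note that `model.losses` must be added to the data loss so the L2 terms registered via `add_loss()` are actually minimized:

```
model = MLPBlock()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

# Random stand-ins for a batch of 32 flattened 28x28 images and their labels.
images = tf.random.normal(shape=(32, 784))
labels = tf.random.uniform(shape=(32,), maxval=10, dtype=tf.int32)

with tf.GradientTape() as tape:
    predictions = model(images)
    # Data loss plus the regularization losses collected by add_loss().
    loss = loss_fn(labels, predictions) + sum(model.losses)

gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
```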