(Keras) Custom subclassed models/layers, serialization, and saving with model.save(path)

Contents

        23.08.09

         Custom LayerNormalization layer

        Custom layers and models passed to a subclassed model's __init__

         23.08.10

        Building a serializable model with a classmethod constructor def construct(cls, class_attribute1, class_attribute2)


        23.08.09

        For custom subclassed models, model.save() cannot save the whole model out of the box; the fix is to make the model serializable so that the entire model can be saved.

        When writing a custom Keras layer or model you only define the code's logic, but exporting the whole model requires converting it into a flat file. The code of a custom layer or model is lost when the model is exported, so all constructor arguments must be passed along for save and load to work correctly.

        Custom layers and classes must implement the get_config() method. If the arguments passed to the custom object's constructor (the __init__() method) are not plain Python objects (basic types such as integers and strings), they must also be explicitly deserialized in the from_config() classmethod.
1. get_config() should return a JSON-serializable dictionary so that it is compatible with the Keras architecture- and model-saving APIs.
2. from_config(config) (a classmethod) should return a new layer or model object created from the config. The default implementation returns cls(**config). A minimal skeleton of this contract is sketched right below.
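
As a minimal sketch of these two rules (a hypothetical ScaledLayer, not from the original post; the class name and its scale argument are made up for illustration):

import tensorflow as tf


@tf.keras.utils.register_keras_serializable(package="Sketch")
class ScaledLayer(tf.keras.layers.Layer):
    """Hypothetical layer that multiplies its input by a constant scale."""

    def __init__(self, scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.scale = scale  # plain Python float, so no custom deserialization is needed

    def call(self, inputs):
        return inputs * self.scale

    def get_config(self):
        # Rule 1: return a JSON-serializable dict (base config + constructor arguments)
        config = super().get_config()
        config.update({"scale": self.scale})
        return config

    @classmethod
    def from_config(cls, config):
        # Rule 2: rebuild the object from the config (this matches the default behavior)
        return cls(**config)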

@tf.keras.utils.register_keras_serializable(package="My_LayerNormalization")
class LayerNormalization(tf.keras.layers.Layer):
    def __init__(self, epsilon=1e-6, **kwargs):
        super(LayerNormalization, self).__init__(**kwargs)
        self.eps = epsilon  # store the constructor argument so get_config() can return it

    def build(self, input_shape):
        self.gamma = self.add_weight(name='gamma', shape=input_shape[-1:],
                                     initializer=tf.ones_initializer(), trainable=True)
        self.beta = self.add_weight(name='beta', shape=input_shape[-1:],
                                    initializer=tf.zeros_initializer(), trainable=True)
        super(LayerNormalization, self).build(input_shape)

    def call(self, x):
        mean = tf.keras.backend.mean(x, axis=-1, keepdims=True)
        std = tf.keras.backend.std(x, axis=-1, keepdims=True)
        return self.gamma * (x - mean) / (std + self.eps) + self.beta

    def get_config(self):
        config = super().get_config()
        config.update({
            'epsilon': self.eps,
        })
        return config

         Custom LayerNormalization

1. Add the @tf.keras.utils.register_keras_serializable decorator so that the custom layer or model can be serialized and deserialized.

        Serialization: once Keras has serialized the custom layer or model to JSON, model.save() can save the entire model structure.

2. class LayerNormalization(tf.keras.layers.Layer):

        Subclass tf.keras.layers.Layer.
3. def __init__(self, epsilon=1e-6, **kwargs):

        self.eps = epsilon => store the custom constructor argument as an instance field so that get_config() can return it.

4. def build(self, input_shape):

        Create the trainable weights gamma and beta (shaped like the last input dimension) once the input shape is known.

5. def get_config(self):
        config = super().get_config()
        config.update({
            'epsilon': self.eps,
        })
        return config

        get_config saves the constructor parameters. A save/load round trip using this layer is sketched below.
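
For reference, a save/load round trip with the LayerNormalization layer above might look like this (a sketch: the input shape and file name are arbitrary, and because the layer is registered with register_keras_serializable, no custom_objects argument should be needed when loading):

# Build a small functional model that uses the custom layer (shapes are arbitrary)
inputs = tf.keras.Input(shape=(16, 64))
x = tf.keras.layers.Dense(64)(inputs)
outputs = LayerNormalization()(x)
ln_model = tf.keras.Model(inputs, outputs)

# Save the whole model, then reload it; the registered custom layer is resolved automatically
ln_model.save("layernorm_model.h5")
restored = tf.keras.models.load_model("layernorm_model.h5")
restored.summary()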

         Custom layers and models passed to a subclassed model's __init__

        These must be explicitly deserialized.

@keras.saving.register_keras_serializable(package="ComplexModels")
# If keras.saving cannot be used, try tf.compat.v1.keras.saving......
class CustomModel(keras.layers.Layer):
    def __init__(self, first_layer, second_layer=None, **kwargs):
        super().__init__(**kwargs)
        self.first_layer = first_layer
        if second_layer is not None:
            self.second_layer = second_layer
        else:
            self.second_layer = keras.layers.Dense(8)

    def get_config(self):
        config = super().get_config()
        config.update(
            {
                "first_layer": self.first_layer,
                "second_layer": self.second_layer,
            }
        )
        return config

    @classmethod
    def from_config(cls, config):
        # Note that you can also use `keras.saving.deserialize_keras_object` here
        config["first_layer"] = keras.layers.deserialize(config["first_layer"])
        config["second_layer"] = keras.layers.deserialize(config["second_layer"])
        return cls(**config)

    def call(self, inputs):
        return self.first_layer(self.second_layer(inputs))


# In the original Keras guide the first layer is its custom MyDense layer;
# a built-in Dense layer is used here as a stand-in so the snippet is self-contained
inputs = keras.Input((32,))
outputs = CustomModel(first_layer=keras.layers.Dense(16))(inputs)
model = keras.Model(inputs, outputs)

config = model.get_config()
new_model = keras.Model.from_config(config)

First, the custom layers must be included in the config returned by get_config(); second, they must be explicitly deserialized in the from_config() classmethod:

@classmethod  # classmethod
def from_config(cls, config):
    # explicitly deserialize the custom layers
    config["name_of_customlayer1"] = keras.layers.deserialize(config["name_of_customlayer1"])
    config["name_of_customlayer2"] = keras.layers.deserialize(config["name_of_customlayer2"])
    return cls(**config)

         Here cls(**config) builds an instance of the class: cls refers to the class itself, i.e. the class currently being constructed (CustomModel in the code above), and **config expands the keyword arguments, i.e. the attribute/value pairs needed to construct the object. A full save/load round trip for this pattern is sketched below.
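
Putting this together, a save/load round trip for the CustomModel pattern could look roughly like this (a sketch: the .keras file name is arbitrary, and it assumes the functional model built above with the registered CustomModel layer):

import numpy as np

# Save the functional model built above and reload it; from_config() re-creates
# the nested layers via keras.layers.deserialize
model.save("custom_model.keras")
restored = keras.models.load_model("custom_model.keras")

# The restored model accepts the same (batch, 32) inputs as the original
print(restored(np.zeros((1, 32), dtype="float32")).shape)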

         23.08.10

        Building a serializable model with a classmethod constructor def construct(cls, class_attribute1, class_attribute2)

import os

os.environ["KERAS_BACKEND"] = "tensorflow"        # run Keras on the TensorFlow backend
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"  # grow GPU memory usage on demand
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"          # only show warnings and errors

import tensorflow as tf
import keras
import keras.layers as layers
from tensorflow.keras.utils import to_categorical


# Load the data (MNIST)
from keras.datasets import mnist

(train_imgs, train_labels), (test_imgs, test_labels) = mnist.load_data()

# Normalize grayscale values to [0, 1]
train_imgs = train_imgs / 255
test_imgs = test_imgs / 255

# Reshape so each sample is a 28x28 image with a single channel
train_imgs = train_imgs.reshape(-1, 28, 28, 1)
test_imgs = test_imgs.reshape(-1, 28, 28, 1)

# Convert labels to one-hot vectors
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)


@tf.keras.utils.register_keras_serializable(package="Conv", name="Conv11")
class Conv11(keras.layers.Layer):
    def __init__(self, units, num_label, conv1, conv2, fla, output_layer, **kwargs):
        super(Conv11, self).__init__(**kwargs)
        self.units = units
        self.num_label = num_label
        self.conv1 = conv1
        self.conv2 = conv2
        self.fla = fla
        self.output_layer = output_layer

    @classmethod
    def construct(cls, units, num_label):
        # Alternative constructor: builds the sub-layers here and passes them to __init__,
        # so __init__ only receives already-constructed objects
        return cls(
            units=units,
            num_label=num_label,
            conv1=layers.Conv2D(filters=units, kernel_size=(3, 3), padding='same'),
            conv2=layers.Conv2D(filters=units * 2, kernel_size=(3, 3), padding='same'),
            fla=layers.Flatten(),
            output_layer=layers.Dense(units=num_label, activation='softmax')
        )

    def get_config(self):
        config = super(Conv11, self).get_config()
        config.update({"units": self.units,
                       "num_label": self.num_label,
                       "conv1": self.conv1,
                       "conv2": self.conv2,
                       "fla": self.fla,
                       "output_layer": self.output_layer})
        return config

    @classmethod
    def from_config(cls, config):
        # Depending on the Keras version, the nested layers stored in `config` may need to be
        # rebuilt explicitly with keras.layers.deserialize, as in the CustomModel example above
        return cls(**config)

    def call(self, inputs, **kwargs):
        x_conv = self.conv1(inputs)
        x_conv = self.conv2(x_conv)
        x_conv = self.fla(x_conv)
        output = self.output_layer(x_conv)

        return output


layer_conv_2 = Conv11.construct(3, 10)
model = tf.keras.Sequential([layer_conv_2])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False), metrics=['categorical_accuracy'])

# Train the model
history = model.fit(train_imgs, train_labels, epochs=1, batch_size=64)
print('*************\n')
# print(model.trainable_weights)
print('*************\n')
model.evaluate(x=test_imgs, y=test_labels)
# Save the model
model.save('test_model.h5')
# Load the model
model_1 = keras.models.load_model('test_model.h5')
model_1.evaluate(test_imgs, test_labels)

Output: after saving the model to a .h5 file with model.save, keras.models.load_model(path) loads the entire model. The reloaded model produces the same test-set results as the trained model, with no errors. A note on loading via custom_objects follows the logs.

938/938 [==============================] - 4s 2ms/step - loss: 1.5829 - categorical_accuracy: 0.8487


313/313 [==============================] - 1s 1ms/step - loss: 0.4335 - categorical_accuracy: 0.8664
313/313 [==============================] - 0s 1ms/step - loss: 0.4335 - categorical_accuracy: 0.8664
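
Side note: if Conv11 were not registered with register_keras_serializable, the class would have to be passed explicitly at load time via custom_objects (a minimal sketch):

model_2 = keras.models.load_model('test_model.h5', custom_objects={'Conv11': Conv11})
model_2.evaluate(test_imgs, test_labels)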

References:

How to write a Custom Keras model so that it can be deployed for Serving, Lak Lakshmanan, Towards Data Science: https://towardsdatascience.com/how-to-write-a-custom-keras-model-so-that-it-can-be-deployed-for-serving-7d81ace4a1f8

Making new layers and models via subclassing (keras.io): https://keras.io/guides/making_new_layers_and_models_via_subclassing/

Saving nested layers in TensorFlow, Oliver K. Ernst, Practical coding, Medium: https://medium.com/practical-coding/saving-nested-layers-in-tensorflow-6d85cd11159b

Save, serialize, and export models (keras.io): https://keras.io/guides/serialization_and_saving/
