Keras Overview

Setup

!pip3 install tensorflow==2.0.0a0
!pip3 install pyyaml
%matplotlib inline
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: tensorflow==2.0.0a0 in /usr/local/lib/python3.7/site-packages (2.0.0a0)
(... dependency requirements already satisfied: keras-preprocessing, grpcio, tb-nightly, gast, astor, termcolor, wheel, google-pasta, tf-estimator-nightly, six, protobuf, keras-applications, numpy, absl-py, werkzeug, markdown, setuptools, h5py ...)
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/site-packages (5.1)

Importing tf.keras

tf.keras (unless otherwise noted, in prose this article uses tf.keras to mean TensorFlow's Keras module, and keras alone to mean the Keras standard) is TensorFlow's implementation of the Keras specification. It provides high-level APIs for quickly building and training models, and it supports TensorFlow features such as the eager execution engine, tf.data and Estimators. In short, tf.keras simplifies the use of TensorFlow without giving up performance.
To use tf.keras, just import the following module.

from tensorflow import keras

tf.keras can run any code that conforms to the Keras API, but keep the following two points in mind:

  1. The tf.keras version bundled with the latest TensorFlow may not match the latest keras release on PyPI (a quick check is sketched right after this list)
  2. When saving a model with tf.keras, the default format is the TensorFlow checkpoint format; HDF5 is used only if you pass save_format="h5"
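
A quick check for point 1 (a minimal sketch, not part of the original article; the exact version strings depend on your installation):

import tensorflow as tf

print(tf.__version__)          # the installed TensorFlow release
print(tf.keras.__version__)    # the bundled tf.keras version, usually suffixed with "-tf"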

Creating a Simple Model

Sequential Model

In Keras you build a model by assembling layers; a model is usually a computation graph made of layers. The most common kind of model is a plain stack of layers, for which you can use tf.keras.Sequential.
Here we build a simple fully connected network (a multi-layer perceptron):

model = keras.Sequential()
# add a fully connected layer with 64 units
model.add(keras.layers.Dense(64, activation='relu'))
# add another one
model.add(keras.layers.Dense(64, activation='relu'))
# add a 10-unit softmax output layer
model.add(keras.layers.Dense(10, activation='softmax'))

As you can see, building a network with the tf.keras API is remarkably simple.

tf.keras.layers implements the common layer types: fully connected (Dense), convolution (Conv2D), pooling (MaxPool2D) and so on; I will cover them in detail in other articles.
Here is a quick look at a few constructor parameters that are used throughout tf.keras.layers:

  1. activation specifies the activation function. It can be a callable or a string (which is ultimately resolved to a callable); the difference is that a string cannot carry custom arguments and always uses the defaults. By default no activation is applied.
  2. kernel_initializer and bias_initializer specify the initializer for the weights and the bias respectively; again either a callable or a string. The default is the Glorot uniform initializer.
  3. kernel_regularizer and bias_regularizer provide regularization for the weights and the bias; by default no regularization is applied.

The following sample code shows how to use these parameters:

# sigmoid activation
keras.layers.Dense(10, activation='sigmoid')
# or equivalently
keras.layers.Dense(10, activation=keras.activations.sigmoid)
# L1 regularization of 0.01 on the weights
keras.layers.Dense(10, kernel_regularizer=keras.regularizers.l1(0.01))
# L2 regularization of 0.01 on the bias
keras.layers.Dense(10, bias_regularizer=keras.regularizers.l2(0.01))
# orthogonal initialization of the weights
keras.layers.Dense(10, kernel_initializer='orthogonal')
# initialize the bias to the constant 2.0
keras.layers.Dense(10, bias_initializer=keras.initializers.Constant(2.0))
<tensorflow.python.keras.layers.core.Dense at 0x1090e4e10>

Training and Evaluation

Setting Up Training

Once the model is built, call the compile method to configure how the network will be trained:

model = keras.Sequential([
    # add a fully connected layer with 64 units, specifying the input shape
    keras.layers.Dense(64, activation='relu', input_shape=(32, )), 
    # add another layer
    keras.layers.Dense(64, activation='relu'), 
    # add a softmax output layer
    keras.layers.Dense(10, activation='softmax')])

model.compile(optimizer=keras.optimizers.Adam(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

tf.keras.Model.compile takes three important arguments:

  • optimizer specifies the optimizer object used for training, taken from the tf.keras.optimizers module, e.g. tf.keras.optimizers.Adam or tf.keras.optimizers.SGD. If the default parameters are fine, the strings 'adam' or 'sgd' can be used instead
  • loss specifies how the loss is computed. The tf.keras.losses module provides many common loss objects: mean squared error 'mse', 'categorical_crossentropy', 'binary_crossentropy', and so on. You can also pass a loss you wrote yourself (covered in detail in other articles)
  • metrics specifies the metric objects used to monitor training; several can be passed in a list. The tf.keras.metrics module provides many common metrics, for example 'accuracy'. Custom metrics are also supported (covered in other articles)
  • In addition, TensorFlow 2.0 runs eagerly by default; if you are on a 1.x release and want the model to run eagerly, you can pass run_eagerly=True

Here are two simple examples of configuring training:

# reuses the model defined in the example above
model.compile(optimizer=keras.optimizers.Adam(0.01),
              loss='mse',          # mean squared error
              metrics=['mae'])     # mean absolute error

model.compile(optimizer=keras.optimizers.RMSprop(0.01),
              loss=keras.losses.CategoricalCrossentropy(),
              metrics=[keras.metrics.CategoricalAccuracy()])

Training on NumPy Data

For small datasets you can keep the entire dataset in memory as NumPy arrays, then call the fit method to fit the model to the training data.

import numpy as np

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
# reuses the model defined in the example above
model.fit(data, labels, epochs=10, batch_size=32)
Epoch 1/10
1000/1000 [==============================] - 0s 253us/sample - loss: 269.3968 - categorical_accuracy: 0.1160
Epoch 2/10
1000/1000 [==============================] - 0s 64us/sample - loss: 1041.5541 - categorical_accuracy: 0.1020
Epoch 3/10
1000/1000 [==============================] - 0s 78us/sample - loss: 2284.6309 - categorical_accuracy: 0.0930
Epoch 4/10
1000/1000 [==============================] - 0s 82us/sample - loss: 3662.9025 - categorical_accuracy: 0.1110
Epoch 5/10
1000/1000 [==============================] - 0s 81us/sample - loss: 5483.3689 - categorical_accuracy: 0.0910
Epoch 6/10
1000/1000 [==============================] - 0s 76us/sample - loss: 7620.5808 - categorical_accuracy: 0.0830
Epoch 7/10
1000/1000 [==============================] - 0s 67us/sample - loss: 9934.6391 - categorical_accuracy: 0.0920
Epoch 8/10
1000/1000 [==============================] - 0s 62us/sample - loss: 12510.4762 - categorical_accuracy: 0.1150
Epoch 9/10
1000/1000 [==============================] - 0s 61us/sample - loss: 15338.1129 - categorical_accuracy: 0.1050
Epoch 10/10
1000/1000 [==============================] - 0s 61us/sample - loss: 18830.0749 - categorical_accuracy: 0.0990





<tensorflow.python.keras.callbacks.History at 0x12c55dda0>

tf.keras.Model.fit takes several important arguments:

  • The first two are the training data and the labels
  • epochs specifies how many passes are made over the training set
  • batch_size specifies the batch size
  • validation_data takes a (data, labels) tuple used for evaluation, which is run at the end of every epoch

Here is an example with validation:

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
val_data = np.random.random((100, 32))
val_labels = np.random.random((100, 10))

# reuses the model defined in the example above
model.fit(data, labels, epochs=10, batch_size=32, validation_data=(val_data, val_labels))
Train on 1000 samples, validate on 100 samples
Epoch 1/10
1000/1000 [==============================] - 0s 214us/sample - loss: 23145.3992 - categorical_accuracy: 0.0900 - val_loss: 24439.5163 - val_categorical_accuracy: 0.0800
Epoch 2/10
1000/1000 [==============================] - 0s 67us/sample - loss: 26444.3933 - categorical_accuracy: 0.1040 - val_loss: 35398.1458 - val_categorical_accuracy: 0.0600
Epoch 3/10
1000/1000 [==============================] - 0s 88us/sample - loss: 30126.8239 - categorical_accuracy: 0.1000 - val_loss: 29529.0842 - val_categorical_accuracy: 0.1100
Epoch 4/10
1000/1000 [==============================] - 0s 65us/sample - loss: 34590.3639 - categorical_accuracy: 0.0910 - val_loss: 46674.4381 - val_categorical_accuracy: 0.1000
Epoch 5/10
1000/1000 [==============================] - 0s 69us/sample - loss: 40651.5867 - categorical_accuracy: 0.1040 - val_loss: 37209.6347 - val_categorical_accuracy: 0.1200
Epoch 6/10
1000/1000 [==============================] - 0s 64us/sample - loss: 45566.5513 - categorical_accuracy: 0.1020 - val_loss: 62251.0747 - val_categorical_accuracy: 0.0600
Epoch 7/10
1000/1000 [==============================] - 0s 67us/sample - loss: 48702.9324 - categorical_accuracy: 0.1100 - val_loss: 46097.0295 - val_categorical_accuracy: 0.0800
Epoch 8/10
1000/1000 [==============================] - 0s 79us/sample - loss: 57331.0441 - categorical_accuracy: 0.0880 - val_loss: 70678.8403 - val_categorical_accuracy: 0.0800
Epoch 9/10
1000/1000 [==============================] - 0s 68us/sample - loss: 61826.6894 - categorical_accuracy: 0.0990 - val_loss: 88854.2497 - val_categorical_accuracy: 0.1400
Epoch 10/10
1000/1000 [==============================] - 0s 78us/sample - loss: 67515.0722 - categorical_accuracy: 0.0980 - val_loss: 62506.4112 - val_categorical_accuracy: 0.1200





<tensorflow.python.keras.callbacks.History at 0x12c013f28>

Training with tf.data

With the Datasets API you can pass a tf.data.Dataset object to fit; this scales well to large datasets and to multi-device training.

import tensorflow as tf

# build the Dataset object
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)    # set the batch size

val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32)

model.fit(dataset, epochs=10, validation_data=val_dataset)    # no need to set batch_size here; it was already set on the dataset
WARNING: Logging before flag parsing goes to stderr.
W0402 09:34:05.496211 4607419840 training_utils.py:1353] Expected a shuffled dataset but input dataset `x` is not shuffled. Please invoke `shuffle()` on input dataset.


Epoch 1/10
32/32 [==============================] - 0s 3ms/step - loss: 73289.8114 - categorical_accuracy: 0.1070 - val_loss: 98802.5176 - val_categorical_accuracy: 0.1100
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 81560.8742 - categorical_accuracy: 0.0980 - val_loss: 84138.2148 - val_categorical_accuracy: 0.1000
Epoch 3/10
32/32 [==============================] - 0s 4ms/step - loss: 89748.1504 - categorical_accuracy: 0.1030 - val_loss: 131219.6230 - val_categorical_accuracy: 0.0800
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 96199.4304 - categorical_accuracy: 0.1060 - val_loss: 77217.8711 - val_categorical_accuracy: 0.0600
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 104713.3332 - categorical_accuracy: 0.0880 - val_loss: 109256.0977 - val_categorical_accuracy: 0.0800
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 115306.2874 - categorical_accuracy: 0.0960 - val_loss: 99381.6211 - val_categorical_accuracy: 0.0800
Epoch 7/10
32/32 [==============================] - 0s 5ms/step - loss: 118495.8252 - categorical_accuracy: 0.0980 - val_loss: 113849.1523 - val_categorical_accuracy: 0.1100
Epoch 8/10
32/32 [==============================] - 0s 5ms/step - loss: 132190.9700 - categorical_accuracy: 0.0870 - val_loss: 129304.1992 - val_categorical_accuracy: 0.0600
Epoch 9/10
32/32 [==============================] - 0s 4ms/step - loss: 138033.7361 - categorical_accuracy: 0.1140 - val_loss: 114921.4238 - val_categorical_accuracy: 0.0800
Epoch 10/10
32/32 [==============================] - 0s 4ms/step - loss: 146065.1340 - categorical_accuracy: 0.1020 - val_loss: 113654.9219 - val_categorical_accuracy: 0.1000





<tensorflow.python.keras.callbacks.History at 0x12c013eb8>

Evaluation and Prediction

The tf.keras.Model.evaluate and tf.keras.Model.predict methods accept either NumPy data or a tf.data.Dataset object for evaluation and prediction.
When evaluating with tf.keras.Model.evaluate, the loss and metrics configured in compile are reported.

model.evaluate(data, labels, batch_size=32)    # for NumPy data
model.evaluate(dataset, steps=30)              # for a tf.data.Dataset
1000/1000 [==============================] - 0s 43us/sample - loss: 111087.1599 - categorical_accuracy: 0.0970
30/30 [==============================] - 0s 2ms/step - loss: 111122.0201 - categorical_accuracy: 0.0979





[111122.02005208333, 0.09791667]

tf.keras.Model.predict runs the network's forward pass on the given data and returns the output:

result = model.predict(data, batch_size=32)
print(result.shape)
(1000, 10)

Building Advanced Models

The Functional API

tf.keras.Sequential lets you stack up a model quickly, but it can only build simple single-input, single-output, sequential networks. tf.keras also provides a functional API that can build the following kinds of advanced models:

  • models with multiple inputs
  • models with multiple outputs
  • models with shared layers (the same layer called more than once)
  • models with non-sequential topologies (e.g. ResNet)

The workflow for building an advanced model with the functional API is:

  • Define the inputs (there can be more than one)
  • Define the network layer by layer (each layer object must be callable on a tensor and return a tensor), ending with the outputs
  • Use tf.keras.Model to tie the inputs and outputs together into a model
  • Use compile and fit to configure and train the model

inputs = keras.Input(shape=(32, ))        # define the input

# define the network layer by layer
x = keras.layers.Dense(64, activation='relu')(inputs)
x = keras.layers.Dense(64, activation='relu')(x)
out = keras.layers.Dense(10, activation='softmax')(x)

# connect the inputs and outputs into a model
model = keras.Model(inputs=inputs, outputs=out)

# configure the model and train it
model.compile(optimizer=keras.optimizers.Adam(0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
Epoch 1/5
1000/1000 [==============================] - 0s 206us/sample - loss: 31.9291 - accuracy: 0.0950
Epoch 2/5
1000/1000 [==============================] - 0s 67us/sample - loss: 278.3945 - accuracy: 0.1090
Epoch 3/5
1000/1000 [==============================] - 0s 82us/sample - loss: 1195.0157 - accuracy: 0.0890
Epoch 4/5
1000/1000 [==============================] - 0s 76us/sample - loss: 3960.7304 - accuracy: 0.0990
Epoch 5/5
1000/1000 [==============================] - 0s 74us/sample - loss: 5394.7903 - accuracy: 0.0920





<tensorflow.python.keras.callbacks.History at 0x12d670e10>

Custom Models

tf.keras lets you define a fully custom model by subclassing tf.keras.Model: implement __init__ to set up the model, and call to define the forward pass. (Subclassing Model is usually more involved; prefer the functional API when it is sufficient.)
Below is a simple Model-subclassing example.

class MyModel(keras.Model):
    def __init__(self, num_classes=10):
        super(MyModel, self).__init__(name='my_model')
        self.num_classes = num_classes
        # configure the network's layers
        self.dense_1 = keras.layers.Dense(32, activation='relu')
        self.dense_2 = keras.layers.Dense(num_classes, activation='sigmoid')

    def call(self, inputs):
        # run the forward pass
        x = self.dense_1(inputs)
        return self.dense_2(x)

model = MyModel()

model.compile(optimizer=keras.optimizers.Adam(0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(data, labels, batch_size=32, epochs=5)
Epoch 1/5
1000/1000 [==============================] - 0s 233us/sample - loss: 11.5582 - accuracy: 0.0980
Epoch 2/5
1000/1000 [==============================] - 0s 79us/sample - loss: 11.5402 - accuracy: 0.1030
Epoch 3/5
1000/1000 [==============================] - 0s 75us/sample - loss: 11.5377 - accuracy: 0.1070
Epoch 4/5
1000/1000 [==============================] - 0s 72us/sample - loss: 11.5366 - accuracy: 0.1010
Epoch 5/5
1000/1000 [==============================] - 0s 62us/sample - loss: 11.5343 - accuracy: 0.1010





<tensorflow.python.keras.callbacks.History at 0x12d8b4860>

Custom Layers

You can define the layer you need by subclassing tf.keras.layers.Layer and implementing the following methods:

  • __init__ (optional): define the configuration the layer needs
  • build: create the layer's weights, using the add_weight method
  • call: implement the layer's forward pass
  • get_config serializes the layer's configuration, and from_config rebuilds the layer from it

class MyLayer(keras.layers.Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel', 
                                       shape=(input_shape[1], self.output_dim), 
                                       initializer='uniform', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

    def get_config(self):
        base_config = super(MyLayer, self).get_config()
        base_config['output_dim'] = self.output_dim
        return base_config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

model = keras.Sequential([
    MyLayer(10), 
    keras.layers.Activation('softmax')])
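
The model built from the custom layer is compiled and trained like any other; here is a short sketch (not in the original article) reusing the data and labels from the earlier examples:

model.compile(optimizer=keras.optimizers.Adam(0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)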

Callbacks

Callbacks let you hook custom behaviour into the training process. You can implement your own or use the built-in callbacks in tf.keras.callbacks:

  • tf.keras.callbacks.ModelCheckpoint saves snapshots of the model at regular intervals
  • tf.keras.callbacks.LearningRateScheduler changes the learning rate dynamically
  • tf.keras.callbacks.EarlyStopping stops training when the validation performance stops improving
  • tf.keras.callbacks.TensorBoard monitors training with TensorBoard

To use them, pass the callbacks as an argument to the fit method:

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'), 
    tf.keras.callbacks.TensorBoard(log_dir='./logs')]

model.compile(optimizer=keras.optimizers.Adam(0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(data, labels, batch_size=32, epochs=10, callbacks=callbacks, validation_data=(val_data, val_labels))
Train on 1000 samples, validate on 100 samples
Epoch 1/10
1000/1000 [==============================] - 0s 410us/sample - loss: 11.5335 - accuracy: 0.1270 - val_loss: 11.6817 - val_accuracy: 0.1100
Epoch 2/10
1000/1000 [==============================] - 0s 74us/sample - loss: 11.5319 - accuracy: 0.1130 - val_loss: 11.6839 - val_accuracy: 0.0800
Epoch 3/10
1000/1000 [==============================] - 0s 70us/sample - loss: 11.5301 - accuracy: 0.1250 - val_loss: 11.6857 - val_accuracy: 0.0800





<tensorflow.python.keras.callbacks.History at 0x12db02c50>

Saving and Loading Models

Saving Weights Only

Use tf.keras.Model.save_weights and tf.keras.Model.load_weights to save and restore only the model's parameters:

inputs = keras.Input(shape=(32, ))        # define the input

# define the network layer by layer
x = keras.layers.Dense(64, activation='relu')(inputs)
x = keras.layers.Dense(64, activation='relu')(x)
out = keras.layers.Dense(10, activation='softmax')(x)

# connect the inputs and outputs into a model
model = keras.Model(inputs=inputs, outputs=out)

# save the weights
model.save_weights('./weights')

# load the weights
model.load_weights('./weights')
<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x12e902b70>

Note that by default these methods save in the TensorFlow checkpoint format. To save in HDF5 format, either give the file a .h5 extension or pass save_format='h5':

model.save_weights('./weights.h5')        # use the .h5 extension

model.save_weights('./weights', save_format='h5')    # pass the save_format argument

Saving the Architecture Only

Use tf.keras.Model.to_json and tf.keras.models.model_from_json to serialize the network architecture to a JSON string (and deserialize it again); the string can then be written to a file.
Likewise, tf.keras.Model.to_yaml and tf.keras.models.model_from_yaml serialize the architecture to and from a YAML string.

# serialize the model to a string, which can then be written to a file like any ordinary string
model_json = model.to_json()    # json
model_yaml = model.to_yaml()    # yaml

# rebuild the model from the strings
model = keras.models.model_from_json(model_json)
model = keras.models.model_from_yaml(model_yaml)
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/model_config.py:76: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(yaml_string)

Saving the Whole Model

Use tf.keras.Model.save and tf.keras.models.load_model to save and load the entire model (in HDF5 format):

model.save('./model.h5')        # save the model
model = keras.models.load_model('./model.h5')    # load the model
W0402 09:36:22.790299 4607419840 hdf5_format.py:224] No training configuration found in save file: the model was *not* compiled. Compile it manually.

Eager Execution

Eager execution runs TensorFlow operations immediately: you can use TF operations like ordinary Python code, with no Session required. It is not part of Keras itself, but it is supported in tf.keras (and is the default in TF 2.0); eager mode greatly simplifies writing and debugging code.
All tf.keras features support eager mode, including the ones where you implement the forward pass yourself, such as subclassed models and custom layers.
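
A minimal illustration (not from the original article; the tensor values are arbitrary): under eager execution an operation returns a concrete result immediately, with no graph or Session involved.

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)      # executes immediately
print(y.numpy())         # the result is already available as a NumPy array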

Distribution

Multiple GPUs

With tf.distribute.Strategy, tf.keras models can run on multiple GPUs without code changes.
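
A minimal sketch of the idea (my own illustration, not code from the article; it assumes a machine whose GPUs are visible to TensorFlow, and the layer sizes are placeholders): create and compile the model inside the strategy's scope, then train as usual.

import tensorflow as tf
from tensorflow import keras

strategy = tf.distribute.MirroredStrategy()    # replicates the model across the available GPUs

with strategy.scope():
    model = keras.Sequential([
        keras.layers.Dense(64, activation='relu', input_shape=(32,)),
        keras.layers.Dense(10, activation='softmax')])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

# the training call itself does not change:
# model.fit(data, labels, batch_size=32, epochs=5)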
