Keras Sequential Models && Functional Models

Keras offers two model-building styles: the Sequential model and the functional model. A model's structure can be drawn with plot_model(model, show_shapes=True, to_file='../model.png'), which requires from tensorflow.keras.utils import plot_model.
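
A minimal sketch of drawing a model this way, assuming the pydot and graphviz packages are installed (the demo model and the output file name are placeholders):

from tensorflow import keras
from tensorflow.keras.utils import plot_model

# A tiny placeholder model, just so there is something to draw
demo = keras.models.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])

# Writes a PNG of the layer graph, including input/output shapes
plot_model(demo, show_shapes=True, to_file='model.png')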

Sequential Models

I. Sequential Model Basics

A Keras Sequential model is a linear stack of layers. You build one by passing a list of layer instances to the Sequential constructor; below is the classic network for MNIST. Only the first layer of a Sequential model (and only the first, since later layers infer their input automatically) needs to be given the input size, usually via input_shape; a Dense first layer can also take input_dim instead.

from tensorflow import keras
model = keras.models.Sequential([
    keras.layers.Dense(32, input_shape=(784, )),
    keras.layers.Activation('relu'),
    keras.layers.Dense(10),
    keras.layers.Activation('softmax'),
])
model.summary()

The example above consists of two fully connected (Dense) layers:

1) The first Dense layer has 32 neurons. The first layer must be given the input size; since each MNIST image is a 28*28 = 784 array, input_shape is (784,). Its parameters are the weights w: 32*784 = 25088 plus the biases b: 32*1 = 32, for 25120 parameters in total. The 32 outputs pass through the relu activation on their way to the second layer.

2) The second Dense layer has 10 neurons. Its input is the 32 relu-activated outputs of the first layer, so its parameters are the weights w: 10*32 = 320 plus the biases b: 10*1 = 10, for 330 parameters in total. The 10 outputs pass through the softmax activation to produce the final prediction.

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 32)                25120     
_________________________________________________________________
activation (Activation)      (None, 32)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                330       
_________________________________________________________________
activation_1 (Activation)    (None, 10)                0         
=================================================================
Total params: 25,450
Trainable params: 25,450
Non-trainable params: 0
_________________________________________________________________
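
The hand-computed counts can be cross-checked against Keras itself; a minimal sketch, assuming the model object defined above:

# Cross-check the parameter counts reported by model.summary()
for layer in model.layers:
    print(layer.name, layer.count_params())   # dense: 25120, dense_1: 330, activations: 0
print('total:', model.count_params())         # 25450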

The same Sequential model can also be built by simply stacking layers with add():

from tensorflow import keras
model2 = keras.models.Sequential()
model2.add(keras.layers.Dense(32, input_dim=784))
model2.add(keras.layers.Activation('relu'))
model2.add(keras.layers.Dense(10))
model2.add(keras.layers.Activation('softmax'))
model2.summary()

# Equivalently:
from tensorflow import keras
model2 = keras.models.Sequential()
model2.add(keras.layers.Dense(32, activation='relu', input_dim=784))
model2.add(keras.layers.Dense(10, activation='softmax'))
model2.summary()

 

1. MNIST Example

import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(784,), name='img')
x = keras.layers.Dense(64, activation='relu')(inputs)
x = keras.layers.Dense(64, activation='relu')(x)
outputs = keras.layers.Dense(10)(x)

model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
model.summary()

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=keras.optimizers.RMSprop(),
              metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=5,
                    validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])

2. Linear Fitting with a Keras Model

The goal is to fit a line y = kx + b (the data below is generated with k = 0.5, b = 2 plus Gaussian noise).

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import matplotlib.pyplot as plt

# Create some data
X = np.linspace(-1, 1, 200)
np.random.shuffle(X)
Y = 0.5 * X + 2 + np.random.normal(0, 0.05, (200,))

# plot data
plt.scatter(X, Y)
plt.show()

X_train, Y_train = X[:160], Y[:160]
X_test, Y_test = X[160:], Y[160:]

# build model
model = Sequential()
model.add(Dense(1))

# choose loss function and optimizing method
model.compile(loss='mse', optimizer='sgd')

# training
print('Training-------------')
for step in range(1001):
    cost = model.train_on_batch(X_train, Y_train)
    if step % 200 == 0:
        print('Train cost: ', cost)

# test
print('\nTesting------------')
cost = model.evaluate(X_test, Y_test, batch_size=40)
print('test cost:', cost)
W, b = model.layers[0].get_weights()
print('Weights= ', W, '\nbiases= ',b)

# plotting the prediction
Y_pred = model.predict(X_test)
plt.scatter(X_test, Y_test)
plt.plot(X_test, Y_pred)
plt.show()
 
 

II. Sequential Model Examples

1) Softmax multi-class classification with a multilayer perceptron (MLP)

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

# Generate dummy data
import numpy as np
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)
x_test = np.random.random((100, 20))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)

model = Sequential()
# Dense(64) is a fully connected layer with 64 hidden units.
# The first layer must be told the expected input shape:
# here, a 20-dimensional vector.
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=20,
          batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)

2) Binary classification with an MLP

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((100, 20))
y_test = np.random.randint(2, size=(100, 1))

model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=20,
          batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)

3) A VGG-like convolutional neural network

import tensorflow as tf
from tensorflow import keras

batch_size = 128
num_classes = 10
epochs = 12

# Input image dimensions
img_rows, img_cols = 28, 28

# Load the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

if keras.backend.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# Convert class vectors to binary (one-hot) class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = keras.models.Sequential()
model.add(keras.layers.Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Dropout(0.25))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(num_classes, activation='softmax'))
model.summary()

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Model analysis

1) First convolutional layer: this is where a convolution is first used, keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu'). It convolves the image with 32 kernels of size 3*3; since use_bias defaults to True, the layer has 32*(3*3+1) = 320 parameters. Its output is 26*26*32 (padding defaults to 'valid'; with padding='same' the output would be 28*28*32).

2) Second convolutional layer: the first layer produces a 26*26*32 stack of feature maps. Unlike the classic VGG16, where each convolution block is followed by pooling, another convolution is applied directly: model.add(keras.layers.Conv2D(64, (3, 3), activation='relu')), i.e. 64 kernels of size 3*3*32 (32 being the input depth), giving 64*(3*3*32+1) = 18496 parameters. The output of this layer is 24*24*64.

3) Third layer, max pooling: it has no parameters; with pool_size=(2, 2) the output is 12*12*64.

4) Fourth layer, fully connected: keras.layers.Dense(128, activation='relu') connects 128 neurons to the flattened 12*12*64 input, so this layer has 128*(12*12*64+1) = 1179776 parameters.

5) Fifth layer, fully connected: keras.layers.Dense(num_classes, activation='softmax') connects 10 neurons to the previous layer's 128 outputs, giving 10*(128+1) = 1290 parameters.
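
The arithmetic above can be double-checked with a few lines of plain Python; a minimal sketch (the helper names are only for illustration):

# Hand checks of the layer-by-layer analysis above
def conv_output_size(n, kernel, stride=1):      # 'valid' padding
    return (n - kernel) // stride + 1

def conv_params(out_ch, kernel, in_ch):         # kernel weights + one bias per output channel
    return out_ch * (kernel * kernel * in_ch + 1)

print(conv_params(32, 3, 1), conv_output_size(28, 3))    # 320 parameters, 26*26 output
print(conv_params(64, 3, 32), conv_output_size(26, 3))   # 18496 parameters, 24*24 output
print(128 * (12 * 12 * 64 + 1))                          # 1179776 parameters in Dense(128)
print(10 * (128 + 1))                                    # 1290 parameters in Dense(10)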

 

 

Functional Models

An example of a functional model

from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, Flatten, Dense)
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.utils import plot_model


def digitx_zhongshiyou_model(input_shape):
    '''
       a model to classify printed 0-9 of chengfei
       Arguments:
           input_shape -- shape of the input images

       Returns:
           model -- a Model() instance in Keras
       '''

    X_input = Input(input_shape, name='input')

    X = Conv2D(16, (5, 5), strides=(1, 1), padding='same', name='conv0')(X_input)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='max_pool0')(X)

    X = Conv2D(32, (3, 3), strides=(1, 1), padding='same', name='conv1')(X)
    X = BatchNormalization(axis=3, name='bn1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='max_pool1')(X)

    X = Conv2D(64, (3, 3), strides=(1, 1), padding='same', name='conv2')(X)
    X = BatchNormalization(axis=3, name='bn2')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='max_pool2')(X)

    X = Flatten()(X)
    # X = BatchNormalization(axis=1, name='bn3')(X)
    X = Dense(128, activation='relu', name='after_flatten')(X)
    X = Dense(64, activation='relu', name='after_flatten2')(X)
    X = Dense(32, activation='softmax', name='softmax_out', kernel_initializer=glorot_uniform())(X)

    model = Model(inputs=X_input, outputs=X, name='digitx_model')

    # Optionally draw the network with Keras plot_model
    draw_model = False
    if draw_model:
        plot_model(model, show_shapes=True, to_file='../model.png')

    return model
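
A minimal usage sketch, assuming grayscale 28*28 inputs and integer class labels matching the 32-way softmax above (the input shape, optimizer and loss are illustrative assumptions, not part of the original code):

# Build and compile the model (input shape is an assumption for illustration)
model = digitx_zhongshiyou_model((28, 28, 1))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',   # integer class labels
              metrics=['accuracy'])
model.summary()
# model.fit(x_train, y_train, batch_size=32, epochs=10, validation_split=0.1)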

