Deep Residual Network + Adaptively Parametric ReLU Activation Function (Tuning Record 19): ~93.96% on CIFAR-10

Since tuning record 18 still showed overfitting, this post reduces the number of neurons in the last layer of the adaptively parametric ReLU (APReLU) activation function to 1, and continues testing the deep residual network with APReLU on the CIFAR-10 dataset.

At the same time, the number of epochs is reduced from the 5000 used in tuning record 18 to 500, because 5000 epochs simply take too long: roughly four days to finish a run.

The basic principle of the adaptively parametric ReLU activation function is as follows:
[Figure: structure of the adaptively parametric ReLU (APReLU) activation function]
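In essence, APReLU computes y = max(x, 0) + α·min(x, 0), where the coefficient α is produced per input by a small embedded network that pools the positive and negative parts of the feature map. As a minimal NumPy sketch of that forward pass (not the author's Keras layer: the weights `w1`, `b1`, `w2`, `b2` are illustrative placeholders and the batch-normalization steps are omitted for clarity):

```python
import numpy as np

def aprelu_forward(x, w1, b1, w2, b2):
    """Sketch of an APReLU forward pass on one (H, W, C) feature map.

    The scaling coefficient alpha is produced by a small two-layer
    network from the pooled positive and negative parts of x.
    """
    pos = np.maximum(x, 0.0)  # positive part, as from ReLU
    neg = np.minimum(x, 0.0)  # negative part
    # global average pooling of each part, concatenated into one vector
    stats = np.concatenate([neg.mean(axis=(0, 1)), pos.mean(axis=(0, 1))])
    hidden = np.maximum(stats @ w1 + b1, 0.0)           # Dense + ReLU
    alpha = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))   # Dense + sigmoid, in (0, 1)
    # y = max(x, 0) + alpha * min(x, 0)
    return pos + alpha * neg
```

With the single-neuron output layer used in this tuning record, `alpha` is one scalar per sample, so the whole negative part of the feature map is scaled by the same learned coefficient in (0, 1).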
The Keras code is as follows:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 14 04:17:45 2020
Implemented using TensorFlow 1.10.0 and Keras 2.2.1

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht,
Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, 
IEEE Transactions on Industrial Electronics, 2020,  DOI: 10.1109/TIE.2020.2972458 

@author: Minghang Zhao
"""

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, BatchNormalization, Activation, Minimum
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D, Concatenate, Reshape
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
K.set_learning_phase(1)

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Scale to [0, 1] and subtract the training-set mean
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_test = x_test-np.mean(x_train)
x_train = x_train-np.mean(x_train)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# Schedule the learning rate: multiply by 0.1 every 150 epochs
def scheduler(epoch):
    if epoch % 150 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * 0.1)
        print("lr changed to {}".format(lr * 0.1))
    return K.get_value(model.optimizer.lr)

# An adaptively parametric rectifier linear unit (APReLU)
def aprelu(inputs):
    # get the number of channels
    channels = inputs.get_shape().as_list()[-1]
    # get a zero feature map
    zeros_input = keras.layers.subtract([inputs, inputs])
    # get a feature map with only positive features
    pos_input = Activation('relu')(inputs)
    # get a feature map with only negative features
    neg_input = Minimum()([inputs,zeros_input])
    # define a network to obtain the scaling coefficients
    scales_p = GlobalAveragePooling2D()(pos_input)
    scales_n = GlobalAveragePooling2D()(neg_input)
    scales = Concatenate()([scales_n, scales_p])
    scales = Dense(channels//16, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('relu')(scales)
    scales = Dense(1, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('sigmoid')(scales)
    scales = Reshape((1,1,1))(scales)
    # apply a parametric relu
    neg_part = keras.layers.multiply([scales, neg_input])
    return keras.layers.add([pos_input, neg_part])

# Residual Block
def residual_block(incoming, nb_blocks, out_channels, downsample=False,
                   downsample_strides=2):
    
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    
    for i in range(nb_blocks):
        
        identity = residual
        
        if not downsample:
            downsample_strides = 1
        
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
                          padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        # Downsampling
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1,1), strides=(2,2))(identity)
            
        # Zero_padding to match channels
        if in_channels != out_channels:
            zeros_identity = keras.layers.subtract([identity, identity])
            identity = keras.layers.concatenate([identity, zeros_identity])
            in_channels = out_channels
        
        residual = keras.layers.add([residual, identity])
    
    return residual


# define and train a model
inputs = Input(shape=(32, 32, 3))
net = Conv2D(16, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 9, 32, downsample=False)
net = residual_block(net, 1, 32, downsample=True)
net = residual_block(net, 8, 32, downsample=False)
net = residual_block(net, 1, 64, downsample=True)
net = residual_block(net, 8, 64, downsample=False)
net = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(net)
net = aprelu(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
sgd = optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# data augmentation
datagen = ImageDataGenerator(
    # randomly rotate images in the range (degrees, 0 to 30)
    rotation_range=30,
    # Range for random zoom
    zoom_range = 0.2,
    # shear angle in counter-clockwise direction in degrees
    shear_range = 30,
    # randomly flip images
    horizontal_flip=True,
    # randomly shift images horizontally
    width_shift_range=0.125,
    # randomly shift images vertically
    height_shift_range=0.125)

reduce_lr = LearningRateScheduler(scheduler)
# fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train, batch_size=100),
                    validation_data=(x_test, y_test), epochs=500, 
                    verbose=1, callbacks=[reduce_lr], workers=4)

# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])
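The `scheduler` callback in the program above divides the learning rate by 10 at epochs 150, 300 and 450. The resulting schedule can be written in closed form; `lr_at_epoch` below is a hypothetical helper for checking the values, not part of the training script:

```python
def lr_at_epoch(epoch, base_lr=0.1, drop=0.1, step=150):
    """Learning rate the step schedule yields at a given epoch:
    base_lr multiplied by `drop` once per completed `step` epochs."""
    return base_lr * (drop ** (epoch // step))
```

So over the 500-epoch run the SGD learning rate passes through 0.1, 0.01, 0.001 and 0.0001.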

The experimental results are as follows:

Using TensorFlow backend.
x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
Epoch 1/500
107s 215ms/step - loss: 2.3702 - acc: 0.3922 - val_loss: 1.9601 - val_acc: 0.5235
Epoch 2/500
77s 154ms/step - loss: 1.9532 - acc: 0.5157 - val_loss: 1.6734 - val_acc: 0.5998
Epoch 3/500
77s 154ms/step - loss: 1.6989 - acc: 0.5797 - val_loss: 1.4728 - val_acc: 0.6495
Epoch 4/500
77s 154ms/step - loss: 1.5366 - acc: 0.6184 - val_loss: 1.3253 - val_acc: 0.6888
Epoch 5/500
77s 154ms/step - loss: 1.4110 - acc: 0.6444 - val_loss: 1.2022 - val_acc: 0.7197
Epoch 6/500
77s 154ms/step - loss: 1.3059 - acc: 0.6707 - val_loss: 1.1398 - val_acc: 0.7236
Epoch 7/500
77s 154ms/step - loss: 1.2295 - acc: 0.6873 - val_loss: 1.0509 - val_acc: 0.7515
Epoch 8/500
77s 154ms/step - loss: 1.1568 - acc: 0.7041 - val_loss: 0.9907 - val_acc: 0.7686
Epoch 9/500
77s 154ms/step - loss: 1.1016 - acc: 0.7207 - val_loss: 0.9470 - val_acc: 0.7863
Epoch 10/500
77s 154ms/step - loss: 1.0521 - acc: 0.7346 - val_loss: 0.9005 - val_acc: 0.7911
Epoch 11/500
77s 154ms/step - loss: 1.0246 - acc: 0.7423 - val_loss: 0.8991 - val_acc: 0.7881
Epoch 12/500
77s 154ms/step - loss: 0.9941 - acc: 0.7506 - val_loss: 0.8390 - val_acc: 0.8093
Epoch 13/500
77s 154ms/step - loss: 0.9642 - acc: 0.7602 - val_loss: 0.8239 - val_acc: 0.8147
Epoch 14/500
77s 154ms/step - loss: 0.9465 - acc: 0.7652 - val_loss: 0.8057 - val_acc: 0.8170
Epoch 15/500
77s 154ms/step - loss: 0.9296 - acc: 0.7701 - val_loss: 0.8180 - val_acc: 0.8114
Epoch 16/500
77s 154ms/step - loss: 0.9103 - acc: 0.7767 - val_loss: 0.7975 - val_acc: 0.8207
Epoch 17/500
77s 154ms/step - loss: 0.9027 - acc: 0.7801 - val_loss: 0.8048 - val_acc: 0.8186
Epoch 18/500
77s 154ms/step - loss: 0.8904 - acc: 0.7848 - val_loss: 0.7542 - val_acc: 0.8376
Epoch 19/500
77s 154ms/step - loss: 0.8765 - acc: 0.7889 - val_loss: 0.7633 - val_acc: 0.8313
Epoch 20/500
77s 154ms/step - loss: 0.8739 - acc: 0.7913 - val_loss: 0.7411 - val_acc: 0.8432
Epoch 21/500
77s 154ms/step - loss: 0.8587 - acc: 0.7976 - val_loss: 0.7357 - val_acc: 0.8466
Epoch 22/500
77s 154ms/step - loss: 0.8505 - acc: 0.7982 - val_loss: 0.7369 - val_acc: 0.8437
Epoch 23/500
77s 154ms/step - loss: 0.8495 - acc: 0.8014 - val_loss: 0.7507 - val_acc: 0.8415
Epoch 24/500
77s 154ms/step - loss: 0.8382 - acc: 0.8070 - val_loss: 0.7494 - val_acc: 0.8423
Epoch 25/500
77s 154ms/step - loss: 0.8339 - acc: 0.8097 - val_loss: 0.7374 - val_acc: 0.8441
Epoch 26/500
77s 154ms/step - loss: 0.8284 - acc: 0.8105 - val_loss: 0.7195 - val_acc: 0.8517
Epoch 27/500
77s 154ms/step - loss: 0.8244 - acc: 0.8139 - val_loss: 0.7054 - val_acc: 0.8611
Epoch 28/500
77s 154ms/step - loss: 0.8242 - acc: 0.8143 - val_loss: 0.6997 - val_acc: 0.8614
Epoch 29/500
77s 154ms/step - loss: 0.8145 - acc: 0.8186 - val_loss: 0.6966 - val_acc: 0.8598
Epoch 30/500
77s 154ms/step - loss: 0.8092 - acc: 0.8197 - val_loss: 0.7344 - val_acc: 0.8498
Epoch 31/500
77s 154ms/step - loss: 0.8048 - acc: 0.8219 - val_loss: 0.7232 - val_acc: 0.8574
Epoch 32/500
77s 154ms/step - loss: 0.8054 - acc: 0.8244 - val_loss: 0.6888 - val_acc: 0.8652
Epoch 33/500
77s 154ms/step - loss: 0.8000 - acc: 0.8231 - val_loss: 0.7236 - val_acc: 0.8533
Epoch 34/500
77s 154ms/step - loss: 0.7994 - acc: 0.8258 - val_loss: 0.7096 - val_acc: 0.8584
Epoch 35/500
77s 154ms/step - loss: 0.7933 - acc: 0.8291 - val_loss: 0.7063 - val_acc: 0.8602
Epoch 36/500
77s 154ms/step - loss: 0.7955 - acc: 0.8275 - val_loss: 0.7124 - val_acc: 0.8599
Epoch 37/500
77s 154ms/step - loss: 0.7961 - acc: 0.8280 - val_loss: 0.7020 - val_acc: 0.8650
Epoch 38/500
77s 154ms/step - loss: 0.7864 - acc: 0.8332 - val_loss: 0.7201 - val_acc: 0.8573
Epoch 39/500
77s 154ms/step - loss: 0.7949 - acc: 0.8303 - val_loss: 0.7009 - val_acc: 0.8648
Epoch 40/500
77s 154ms/step - loss: 0.7781 - acc: 0.8349 - val_loss: 0.6954 - val_acc: 0.8636
Epoch 41/500
77s 154ms/step - loss: 0.7821 - acc: 0.8352 - val_loss: 0.6819 - val_acc: 0.8736
Epoch 42/500
77s 154ms/step - loss: 0.7805 - acc: 0.8345 - val_loss: 0.7347 - val_acc: 0.8550
Epoch 43/500
77s 154ms/step - loss: 0.7749 - acc: 0.8384 - val_loss: 0.7029 - val_acc: 0.8642
Epoch 44/500
77s 154ms/step - loss: 0.7777 - acc: 0.8368 - val_loss: 0.6967 - val_acc: 0.8676
Epoch 45/500
77s 154ms/step - loss: 0.7725 - acc: 0.8393 - val_loss: 0.6867 - val_acc: 0.8722
Epoch 46/500
77s 154ms/step - loss: 0.7737 - acc: 0.8408 - val_loss: 0.7075 - val_acc: 0.8644
Epoch 47/500
77s 154ms/step - loss: 0.7734 - acc: 0.8395 - val_loss: 0.6958 - val_acc: 0.8667
Epoch 48/500
77s 154ms/step - loss: 0.7750 - acc: 0.8404 - val_loss: 0.6956 - val_acc: 0.8701
Epoch 49/500
77s 154ms/step - loss: 0.7691 - acc: 0.8417 - val_loss: 0.6977 - val_acc: 0.8677
Epoch 50/500
77s 154ms/step - loss: 0.7661 - acc: 0.8433 - val_loss: 0.7094 - val_acc: 0.8683
Epoch 51/500
77s 154ms/step - loss: 0.7638 - acc: 0.8469 - val_loss: 0.6972 - val_acc: 0.8678
Epoch 52/500
77s 154ms/step - loss: 0.7613 - acc: 0.8455 - val_loss: 0.7113 - val_acc: 0.8676
Epoch 53/500
77s 154ms/step - loss: 0.7647 - acc: 0.8460 - val_loss: 0.6946 - val_acc: 0.8692
Epoch 54/500
77s 154ms/step - loss: 0.7572 - acc: 0.8468 - val_loss: 0.7242 - val_acc: 0.8628
Epoch 55/500
77s 154ms/step - loss: 0.7560 - acc: 0.8504 - val_loss: 0.7084 - val_acc: