ResNet + Adaptively Parametric ReLU (Tuning Log 25): CIFAR-10 ~95.77%

Building on the previous tuning logs, this run first sharply reduces the number of fully connected neurons inside the adaptively parametric ReLU, in the hope of both easing training and reducing overfitting; it then raises the number of epochs to 1000 and continues testing the ResNet + adaptively parametric ReLU activation function on CIFAR-10.

The basic principle of the adaptively parametric ReLU (APReLU) activation function is as follows:
[Figure: schematic of the APReLU activation function]
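In rough form, APReLU computes y = max(x, 0) + α·min(x, 0), where the scaling coefficient α is produced per input sample by a small fully connected sub-network (the one whose neuron count was reduced in this run). A minimal NumPy sketch with α held as a fixed scalar, for illustration only (`aprelu_forward` is my name, not from the program below):

```python
import numpy as np

def aprelu_forward(x, alpha):
    """Piecewise form of APReLU: the positive part passes through
    unchanged, the negative part is scaled by a coefficient alpha.
    In the real network alpha is learned per sample by a small
    fully connected sub-network; here it is fixed for illustration."""
    pos = np.maximum(x, 0.0)   # positive feature map
    neg = np.minimum(x, 0.0)   # negative feature map
    return pos + alpha * neg

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(aprelu_forward(x, 0.3))  # negatives shrunk to -0.6, -0.15; positives unchanged
```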
Keras code:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 14 04:17:45 2020
Implemented using TensorFlow 1.10.0 and Keras 2.2.1

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht,
Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, 
IEEE Transactions on Industrial Electronics, DOI: 10.1109/TIE.2020.2972458,
Date of Publication: 13 February 2020

@author: Minghang Zhao
"""

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, BatchNormalization, Activation, Minimum, Lambda
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D, Concatenate, Reshape
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
K.set_learning_phase(1)

def cal_mean(inputs):
    outputs = K.mean(inputs, axis=1, keepdims=True)
    return outputs

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_test = x_test-np.mean(x_train)
x_train = x_train-np.mean(x_train)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# Schedule the learning rate: multiply by 0.1 every 300 epochs
def scheduler(epoch):
    if epoch % 300 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * 0.1)
        print("lr changed to {}".format(lr * 0.1))
    return K.get_value(model.optimizer.lr)

# An adaptively parametric rectifier linear unit (APReLU)
def aprelu(inputs):
    # get the number of channels
    channels = inputs.get_shape().as_list()[-1]
    # get a zero feature map
    zeros_input = keras.layers.subtract([inputs, inputs])
    # get a feature map with only positive features
    pos_input = Activation('relu')(inputs)
    # get a feature map with only negative features
    neg_input = Minimum()([inputs,zeros_input])
    # define a network to obtain the scaling coefficients
    scales_p = Lambda(cal_mean)(GlobalAveragePooling2D()(pos_input))
    scales_n = Lambda(cal_mean)(GlobalAveragePooling2D()(neg_input))
    scales = Concatenate()([scales_n, scales_p])
    scales = Dense(2, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('relu')(scales)
    scales = Dense(1, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('sigmoid')(scales)
    scales = Reshape((1,1,1))(scales)
    # apply the parametric relu: scale the negative part, then add back the positive part
    neg_part = keras.layers.multiply([scales, neg_input])
    return keras.layers.add([pos_input, neg_part])

# Residual Block
def residual_block(incoming, nb_blocks, out_channels, downsample=False,
                   downsample_strides=2):
    
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    
    for i in range(nb_blocks):
        
        identity = residual
        
        if not downsample:
            downsample_strides = 1
        
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
                          padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        # Downsampling
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1,1), strides=(2,2))(identity)
            
        # Zero_padding to match channels
        if in_channels != out_channels:
            zeros_identity = keras.layers.subtract([identity, identity])
            identity = keras.layers.concatenate([identity, zeros_identity])
            in_channels = out_channels
        
        residual = keras.layers.add([residual, identity])
    
    return residual


# define and train a model
inputs = Input(shape=(32, 32, 3))
net = Conv2D(64, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 20,  64, downsample=False)
net = residual_block(net, 1,  128, downsample=True)
net = residual_block(net, 19, 128, downsample=False)
net = residual_block(net, 1,  256, downsample=True)
net = residual_block(net, 19, 256, downsample=False)
net = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(net)
net = aprelu(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
sgd = optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# data augmentation
datagen = ImageDataGenerator(
    # randomly rotate images in the range (0 to 30 degrees)
    rotation_range=30,
    # Range for random zoom
    zoom_range = 0.2,
    # shear angle in counter-clockwise direction in degrees
    shear_range = 30,
    # randomly flip images
    horizontal_flip=True,
    # randomly shift images horizontally
    width_shift_range=0.125,
    # randomly shift images vertically
    height_shift_range=0.125)

reduce_lr = LearningRateScheduler(scheduler)
# fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train, batch_size=100),
                    validation_data=(x_test, y_test), epochs=1000, 
                    verbose=1, callbacks=[reduce_lr], workers=4)

# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])
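The `scheduler` callback above implements a simple step decay: the learning rate starts at 0.1 and is divided by 10 every 300 epochs. A standalone sketch of the same schedule (the function name `step_lr` is mine, for illustration):

```python
def step_lr(epoch, base_lr=0.1, drop=0.1, every=300):
    """Step decay matching the scheduler callback: multiply the
    learning rate by `drop` once every `every` epochs."""
    return base_lr * (drop ** (epoch // every))

# Over the 1000-epoch run the rate steps through four plateaus:
for e in (0, 299, 300, 599, 600, 900):
    print(e, step_lr(e))  # 0.1 up to epoch 299, then 0.01, 0.001, 0.0001
```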

Results:

Using TensorFlow backend.
x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
Epoch 1/1000
413s 826ms/step - loss: 6.7483 - acc: 0.3694 - val_loss: 5.9301 - val_acc: 0.4997
Epoch 2/1000
330s 659ms/step - loss: 5.5312 - acc: 0.4941 - val_loss: 4.9301 - val_acc: 0.5792
Epoch 3/1000
330s 659ms/step - loss: 4.6439 - acc: 0.5593 - val_loss: 4.0912 - val_acc: 0.6487
Epoch 4/1000
329s 659ms/step - loss: 3.9263 - acc: 0.6101 - val_loss: 3.4891 - val_acc: 0.6778
Epoch 5/1000
329s 659ms/step - loss: 3.3595 - acc: 0.6415 - val_loss: 2.9574 - val_acc: 0.7220
Epoch 6/1000
334s 668ms/step - loss: 2.8985 - acc: 0.6704 - val_loss: 2.5622 - val_acc: 0.7322
Epoch 7/1000
330s 660ms/step - loss: 2.5229 - acc: 0.6917 - val_loss: 2.2063 - val_acc: 0.7612
Epoch 8/1000
329s 659ms/step - loss: 2.2063 - acc: 0.7167 - val_loss: 1.9074 - val_acc: 0.7861
Epoch 9/1000
329s 659ms/step - loss: 1.9480 - acc: 0.7339 - val_loss: 1.6790 - val_acc: 0.8023
Epoch 10/1000
330s 659ms/step - loss: 1.7331 - acc: 0.7527 - val_loss: 1.5017 - val_acc: 0.8113
Epoch 11/1000
330s 659ms/step - loss: 1.5656 - acc: 0.7652 - val_loss: 1.3648 - val_acc: 0.8174
Epoch 12/1000
330s 659ms/step - loss: 1.4253 - acc: 0.7756 - val_loss: 1.2259 - val_acc: 0.8327
Epoch 13/1000
329s 659ms/step - loss: 1.3032 - acc: 0.7877 - val_loss: 1.1070 - val_acc: 0.8442
Epoch 14/1000
330s 659ms/step - loss: 1.2136 - acc: 0.7941 - val_loss: 1.0533 - val_acc: 0.8408
Epoch 15/1000
329s 659ms/step - loss: 1.1363 - acc: 0.8009 - val_loss: 0.9825 - val_acc: 0.8517
Epoch 16/1000
331s 661ms/step - loss: 1.0679 - acc: 0.8111 - val_loss: 0.9233 - val_acc: 0.8563
Epoch 17/1000
330s 660ms/step - loss: 1.0143 - acc: 0.8168 - val_loss: 0.8913 - val_acc: 0.8569
Epoch 18/1000
330s 660ms/step - loss: 0.9731 - acc: 0.8215 - val_loss: 0.8578 - val_acc: 0.8561
Epoch 19/1000
330s 660ms/step - loss: 0.9397 - acc: 0.8261 - val_loss: 0.8226 - val_acc: 0.8675
Epoch 20/1000
330s 659ms/step - loss: 0.9073 - acc: 0.8301 - val_loss: 0.7973 - val_acc: 0.8697
Epoch 21/1000
329s 659ms/step - loss: 0.8823 - acc: 0.8342 - val_loss: 0.7837 - val_acc: 0.8693
Epoch 22/1000
331s 662ms/step - loss: 0.8688 - acc: 0.8369 - val_loss: 0.7693 - val_acc: 0.8736
Epoch 23/1000
332s 665ms/step - loss: 0.8479 - acc: 0.8425 - val_loss: 0.7626 - val_acc: 0.8730
Epoch 24/1000
329s 659ms/step - loss: 0.8329 - acc: 0.8453 - val_loss: 0.7745 - val_acc: 0.8671
Epoch 25/1000
330s 660ms/step - loss: 0.8271 - acc: 0.8462 - val_loss: 0.7271 - val_acc: 0.8833
Epoch 26/1000
331s 661ms/step - loss: 0.8121 - acc: 0.8516 - val_loss: 0.7155 - val_acc: 0.8883
Epoch 27/1000
330s 659ms/step - loss: 0.8008 - acc: 0.8543 - val_loss: 0.7522 - val_acc: 0.8707
Epoch 28/1000
330s 659ms/step - loss: 0.7959 - acc: 0.8558 - val_loss: 0.7200 - val_acc: 0.8799
Epoch 29/1000
329s 659ms/step - loss: 0.7937 - acc: 0.8563 - val_loss: 0.7196 - val_acc: 0.8855
Epoch 30/1000
329s 659ms/step - loss: 0.7843 - acc: 0.8602 - val_loss: 0.7285 - val_acc: 0.8808
Epoch 31/1000
330s 659ms/step - loss: 0.7791 - acc: 0.8606 - val_loss: 0.7120 - val_acc: 0.8879
Epoch 32/1000
329s 658ms/step - loss: 0.7781 - acc: 0.8612 - val_loss: 0.7139 - val_acc: 0.8840
Epoch 33/1000
329s 659ms/step - loss: 0.7650 - acc: 0.8660 - val_loss: 0.7243 - val_acc: 0.8814
Epoch 34/1000
329s 658ms/step - loss: 0.7686 - acc: 0.8669 - val_loss: 0.7223 - val_acc: 0.8811
Epoch 35/1000
329s 659ms/step - loss: 0.7694 - acc: 0.8662 - val_loss: 0.6995 - val_acc: 0.8938
Epoch 36/1000
329s 659ms/step - loss: 0.7618 - acc: 0.8679 - val_loss: 0.7032 - val_acc: 0.8919
Epoch 37/1000
329s 658ms/step - loss: 0.7542 - acc: 0.8709 - val_loss: 0.7115 - val_acc: 0.8868
Epoch 38/1000
329s 659ms/step - loss: 0.7525 - acc: 0.8719 - val_loss: 0.7282 - val_acc: 0.8791
Epoch 39/1000
329s 658ms/step - loss: 0.7528 - acc: 0.8727 - val_loss: 0.7046 - val_acc: 0.8898
Epoch 40/1000
329s 658ms/step - loss: 0.7476 - acc: 0.8748 - val_loss: 0.7071 - val_acc: 0.8927
Epoch 41/1000
329s 659ms/step - loss: 0.7429 - acc: 0.8762 - val_loss: 0.7195 - val_acc: 0.8852
Epoch 42/1000
329s 658ms/step - loss: 0.7487 - acc: 0.8758 - val_loss: 0.7064 - val_acc: 0.8936
Epoch 43/1000
329s 658ms/step - loss: 0.7423 - acc: 0.8769 - val_loss: 0.6941 - val_acc: 0.8959
Epoch 44/1000
330s 660ms/step - loss: 0.7357 - acc: 0.8802 - val_loss: 0.7097 - val_acc: 0.8921
Epoch 45/1000
329s 659ms/step - loss: 0.7385 - acc: 0.8796 - val_loss: 0.6845 - val_acc: 0.8997
Epoch 46/1000
332s 664ms/step - loss: 0.7331 - acc: 0.8821 - val_loss: 0.7265 - val_acc: 0.8870
Epoch 47/1000
329s 658ms/step - loss: 0.7358 - acc: 0.8804 - val_loss: 0.7062 - val_acc: 0.8947
Epoch 48/1000
329s 658ms/step - loss: 0.7357 - acc: 0.8812 - val_loss: 0.7007 - val_acc: 0.8932
Epoch 49/1000
329s 658ms/step - loss: 0.7338 - acc: 0.8832 - val_loss: 0.6983 - val_acc: 0.8989
Epoch 50/1000
330s 659ms/step - loss: 0.7268 - acc: 0.8852 - val_loss: 0.7136 - val_acc: 0.8940
Epoch 51/1000
329s 659ms/step - loss: 0.7327 - acc: 0.8845 - val_loss: 0.7076 - val_acc: 0.8957
Epoch 52/1000
329s 658ms/step - loss: 0.7255 - acc: 0.8858 - val_loss: 0.6909 - val_acc: 0.8995
Epoch 53/1000
329s 658ms/step - loss: 0.7298 - acc: 0.8860 - val_loss: 0.7088 - val_acc: 0.8956
Epoch 54/1000
331s 661ms/step - loss: 0.7237 - acc: 0.8884 - val_loss: 0.7069 - val_acc: 0.8960
Epoch 55/1000
329s 659ms/step - loss: 0.7288 - acc: 0.8863 - val_loss: 0.6929 - val_acc: 0.8984
Epoch 56/1000
329s 658ms/step - loss: 0.7208 - acc: 0.8890 - val_loss: 0.6925 - val_acc: 0.9017
Epoch 57/1000
330s 659ms/step - loss: 0.7187 - acc: 0.8898 - val_loss: 0.7150 - val_acc: 0.8944
Epoch 58/1000
330s 659ms/step - loss: 0.7148 - acc: 0.8920 - val_loss: 0.6766 - val_acc: 0.9060
Epoch 59/1000
329s 659ms/step - loss: 0.7185 - acc: 0.8905 - val_loss: 0.7031 - val_acc: 0.8951
Epoch 60/1000
329s 658ms/step - loss: 0.7192 - acc: 0.8904 - val_loss: 0.6956 - val_acc: 0.9020
Epoch 61/1000
329s 659ms/step - loss: 0.7209 - acc: 0.8895 - val_loss: 0.6967 - val_acc: 0.8997
Epoch 62/1000
329s 658ms/step - loss: 0.7083 - acc: 0.8936 - val_loss: 0.6806 - val_acc: 0.9048
Epoch 63/1000
329s 658ms/step - loss: 0.7123 - acc: 0.8926 - val_loss: 0.6832 - val_acc: 0.9042
Epoch 64/1000
329s 658ms/step - loss: 0.7085 - acc: 0.8939 - val_loss: 0.7000 - val_acc: 0.8973
Epoch 65/1000
330s 660ms/step - loss: 0.7049 - acc: 0.8963 - val_loss: 0.6993 - val_acc: 0.8969
Epoch 66/1000
330s 660ms/step - loss: 0.7062 - acc: 0.8950 - val_loss: 0.6977 - val_acc: 0.9044
Epoch 67/1000
329s 659ms/step - loss: 0.7127 - acc: 0.8938 - val_loss: 0.7069 - val_acc: 0.8977
Epoch 68/1000
329s 657ms/step - loss: 0.7010 - acc: 0.8960 - val_loss: 0.7052 - val_acc: 0.8994
Epoch 69/1000
329s 658ms/step - loss: 0.7096 - acc: 0.8934 - val_loss: 0.6982 - val_acc: 0.9027
Epoch 70/1000
329s 658ms/step - loss: 0.7118 - acc: 0.8947 - val_loss: 0.6850 - val_acc: 0.9053
Epoch 71/1000
329s 658ms/step - loss: 0.7076 - acc: 0.8953 - val_loss: 0.6798 - val_acc: 0.9053
Epoch 72/1000
329s 658ms/step - loss: 0.6998 - acc: 0.8981 - val_loss: 0.7312 - val_acc: 0.8905
Epoch 73/1000
329s 658ms/step - loss: 0.7049 - acc: 0.8969 - val_loss: 0.7042 - val_acc: 0.8996
Epoch 74/1000
329s 658ms/step - loss: 0.7094 - acc: 0.8959 - val_loss: 0.6987 - val_acc: 0.9020
Epoch 75/1000
329s 658ms/step - loss: 0.7045 - acc: 0.8963 - val_loss: 0.6909 - val_acc: 0.9056
Epoch 76/1000
329s 658ms/step - loss: 0.7028 - acc: 0.8980 - val_loss: 0.7257 - val_acc: 0.8918
Epoch 77/1000
329s 657ms/step - loss: 0.7059 - acc: 0.8956 - val_loss: 0.6813 - val_acc: 0.9097
Epoch 78/1000
329s 658ms/step - loss: 0.6996 - acc: 0.9006 - val_loss: 0.6942 - val_acc: 0.9081
Epoch 79/1000
329s 658ms/step - loss: 0.6988 - acc: 0.8996 - val_loss: 0.6801 - val_acc: 0.9059
Epoch 80/1000
329s 657ms/step - loss: 0.6983 - acc: 0.8990 - val_loss: 0.6759 - val_acc: 0.9081
Epoch 81/1000
329s 658ms/step - loss: 0.6968 - acc: 0.9012 - val_loss: 0.6868 - val_acc: 0.9052
Epoch 82/1000
329s 658ms/step - loss: 0.6976 - acc: 0.9000 - val_loss: 0.6945 - val_acc: 0.9028
Epoch 83/1000
329s 658ms/step - loss: 0.6974 - acc: 0.8999 - val_loss: 0.7020 - val_acc: 0.9010
Epoch 84/1000
329s 658ms/step - loss: 0.6984 - acc: 0.9003 - val_loss: 0.7220 - val_acc: 0.8956
Epoch 85/1000
329s 658ms/step - loss: 0.7054 - acc: 0.8974 - val_loss: 0.6846 - val_acc: 0.9082
Epoch 86/1000
329s 657ms/step - loss: 0.6974 - acc: 0.9005 - val_loss: 0.6880 - val_acc: 0.9085
Epoch 87/1000
329s 658ms/step - loss: 0.6989 - acc: 0.8998 - val_loss: 0.6956 - val_acc: 0.9035
Epoch 88/1000
329s 658ms/step - loss: 0.7002 - acc: 0.8998 - val_loss: 0.6852 - val_acc: 0.9056
Epoch 89/1000
329s 657ms/step - loss: 0.7005 - acc: 0.9004 - val_loss: 0.6891 - val_acc: 0.9077
Epoch 90/1000
329s 658ms/step - loss: 0.6945 - acc: 0.9030 - val_loss: 0.6880 - val_acc: 0.9071
Epoch 91/1000
329s 658ms/step - loss: 0.6999 - acc: 0.9011 - val_loss: 0.6868 - val_acc: 0.9064
Epoch 92/1000
329s 658ms/step - loss: 0.6976 - acc: 0.9028 - val_loss: 0.6964 - val_acc: 0.9022
Epoch 93/1000
329s 658ms/step - loss: 0.6923 - acc: 0.9045 - val_loss: 0.6712 - val_acc: 0.9126
Epoch 94/1000
329s 657ms/step - loss: 0.6981 - acc: 0.9020 - val_loss: 0.6813 - val_acc: 0.9084
Epoch 95/1000
329s 657ms/step - loss: 0.6952 - acc: 0.9029 - val_loss: 0.6713 - val_acc: 0.9135
Epoch 96/1000
329s 657ms/step - loss: 0.6958 - acc: 0.9032 - val_loss: 0.6835 - val_acc: 0.9084
Epoch 97/1000
329s 657ms/step - loss: 0.6889 - acc: 0.9045 - val_loss: 0.6818 - val_acc: 0.9053
Epoch 98/1000
329s 657ms/step - loss: 0.6997 - acc: 0.9009 - val_loss: 0.6886 - val_acc: 0.9075
Epoch 99/1000
329s 657ms/step - loss: 0.6948 - acc: 0.9029 - val_loss: 0.6872 - val_acc: 0.9057
Epoch 100/1000
329s 657ms/step - loss: 0.6925 - acc: 0.9044 - val_loss: 0.6992 - val_acc: 0.9051
Epoch 101/1000
329s 657ms/step - loss: 0.6948 - acc: 0.9033 - val_loss: 0.7153 - val_acc: 0.8990
Epoch 102/1000
328s 657ms/step - loss: 0.6998 - acc: 0.9025 - val_loss: 0.6735 - val_acc: 0.9155
Epoch 103/1000
329s 657ms/step - loss: 0.6931 - acc: 0.9037 - val_loss: 0.6907 - val_acc: 0.9052
Epoch 104/1000
329s 657ms/step - loss: 0.6941 - acc: 0.9048 - val_loss: 0.7109 - val_acc: 0.9046
Epoch 105/1000
329s 658ms/step - loss: 0.6902 - acc: 0.9066 - val_loss: 0.6744 - val_acc: 0.9114
Epoch 106/1000
329s 657ms/step - loss: 0.6944 - acc: 0.9038 - val_loss: 0.6758 - val_acc: 0.9119
Epoch 107/1000
329s 658ms/step - loss: 0.6895 - acc: 0.9036 - val_loss: 0.7041 - val_acc: 0.9024
Epoch 108/1000
329s 657ms/step - loss: 0.6976 - acc: 0.9026 - val_loss: 0.6959 - val_acc: 0.9065
Epoch 109/1000
329s 657ms/step - loss: 0.6890 - acc: 0.9063 - val_loss: 0.6866 - val_acc: 0.9125
Epoch 110/1000
329s 657ms/step - loss: 0.6897 - acc: 0.9062 - val_loss: 0.6786 - val_acc: 0.9150
Epoch 111/1000
329s 657ms/step - loss: 0.6941 - acc: 0.9061 - val_loss: 0.6870 - val_acc: 0.9100
Epoch 112/1000
329s 657ms/step - loss: 0.6916 - acc: 0.9052 - val_loss: 0.6836 - val_acc: 0.9069
Epoch 113/1000
329s 657ms/step - loss: 0.6908 - acc: 0.9040 - val_loss: 0.6841 - val_acc: 0.9108
Epoch 114/1000
329s 657ms/step - loss: 0.6907 - acc: 0.9059 - val_loss: 0.6831 - val_acc: 0.9102
Epoch 115/1000
328s 657ms/step - loss: 0.6904 - acc: 0.9063 - val_loss: 0.6852 - val_acc: 0.9101
Epoch 116/1000
329s 657ms/step - loss: 0.6940 - acc: 0.9051 - val_loss: 0.6977 - val_acc: 0.9069
Epoch 117/1000
329s 657ms/step - loss: 0.6909 - acc: 0.9075 - val_loss: 0.6904 - val_acc: 0.9110
Epoch 118/1000
329s 658ms/step - loss: 0.6888 - acc: 0.9066 - val_loss: 0.6845 - val_acc: 0.9144
Epoch 119/1000
328s 657ms/step - loss: 0.6890 - acc: 0.9058 - val_loss: 0.6954 - val_acc: 0.9076
Epoch 120/1000
329s 657ms/step - loss: 0.6893 - acc: 0.9059 - val_loss: 0.6790 - val_acc: 0.9125
Epoch 121/1000
329s 657ms/step - loss: 0.6840 - acc: 0.9082 - val_loss: 0.6813 - val_acc: 0.9088
Epoch 122/1000
329s 657ms/step - loss: 0.6966 - acc: 0.9032 - val_loss: 0.6816 - val_acc: 0.9079
Epoch 123/1000
329s 657ms/step - loss: 0.6869 - acc: 0.9093 - val_loss: 0.6970 - val_acc: 0.9057
Epoch 124/1000
329s 657ms/step - loss: 0.6907 - acc: 0.9052 - val_loss: 0.6734 - val_acc: 0.9143
Epoch 125/1000
328s 657ms/step - loss: 0.6886 - acc: 0.9074 - val_loss: 0.6811 - val_acc: 0.9102
Epoch 126/1000
329s 657ms/step - loss: 0.6862 - acc: 0.9082 - val_loss: 0.7166 - val_acc: 0.8981
Epoch 127/1000
329s 657ms/step - loss: 0.6882 - acc: 0.9078 - val_loss: 0.6780 - val_acc: 0.9125
Epoch 128/1000
329s 657ms/step - loss: 0.6972 - acc: 0.9036 - val_loss: 0.6861 - val_acc: 0.9110
Epoch 129/1000
329s 657ms/step - loss: 0.6848 - acc: 0.9099 - val_loss: 0.6905 - val_acc: 0.9069
Epoch 130/1000
329s 657ms/step - loss: 0.6868 - acc: 0.9072 - val_loss: 0.6773 - val_acc: 0.9150
Epoch 131/1000
329s 657ms/step - loss: 0.6915 - acc: 0.9070 - val_loss: 0.6852 - val_acc: 0.9113
Epoch 132/1000
329s 657ms/step - loss: 0.6879 - acc: 0.9075 - val_loss: 0.6821 - val_acc: 0.9116
Epoch 133/1000
328s 657ms/step - loss: 0.6903 - acc: 0.9068 - val_loss: 0.6854 - val_acc: 0.9102
Epoch 134/1000
329s 657ms/step - loss: 0.6888 - acc: 0.9075 - val_loss: 0.6987 - val_acc: 0.9077
Epoch 135/1000
329s 658ms/step - loss: 0.6888 - acc: 0.9074 - val_loss: 0.7070 - val_acc: 0.9055
Epoch 136/1000
329s 657ms/step - loss: 0.6909 - acc: 0.9089 - val_loss: 0.6839 - val_acc: 0.9133
Epoch 137/1000
329s 657ms/step - loss: 0.6866 - acc: 0.9087 - val_loss: 0.6873 - val_acc: 0.9091
Epoch 138/1000
328s 657ms/step - loss: 0.6916 - acc: 0.9067 - val_loss: 0.6984 - val_acc: 0.9073
Epoch 139/1000
329s 658ms/step - loss: 0.6905 - acc: 0.9070 - val_loss: 0.6842 - val_acc: 0.9137
Epoch 140/1000
331s 662ms/step - loss: 0.6863 - acc: 0.9084 - val_loss: 0.6936 - val_acc: 0.9110
Epoch 141/1000
329s 657ms/step - loss: 0.6863 - acc: 0.9090 - val_loss: 0.7201 - val_acc: 0.9043
Epoch 142/1000
328s 657ms/step - loss: 0.6855 - acc: 0.9089 - val_loss: 0.6890 - val_acc: 0.9100
Epoch 143/1000
329s 657ms/step - loss: 0.6881 - acc: 0.9071 - val_loss: 0.6997 - val_acc: 0.9022
Epoch 144/1000
329s 657ms/step - loss: 0.6860 - acc: 0.9087 - val_loss: 0.6794 - val_acc: 0.9121
Epoch 145/1000
328s 657ms/step - loss: 0.6878 - acc: 0.9079 - val_loss: 0.6919 - val_acc: 0.9096
Epoch 146/1000
329s 657ms/step - loss: 0.6860 - acc: 0.9076 - val_loss: 0.6870 - val_acc: 0.9115
Epoch 147/1000
328s 657ms/step - loss: 0.6836 - acc: 0.9112 - val_loss: 0.6757 - val_acc: 0.9140
Epoch 148/1000
329s 658ms/step - loss: 0.6876 - acc: 0.9068 - val_loss: 0.6843 - val_acc: 0.9123
Epoch 149/1000
329s 657ms/step - loss: 0.6857 - acc: 0.9086 - val_loss: 0.6851 - val_acc: 0.9114
Epoch 150/1000
329s 659ms/step - loss: 0.6832 - acc: 0.9096 - val_loss: 0.6798 - val_acc: 0.9142
Epoch 151/1000
329s 659ms/step - loss: 0.6887 - acc: 0.9081 - val_loss: 0.6761 - val_acc: 0.9161
Epoch 152/1000
329s 658ms/step - loss: 0.6850 - acc: 0.9090 - val_loss: 0.7138 - val_acc: 0.9042
Epoch 153/1000
332s 664ms/step - loss: 0.6845 - acc: 0.9101 - val_loss: 0.6783 - val_acc: 0.9137
Epoch 154/1000
330s 659ms/step - loss: 0.6879 - acc: 0.9095 - val_loss: 0.6898 - val_acc: 0.9070
Epoch 155/1000
... (remaining training log truncated)