1. Key concepts
- Classic convolutional networks
LeNet-5: small input with a single color channel; no padding, so the spatial size of the image shrinks layer by layer.
AlexNet: much larger input with 3 color channels, and far more filter parameters.
VGG-16: every conv filter is 3x3 with stride 1 and "same" padding; every pooling layer is 2x2 with stride 2. (A sketch of one such block follows.)
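A minimal Keras sketch of one VGG-style block matching those hyperparameters (the 224x224 input size and the filter count of 64 are illustrative; this is not the full 16-layer network):

from keras.layers import Input, Conv2D, MaxPooling2D
from keras.models import Model

# Two 3x3 convs (stride 1, "same" padding) followed by 2x2 max pooling with stride 2.
X_input = Input((224, 224, 3))
X = Conv2D(64, (3, 3), strides=(1, 1), padding="same", activation="relu")(X_input)
X = Conv2D(64, (3, 3), strides=(1, 1), padding="same", activation="relu")(X)
X = MaxPooling2D((2, 2), strides=(2, 2))(X)  # halves height and width: 224 -> 112
vgg_block = Model(inputs=X_input, outputs=X)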
- ResNet
Plain network: in practice, training error starts to go back up once a plain network gets very deep, because the deeper network becomes hard to optimize.
Network with shortcuts: add a direct connection from a[l] to a[l+2], so that a[l+2] = g(z[l+2] + a[l]). With shortcuts, training error keeps decreasing as the network gets deeper.
- Why ResNet works well: the shortcut makes the identity function easy for a residual block to learn. If the block's weights and bias shrink toward zero (e.g. under L2 regularization), then a[l+2] = g(a[l]) = a[l], since ReLU leaves the non-negative activations unchanged. Adding a residual block therefore cannot hurt training performance, and it helps whenever the block learns something beyond the identity.
- 1x1 convolution: adjusts the channel dimension of a volume. For an input of shape (n_H, n_W, n_C), each filter has shape (1, 1, n_C), and the output has shape (n_H, n_W, number of filters); see the sketch below.
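A quick sketch of that shape change (the 28x28x192 input and the 32 filters are made-up numbers):

from keras.layers import Input, Conv2D
from keras.models import Model

# A 1x1 convolution shrinks 192 channels to 32 without touching height or width.
X_input = Input((28, 28, 192))
X = Conv2D(32, (1, 1), strides=(1, 1), padding="valid")(X_input)
print(Model(inputs=X_input, outputs=X).output_shape)  # (None, 28, 28, 32)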
- Inception network: apply several different filter sizes (and pooling) to the same input, padding so the spatial size stays unchanged; the output channel count is the sum of the channels produced by each branch.
Main structural ideas of the Inception network:
A 1x1 convolution used as a bottleneck (transition) layer cuts the computational cost dramatically.
Inception module: combines the 1x1 bottleneck with the multi-filter, channel-concatenation idea (see the sketch below).
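A minimal sketch of one Inception-style module built from those pieces (the branch filter counts are illustrative):

from keras.layers import Input, Conv2D, MaxPooling2D, concatenate
from keras.models import Model

X_input = Input((28, 28, 192))
# Branch 1: plain 1x1 convolution.
b1 = Conv2D(64, (1, 1), padding="same", activation="relu")(X_input)
# Branch 2: 1x1 bottleneck, then 3x3 convolution.
b2 = Conv2D(96, (1, 1), padding="same", activation="relu")(X_input)
b2 = Conv2D(128, (3, 3), padding="same", activation="relu")(b2)
# Branch 3: 1x1 bottleneck, then 5x5 convolution.
b3 = Conv2D(16, (1, 1), padding="same", activation="relu")(X_input)
b3 = Conv2D(32, (5, 5), padding="same", activation="relu")(b3)
# Branch 4: 3x3 max pooling (stride 1, "same"), then a 1x1 convolution.
b4 = MaxPooling2D((3, 3), strides=(1, 1), padding="same")(X_input)
b4 = Conv2D(32, (1, 1), padding="same", activation="relu")(b4)
# Spatial size is unchanged; channels add up: 64 + 128 + 32 + 32 = 256.
X = concatenate([b1, b2, b3, b4], axis=3)
inception_module = Model(inputs=X_input, outputs=X)  # output (None, 28, 28, 256)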
- Transfer learning: reuse another researcher's architecture and trained weights as the front half of your own model, or as initialization in place of random weights, to speed up and improve your own training.
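A hedged sketch of the idea, assuming the pretrained ResNet50 shipped with keras.applications (the new head and its class count of 6 are placeholders):

from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Reuse the ImageNet weights as the front of the network and freeze them.
base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False
# Only this small new head gets trained on our own data.
X = GlobalAveragePooling2D()(base.output)
X = Dense(6, activation="softmax")(X)
transfer_model = Model(inputs=base.input, outputs=X)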
- Data augmentation: when training data is scarce, enlarge the dataset from the existing images. Common methods: mirror flipping, random cropping, and color shifting.
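A minimal sketch with Keras's ImageDataGenerator (random cropping is not built in, so small width/height shifts stand in for it here; the ranges are arbitrary):

from keras.preprocessing.image import ImageDataGenerator

# Mirror flips, small shifts as a crop stand-in, and channel (color) shifts.
datagen = ImageDataGenerator(horizontal_flip=True,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             channel_shift_range=20.0)
# Typical usage: model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32), ...)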
- State of computer vision: the less data you have, the more hand-engineering (hand-crafted features, carefully designed architectures) is needed.
2. Worked example: building a residual network with Keras
- Implementation outline:
1. Build the identity block (input and output have the same dimensions):
Conv with F1 filters of size (1,1), stride (1,1), "valid" padding; BatchNorm over the channel axis; ReLU.
Conv with F2 filters of size (f,f), stride (1,1), "same" padding; BatchNorm; ReLU.
Conv with F3 filters of size (1,1), stride (1,1), "valid" padding; BatchNorm; no ReLU yet.
Add the original input to the main-path output, then apply ReLU.
2. Build the convolutional block (input and output dimensions differ):
Conv with F1 filters of size (1,1), stride (s,s), "valid" padding; BatchNorm; ReLU.
Conv with F2 filters of size (f,f), stride (1,1), "same" padding; BatchNorm; ReLU.
Conv with F3 filters of size (1,1), stride (1,1), "valid" padding; BatchNorm; no ReLU yet.
Shortcut path: conv with F3 filters of size (1,1), stride (s,s), "valid" padding; BatchNorm.
Add the shortcut output to the main-path output, then apply ReLU.
3. Build the full residual network. (This is what solves the problem that very deep plain convolutional networks cannot be trained: the skip connections give gradients a direct path back to earlier layers, and each block can cheaply fall back to the identity mapping, so extra depth does not degrade training error.)
Zero-pad the input.
Conv with 64 filters of size (7,7), stride (2,2); BatchNorm; ReLU; then max pooling with a (3,3) window and stride (2,2).
Stack convolutional blocks and identity blocks.
Average pooling with a (2,2) window.
Flatten, then a fully connected layer with softmax activation.
# Import Keras and helper packages
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
import kt_utils
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
Using TensorFlow backend.
# Load the dataset; each input image is 64x64x3
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = kt_utils.load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)
def model(input_shape):
    # Define a placeholder tensor with dimensions input_shape
    X_input = Input(input_shape)

    # Zero-padding: pad the border of X_input with zeros
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU block applied to X
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)

    # Max pooling
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # Flatten to a vector + fully connected layer
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create the model instance, which we can then train and evaluate
    model = Model(inputs=X_input, outputs=X, name='HappyModel')

    return model
def HappyModel(input_shape):
    """
    Implements a smile-detection model.

    Arguments:
    input_shape -- shape of the input data

    Returns:
    model -- the created Keras model
    """
    # You can follow the same outline as the model() example above
    X_input = Input(input_shape)

    # Zero-padding: pad the border of X_input with zeros
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU block applied to X
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)

    # Max pooling
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # Flatten to a vector + fully connected layer
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create the model instance, which we can then train and evaluate
    model = Model(inputs=X_input, outputs=X, name='HappyModel')

    return model
# Create an instance of the model
happy_model = HappyModel(X_train.shape[1:])
# Compile the model
happy_model.compile("adam", "binary_crossentropy", metrics=['accuracy'])
# Train the model
# Note: this takes roughly 6-10 minutes.
happy_model.fit(X_train, Y_train, epochs=40, batch_size=50)
# Evaluate the model
preds = happy_model.evaluate(X_test, Y_test, batch_size=32, verbose=1, sample_weight=None)
print("Loss = " + str(preds[0]))
print("Accuracy = " + str(preds[1]))
Epoch 1/40
600/600 [==============================] - 5s 8ms/step - loss: 2.8213 - accuracy: 0.5367
Epoch 2/40
600/600 [==============================] - 4s 7ms/step - loss: 0.8461 - accuracy: 0.6883
...
Epoch 39/40
600/600 [==============================] - 4s 7ms/step - loss: 0.0221 - accuracy: 0.9917
Epoch 40/40
600/600 [==============================] - 4s 7ms/step - loss: 0.0198 - accuracy: 0.9983
150/150 [==============================] - 0s 3ms/step
Loss = 0.11187564154465993
Accuracy = 0.9599999785423279
# A test image found online
img_path = 'images/5.jpeg'
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
# Note: preprocess_input applies ImageNet-style preprocessing, which does not match
# the /255 scaling the model was trained with; x/255. would be more consistent.
x = preprocess_input(x)
print(happy_model.predict(x))
[[0.]]
happy_model.summary()
Model: "HappyModel" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 64, 64, 3) 0 _________________________________________________________________ zero_padding2d_1 (ZeroPaddin (None, 70, 70, 3) 0 _________________________________________________________________ conv0 (Conv2D) (None, 64, 64, 32) 4736 _________________________________________________________________ bn0 (BatchNormalization) (None, 64, 64, 32) 128 _________________________________________________________________ activation_1 (Activation) (None, 64, 64, 32) 0 _________________________________________________________________ max_pool (MaxPooling2D) (None, 32, 32, 32) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 32768) 0 _________________________________________________________________ fc (Dense) (None, 1) 32769 ================================================================= Total params: 37,633 Trainable params: 37,569 Non-trainable params: 64 _________________________________________________________________
%matplotlib inline
plot_model(happy_model, to_file='happy_model.png')
SVG(model_to_dot(happy_model).create(prog='dot', format='svg'))
import numpy as np
import tensorflow as tf
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from keras.initializers import glorot_uniform
import pydot
from IPython.display import SVG
import scipy.misc
from matplotlib.pyplot import imshow
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
import resnets_utils
def identity_block(X, f, filters, stage, block):
    """
    Implements the identity block.

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, the window size of the middle CONV layer on the main path
    filters -- list of integers, the number of filters in each CONV layer of the main path
    stage -- integer, used (together with block) to name the layers by their position in the network
    block -- string, used (together with stage) to name the layers by their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (m, n_H, n_W, n_C)
    """
    # Naming convention
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base = "bn" + str(stage) + block + "_branch"

    # Retrieve the filter counts
    F1, F2, F3 = filters

    # Save the input value; it will be added back to the main path as the shortcut
    X_shortcut = X

    # First component of the main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding="valid",
               name=conv_name_base + "2a", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + "2a")(X)
    X = Activation("relu")(X)

    # Second component of the main path
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding="same",
               name=conv_name_base + "2b", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + "2b")(X)
    X = Activation("relu")(X)

    # Third component of the main path (no ReLU here)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding="valid",
               name=conv_name_base + "2c", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + "2c")(X)

    # Final step: add the shortcut to the main path, then apply ReLU
    X = Add()([X, X_shortcut])
    X = Activation("relu")(X)

    return X
tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = identity_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block="a")
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))
    test.close()
out = [0.94822997 0. 1.1610146 2.747859 0. 1.36677 ]
def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implements the convolutional block.

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, the window size of the middle CONV layer on the main path
    filters -- list of integers, the number of filters in each CONV layer of the main path
    stage -- integer, used (together with block) to name the layers by their position in the network
    block -- string, used (together with stage) to name the layers by their position in the network
    s -- integer, the stride to use

    Returns:
    X -- output of the convolutional block, tensor of shape (m, n_H, n_W, n_C)
    """
    # Naming convention
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base = "bn" + str(stage) + block + "_branch"

    # Retrieve the filter counts
    F1, F2, F3 = filters

    # Save the input value for the shortcut path
    X_shortcut = X

    # First component of the main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding="valid",
               name=conv_name_base + "2a", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + "2a")(X)
    X = Activation("relu")(X)

    # Second component of the main path
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding="same",
               name=conv_name_base + "2b", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + "2b")(X)
    X = Activation("relu")(X)

    # Third component of the main path (no ReLU here)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding="valid",
               name=conv_name_base + "2c", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + "2c")(X)

    # Shortcut path: a strided 1x1 conv resizes the input so the dimensions match
    X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding="valid",
                        name=conv_name_base + "1", kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + "1")(X_shortcut)

    # Final step: add the shortcut to the main path, then apply ReLU
    X = Add()([X, X_shortcut])
    X = Activation("relu")(X)

    return X
# Note: the output does keep the sample dimension — A below has shape (3, 2, 2, 6)
# (the stride-2 block halves height and width). The print only shows
# out[0][1][1][0], i.e. example 1, row 1, column 0: a single 6-channel vector.
tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = convolutional_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block="a")
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))
    test.close()
out = [0.09018461 1.2348979 0.4682202 0.03671762 0. 0.65516603]
def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    Implements the ResNet50 architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images in the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Keras Model instance
    """
    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(filters=64, kernel_size=(7, 7), strides=(2, 2), name="conv1",
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name="bn_conv1")(X)
    X = Activation("relu")(X)
    X = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block="a", s=1)
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block="b")
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block="c")

    # Stage 3
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block="a", s=2)
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block="b")
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block="c")
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block="d")

    # Stage 4
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block="a", s=2)
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="b")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="c")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="d")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="e")
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block="f")

    # Stage 5
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="b")
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="c")

    # Average pooling
    X = AveragePooling2D(pool_size=(2, 2), padding="same")(X)

    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation="softmax", name="fc" + str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)

    # Create the model
    model = Model(inputs=X_input, outputs=X, name="ResNet50")

    return model
model = ResNet50(input_shape=(64,64,3),classes=6)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = resnets_utils.load_dataset()
# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.
# Convert training and test labels to one hot matrices
Y_train = resnets_utils.convert_to_one_hot(Y_train_orig, 6).T
Y_test = resnets_utils.convert_to_one_hot(Y_test_orig, 6).T
print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
model.fit(X_train,Y_train,epochs=2,batch_size=32)
Epoch 1/2
1080/1080 [==============================] - 148s 137ms/step - loss: 2.1410 - accuracy: 0.4769
Epoch 2/2
1080/1080 [==============================] - 143s 133ms/step - loss: 0.6991 - accuracy: 0.8009
<keras.callbacks.callbacks.History at 0x1392dac10>
preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Accuracy = " + str(preds[1]))
120/120 [==============================] - 2s 16ms/step
Loss = 2.1246647198994952
Accuracy = 0.1666666716337204
# Load a fully trained model; after only 2 epochs the network above is still weak.
# (I could not find the pretrained ResNet50.h5 model file online.)
model = load_model("ResNet50.h5")
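If the pretrained file is unavailable, one workaround is to train the model above for longer and save it yourself (a sketch; the epoch count is a placeholder):

# Train for more epochs, then persist architecture + weights to the file
# that load_model() expects.
model.fit(X_train, Y_train, epochs=20, batch_size=32)
model.save("ResNet50.h5")
model = load_model("ResNet50.h5")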
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt  # plt is used to display images
import imageio
%matplotlib inline
img_path = 'images/fingers_big/2.jpg'
my_image = image.load_img(img_path, target_size=(64, 64))
my_image = image.img_to_array(my_image)
my_image = np.expand_dims(my_image,axis=0)
#my_image = preprocess_input(my_image)
my_image = my_image/255
print("my_image.shape = " + str(my_image.shape))
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
#print(model.predict(my_image))
print(np.argmax(model.predict(my_image)))
#my_image = scipy.misc.imread(img_path)
my_image = imageio.imread(img_path)
plt.imshow(my_image)
my_image.shape = (1, 64, 64, 3)
class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] =
1
<matplotlib.image.AxesImage at 0x132d35650>
model.summary()
Model: "ResNet50" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) (None, 64, 64, 3) 0 __________________________________________________________________________________________________ zero_padding2d_1 (ZeroPadding2D (None, 70, 70, 3) 0 input_1[0][0] __________________________________________________________________________________________________ conv1 (Conv2D) (None, 32, 32, 64) 9472 zero_padding2d_1[0][0] __________________________________________________________________________________________________ bn_conv1 (BatchNormalization) (None, 32, 32, 64) 256 conv1[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 32, 32, 64) 0 bn_conv1[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 15, 15, 64) 0 activation_4[0][0] __________________________________________________________________________________________________ res2a_branch2a (Conv2D) (None, 15, 15, 64) 4160 max_pooling2d_1[0][0] __________________________________________________________________________________________________ bn2a_branch2a (BatchNormalizati (None, 15, 15, 64) 256 res2a_branch2a[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 15, 15, 64) 0 bn2a_branch2a[0][0] __________________________________________________________________________________________________ res2a_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_5[0][0] __________________________________________________________________________________________________ bn2a_branch2b (BatchNormalizati (None, 15, 15, 64) 256 res2a_branch2b[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 15, 15, 64) 0 bn2a_branch2b[0][0] __________________________________________________________________________________________________ res2a_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_6[0][0] __________________________________________________________________________________________________ res2a_branch1 (Conv2D) (None, 15, 15, 256) 16640 max_pooling2d_1[0][0] __________________________________________________________________________________________________ bn2a_branch2c (BatchNormalizati (None, 15, 15, 256) 1024 res2a_branch2c[0][0] __________________________________________________________________________________________________ bn2a_branch1 (BatchNormalizatio (None, 15, 15, 256) 1024 res2a_branch1[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 15, 15, 256) 0 bn2a_branch2c[0][0] bn2a_branch1[0][0] __________________________________________________________________________________________________ activation_7 (Activation) (None, 15, 15, 256) 0 add_2[0][0] __________________________________________________________________________________________________ res2b_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_7[0][0] __________________________________________________________________________________________________ bn2b_branch2a (BatchNormalizati (None, 15, 15, 64) 256 res2b_branch2a[0][0] 
__________________________________________________________________________________________________ activation_8 (Activation) (None, 15, 15, 64) 0 bn2b_branch2a[0][0] __________________________________________________________________________________________________ res2b_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_8[0][0] __________________________________________________________________________________________________ bn2b_branch2b (BatchNormalizati (None, 15, 15, 64) 256 res2b_branch2b[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 15, 15, 64) 0 bn2b_branch2b[0][0] __________________________________________________________________________________________________ res2b_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_9[0][0] __________________________________________________________________________________________________ bn2b_branch2c (BatchNormalizati (None, 15, 15, 256) 1024 res2b_branch2c[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 15, 15, 256) 0 bn2b_branch2c[0][0] activation_7[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 15, 15, 256) 0 add_3[0][0] __________________________________________________________________________________________________ res2c_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_10[0][0] __________________________________________________________________________________________________ bn2c_branch2a (BatchNormalizati (None, 15, 15, 64) 256 res2c_branch2a[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 15, 15, 64) 0 bn2c_branch2a[0][0] __________________________________________________________________________________________________ res2c_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_11[0][0] __________________________________________________________________________________________________ bn2c_branch2b (BatchNormalizati (None, 15, 15, 64) 256 res2c_branch2b[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 15, 15, 64) 0 bn2c_branch2b[0][0] __________________________________________________________________________________________________ res2c_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_12[0][0] __________________________________________________________________________________________________ bn2c_branch2c (BatchNormalizati (None, 15, 15, 256) 1024 res2c_branch2c[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 15, 15, 256) 0 bn2c_branch2c[0][0] activation_10[0][0] __________________________________________________________________________________________________ activation_13 (Activation) (None, 15, 15, 256) 0 add_4[0][0] __________________________________________________________________________________________________ res3a_branch2a (Conv2D) (None, 8, 8, 128) 32896 activation_13[0][0] __________________________________________________________________________________________________ bn3a_branch2a (BatchNormalizati (None, 8, 8, 128) 512 res3a_branch2a[0][0] __________________________________________________________________________________________________ activation_14 (Activation) (None, 8, 8, 128) 0 
bn3a_branch2a[0][0] __________________________________________________________________________________________________ res3a_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_14[0][0] __________________________________________________________________________________________________ bn3a_branch2b (BatchNormalizati (None, 8, 8, 128) 512 res3a_branch2b[0][0] __________________________________________________________________________________________________ activation_15 (Activation) (None, 8, 8, 128) 0 bn3a_branch2b[0][0] __________________________________________________________________________________________________ res3a_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_15[0][0] __________________________________________________________________________________________________ res3a_branch1 (Conv2D) (None, 8, 8, 512) 131584 activation_13[0][0] __________________________________________________________________________________________________ bn3a_branch2c (BatchNormalizati (None, 8, 8, 512) 2048 res3a_branch2c[0][0] __________________________________________________________________________________________________ bn3a_branch1 (BatchNormalizatio (None, 8, 8, 512) 2048 res3a_branch1[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, 8, 8, 512) 0 bn3a_branch2c[0][0] bn3a_branch1[0][0] __________________________________________________________________________________________________ activation_16 (Activation) (None, 8, 8, 512) 0 add_5[0][0] __________________________________________________________________________________________________ res3b_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_16[0][0] __________________________________________________________________________________________________ bn3b_branch2a (BatchNormalizati (None, 8, 8, 128) 512 res3b_branch2a[0][0] __________________________________________________________________________________________________ activation_17 (Activation) (None, 8, 8, 128) 0 bn3b_branch2a[0][0] __________________________________________________________________________________________________ res3b_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_17[0][0] __________________________________________________________________________________________________ bn3b_branch2b (BatchNormalizati (None, 8, 8, 128) 512 res3b_branch2b[0][0] __________________________________________________________________________________________________ activation_18 (Activation) (None, 8, 8, 128) 0 bn3b_branch2b[0][0] __________________________________________________________________________________________________ res3b_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_18[0][0] __________________________________________________________________________________________________ bn3b_branch2c (BatchNormalizati (None, 8, 8, 512) 2048 res3b_branch2c[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, 8, 8, 512) 0 bn3b_branch2c[0][0] activation_16[0][0] __________________________________________________________________________________________________ activation_19 (Activation) (None, 8, 8, 512) 0 add_6[0][0] __________________________________________________________________________________________________ res3c_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_19[0][0] __________________________________________________________________________________________________ bn3c_branch2a (BatchNormalizati (None, 8, 8, 128) 512 
res3c_branch2a[0][0] __________________________________________________________________________________________________ activation_20 (Activation) (None, 8, 8, 128) 0 bn3c_branch2a[0][0] __________________________________________________________________________________________________ res3c_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_20[0][0] __________________________________________________________________________________________________ bn3c_branch2b (BatchNormalizati (None, 8, 8, 128) 512 res3c_branch2b[0][0] __________________________________________________________________________________________________ activation_21 (Activation) (None, 8, 8, 128) 0 bn3c_branch2b[0][0] __________________________________________________________________________________________________ res3c_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_21[0][0] __________________________________________________________________________________________________ bn3c_branch2c (BatchNormalizati (None, 8, 8, 512) 2048 res3c_branch2c[0][0] __________________________________________________________________________________________________ add_7 (Add) (None, 8, 8, 512) 0 bn3c_branch2c[0][0] activation_19[0][0] __________________________________________________________________________________________________ activation_22 (Activation) (None, 8, 8, 512) 0 add_7[0][0] __________________________________________________________________________________________________ res3d_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_22[0][0] __________________________________________________________________________________________________ bn3d_branch2a (BatchNormalizati (None, 8, 8, 128) 512 res3d_branch2a[0][0] __________________________________________________________________________________________________ activation_23 (Activation) (None, 8, 8, 128) 0 bn3d_branch2a[0][0] __________________________________________________________________________________________________ res3d_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_23[0][0] __________________________________________________________________________________________________ bn3d_branch2b (BatchNormalizati (None, 8, 8, 128) 512 res3d_branch2b[0][0] __________________________________________________________________________________________________ activation_24 (Activation) (None, 8, 8, 128) 0 bn3d_branch2b[0][0] __________________________________________________________________________________________________ res3d_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_24[0][0] __________________________________________________________________________________________________ bn3d_branch2c (BatchNormalizati (None, 8, 8, 512) 2048 res3d_branch2c[0][0] __________________________________________________________________________________________________ add_8 (Add) (None, 8, 8, 512) 0 bn3d_branch2c[0][0] activation_22[0][0] __________________________________________________________________________________________________ activation_25 (Activation) (None, 8, 8, 512) 0 add_8[0][0] __________________________________________________________________________________________________ res4a_branch2a (Conv2D) (None, 4, 4, 256) 131328 activation_25[0][0] __________________________________________________________________________________________________ bn4a_branch2a (BatchNormalizati (None, 4, 4, 256) 1024 res4a_branch2a[0][0] __________________________________________________________________________________________________ activation_26 (Activation) (None, 4, 4, 256) 0 
bn4a_branch2a[0][0] __________________________________________________________________________________________________ res4a_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_26[0][0] __________________________________________________________________________________________________ bn4a_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4a_branch2b[0][0] __________________________________________________________________________________________________ activation_27 (Activation) (None, 4, 4, 256) 0 bn4a_branch2b[0][0] __________________________________________________________________________________________________ res4a_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_27[0][0] __________________________________________________________________________________________________ res4a_branch1 (Conv2D) (None, 4, 4, 1024) 525312 activation_25[0][0] __________________________________________________________________________________________________ bn4a_branch2c (BatchNormalizati (None, 4, 4, 1024) 4096 res4a_branch2c[0][0] __________________________________________________________________________________________________ bn4a_branch1 (BatchNormalizatio (None, 4, 4, 1024) 4096 res4a_branch1[0][0] __________________________________________________________________________________________________ add_9 (Add) (None, 4, 4, 1024) 0 bn4a_branch2c[0][0] bn4a_branch1[0][0] __________________________________________________________________________________________________ activation_28 (Activation) (None, 4, 4, 1024) 0 add_9[0][0] __________________________________________________________________________________________________ res4b_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_28[0][0] __________________________________________________________________________________________________ bn4b_branch2a (BatchNormalizati (None, 4, 4, 256) 1024 res4b_branch2a[0][0] __________________________________________________________________________________________________ activation_29 (Activation) (None, 4, 4, 256) 0 bn4b_branch2a[0][0] __________________________________________________________________________________________________ res4b_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_29[0][0] __________________________________________________________________________________________________ bn4b_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4b_branch2b[0][0] __________________________________________________________________________________________________ activation_30 (Activation) (None, 4, 4, 256) 0 bn4b_branch2b[0][0] __________________________________________________________________________________________________ res4b_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_30[0][0] __________________________________________________________________________________________________ bn4b_branch2c (BatchNormalizati (None, 4, 4, 1024) 4096 res4b_branch2c[0][0] __________________________________________________________________________________________________ add_10 (Add) (None, 4, 4, 1024) 0 bn4b_branch2c[0][0] activation_28[0][0] __________________________________________________________________________________________________ activation_31 (Activation) (None, 4, 4, 1024) 0 add_10[0][0] __________________________________________________________________________________________________ res4c_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_31[0][0] __________________________________________________________________________________________________ bn4c_branch2a (BatchNormalizati (None, 4, 4, 
256) 1024 res4c_branch2a[0][0] __________________________________________________________________________________________________ activation_32 (Activation) (None, 4, 4, 256) 0 bn4c_branch2a[0][0] __________________________________________________________________________________________________ res4c_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_32[0][0] __________________________________________________________________________________________________ bn4c_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4c_branch2b[0][0] __________________________________________________________________________________________________ activation_33 (Activation) (None, 4, 4, 256) 0 bn4c_branch2b[0][0] __________________________________________________________________________________________________ res4c_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_33[0][0] __________________________________________________________________________________________________ bn4c_branch2c (BatchNormalizati (None, 4, 4, 1024) 4096 res4c_branch2c[0][0] __________________________________________________________________________________________________ add_11 (Add) (None, 4, 4, 1024) 0 bn4c_branch2c[0][0] activation_31[0][0] __________________________________________________________________________________________________ activation_34 (Activation) (None, 4, 4, 1024) 0 add_11[0][0] __________________________________________________________________________________________________ res4d_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_34[0][0] __________________________________________________________________________________________________ bn4d_branch2a (BatchNormalizati (None, 4, 4, 256) 1024 res4d_branch2a[0][0] __________________________________________________________________________________________________ activation_35 (Activation) (None, 4, 4, 256) 0 bn4d_branch2a[0][0] __________________________________________________________________________________________________ res4d_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_35[0][0] __________________________________________________________________________________________________ bn4d_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4d_branch2b[0][0] __________________________________________________________________________________________________ activation_36 (Activation) (None, 4, 4, 256) 0 bn4d_branch2b[0][0] __________________________________________________________________________________________________ res4d_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_36[0][0] __________________________________________________________________________________________________ bn4d_branch2c (BatchNormalizati (None, 4, 4, 1024) 4096 res4d_branch2c[0][0] __________________________________________________________________________________________________ add_12 (Add) (None, 4, 4, 1024) 0 bn4d_branch2c[0][0] activation_34[0][0] __________________________________________________________________________________________________ activation_37 (Activation) (None, 4, 4, 1024) 0 add_12[0][0] __________________________________________________________________________________________________ res4e_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_37[0][0] __________________________________________________________________________________________________ bn4e_branch2a (BatchNormalizati (None, 4, 4, 256) 1024 res4e_branch2a[0][0] __________________________________________________________________________________________________ activation_38 (Activation) 
(None, 4, 4, 256) 0 bn4e_branch2a[0][0] __________________________________________________________________________________________________ res4e_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_38[0][0] __________________________________________________________________________________________________ bn4e_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4e_branch2b[0][0] __________________________________________________________________________________________________ activation_39 (Activation) (None, 4, 4, 256) 0 bn4e_branch2b[0][0] __________________________________________________________________________________________________ res4e_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_39[0][0] __________________________________________________________________________________________________ bn4e_branch2c (BatchNormalizati (None, 4, 4, 1024) 4096 res4e_branch2c[0][0] __________________________________________________________________________________________________ add_13 (Add) (None, 4, 4, 1024) 0 bn4e_branch2c[0][0] activation_37[0][0] __________________________________________________________________________________________________ activation_40 (Activation) (None, 4, 4, 1024) 0 add_13[0][0] __________________________________________________________________________________________________ res4f_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_40[0][0] __________________________________________________________________________________________________ bn4f_branch2a (BatchNormalizati (None, 4, 4, 256) 1024 res4f_branch2a[0][0] __________________________________________________________________________________________________ activation_41 (Activation) (None, 4, 4, 256) 0 bn4f_branch2a[0][0] __________________________________________________________________________________________________ res4f_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_41[0][0] __________________________________________________________________________________________________ bn4f_branch2b (BatchNormalizati (None, 4, 4, 256) 1024 res4f_branch2b[0][0] __________________________________________________________________________________________________ activation_42 (Activation) (None, 4, 4, 256) 0 bn4f_branch2b[0][0] __________________________________________________________________________________________________ res4f_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_42[0][0] __________________________________________________________________________________________________ bn4f_branch2c (BatchNormalizati (None, 4, 4, 1024) 4096 res4f_branch2c[0][0] __________________________________________________________________________________________________ add_14 (Add) (None, 4, 4, 1024) 0 bn4f_branch2c[0][0] activation_40[0][0] __________________________________________________________________________________________________ activation_43 (Activation) (None, 4, 4, 1024) 0 add_14[0][0] __________________________________________________________________________________________________ res5a_branch2a (Conv2D) (None, 2, 2, 512) 524800 activation_43[0][0] __________________________________________________________________________________________________ bn5a_branch2a (BatchNormalizati (None, 2, 2, 512) 2048 res5a_branch2a[0][0] __________________________________________________________________________________________________ activation_44 (Activation) (None, 2, 2, 512) 0 bn5a_branch2a[0][0] __________________________________________________________________________________________________ res5a_branch2b (Conv2D) 
(None, 2, 2, 512) 2359808 activation_44[0][0] __________________________________________________________________________________________________ bn5a_branch2b (BatchNormalizati (None, 2, 2, 512) 2048 res5a_branch2b[0][0] __________________________________________________________________________________________________ activation_45 (Activation) (None, 2, 2, 512) 0 bn5a_branch2b[0][0] __________________________________________________________________________________________________ res5a_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_45[0][0] __________________________________________________________________________________________________ res5a_branch1 (Conv2D) (None, 2, 2, 2048) 2099200 activation_43[0][0] __________________________________________________________________________________________________ bn5a_branch2c (BatchNormalizati (None, 2, 2, 2048) 8192 res5a_branch2c[0][0] __________________________________________________________________________________________________ bn5a_branch1 (BatchNormalizatio (None, 2, 2, 2048) 8192 res5a_branch1[0][0] __________________________________________________________________________________________________ add_15 (Add) (None, 2, 2, 2048) 0 bn5a_branch2c[0][0] bn5a_branch1[0][0] __________________________________________________________________________________________________ activation_46 (Activation) (None, 2, 2, 2048) 0 add_15[0][0] __________________________________________________________________________________________________ res5b_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_46[0][0] __________________________________________________________________________________________________ bn5b_branch2a (BatchNormalizati (None, 2, 2, 512) 2048 res5b_branch2a[0][0] __________________________________________________________________________________________________ activation_47 (Activation) (None, 2, 2, 512) 0 bn5b_branch2a[0][0] __________________________________________________________________________________________________ res5b_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_47[0][0] __________________________________________________________________________________________________ bn5b_branch2b (BatchNormalizati (None, 2, 2, 512) 2048 res5b_branch2b[0][0] __________________________________________________________________________________________________ activation_48 (Activation) (None, 2, 2, 512) 0 bn5b_branch2b[0][0] __________________________________________________________________________________________________ res5b_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_48[0][0] __________________________________________________________________________________________________ bn5b_branch2c (BatchNormalizati (None, 2, 2, 2048) 8192 res5b_branch2c[0][0] __________________________________________________________________________________________________ add_16 (Add) (None, 2, 2, 2048) 0 bn5b_branch2c[0][0] activation_46[0][0] __________________________________________________________________________________________________ activation_49 (Activation) (None, 2, 2, 2048) 0 add_16[0][0] __________________________________________________________________________________________________ res5c_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_49[0][0] __________________________________________________________________________________________________ bn5c_branch2a (BatchNormalizati (None, 2, 2, 512) 2048 res5c_branch2a[0][0] __________________________________________________________________________________________________ 
activation_50 (Activation) (None, 2, 2, 512) 0 bn5c_branch2a[0][0] __________________________________________________________________________________________________ res5c_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_50[0][0] __________________________________________________________________________________________________ bn5c_branch2b (BatchNormalizati (None, 2, 2, 512) 2048 res5c_branch2b[0][0] __________________________________________________________________________________________________ activation_51 (Activation) (None, 2, 2, 512) 0 bn5c_branch2b[0][0] __________________________________________________________________________________________________ res5c_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_51[0][0] __________________________________________________________________________________________________ bn5c_branch2c (BatchNormalizati (None, 2, 2, 2048) 8192 res5c_branch2c[0][0] __________________________________________________________________________________________________ add_17 (Add) (None, 2, 2, 2048) 0 bn5c_branch2c[0][0] activation_49[0][0] __________________________________________________________________________________________________ activation_52 (Activation) (None, 2, 2, 2048) 0 add_17[0][0] __________________________________________________________________________________________________ average_pooling2d_1 (AveragePoo (None, 1, 1, 2048) 0 activation_52[0][0] __________________________________________________________________________________________________ flatten_1 (Flatten) (None, 2048) 0 average_pooling2d_1[0][0] __________________________________________________________________________________________________ fc6 (Dense) (None, 6) 12294 flatten_1[0][0] ================================================================================================== Total params: 23,600,006 Trainable params: 23,546,886 Non-trainable params: 53,120 __________________________________________________________________________________________________
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))