LeNet-5 Convolutional Neural Network — CNN Part 2: Applying the LeNet Model

LeNet was proposed and put to use in 1998 and is regarded as the opening work of convolutional neural networks. At that time there was no batch normalization (BN) and no dropout, and the mainstream activation function was sigmoid.

(Figure: LeNet-5 network architecture)
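As a reading aid (not part of the original post): for the 32×32×3 CIFAR-10 images used in the code below, and with Keras's default 'valid' padding that the layers rely on, the feature maps work out as

32×32×3 → C1 (6 kernels, 5×5, sigmoid) → 28×28×6 → P1 (2×2 max pool, stride 2) → 14×14×6 → C2 (16 kernels, 5×5, sigmoid) → 10×10×16 → P2 (2×2 max pool, stride 2) → 5×5×16 → Flatten → 400 → F1 (120, sigmoid) → F2 (84, sigmoid) → F3 (10, softmax)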

Based on the earlier demo, the code is as follows:

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

# Load CIFAR-10 and scale pixel values to [0, 1]
cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class LeNet5(Model):
    def __init__(self):
        super(LeNet5, self).__init__()
        # C1: 6 convolution kernels of 5x5, sigmoid activation (no BN, no dropout)
        self.c1 = Conv2D(filters=6, kernel_size=(5, 5),
                         activation='sigmoid')
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2)
        # C2: 16 convolution kernels of 5x5, sigmoid activation
        self.c2 = Conv2D(filters=16, kernel_size=(5, 5),
                         activation='sigmoid')
        self.p2 = MaxPool2D(pool_size=(2, 2), strides=2)
        self.flatten = Flatten()
        # Three fully connected layers: 120 -> 84 -> 10 (softmax output)
        self.f1 = Dense(120, activation='sigmoid')
        self.f2 = Dense(84, activation='sigmoid')
        self.f3 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.p1(x)
        x = self.c2(x)
        x = self.p2(x)
        x = self.flatten(x)
        x = self.f1(x)
        x = self.f2(x)
        y = self.f3(x)
        return y


model = LeNet5()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

# Resume from the checkpoint if one already exists
checkpoint_save_path = "./checkpoint/LeNet5.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('----- loading model -----')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_save_path,
    save_best_only=True,
    save_weights_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
# Dump the trainable variables (names, shapes, values) to a text file
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# Plot the accuracy and loss curves of the training and validation sets
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
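To try the saved model on a single image, a minimal inference sketch is shown below. It is not part of the original post: it assumes it is appended to the script above (so LeNet5, x_test and y_test are in scope) and that the checkpoint has already been written; cifar10_labels is a helper list introduced here only for readability.

import numpy as np

cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                  'dog', 'frog', 'horse', 'ship', 'truck']

infer_model = LeNet5()
# Restoring a TF-format checkpoint is deferred until the model is first called
infer_model.load_weights('./checkpoint/LeNet5.ckpt')

sample = x_test[:1]                    # one normalized test image, shape (1, 32, 32, 3)
probs = infer_model.predict(sample)    # softmax probabilities, shape (1, 10)
pred = int(np.argmax(probs, axis=1)[0])
print('predicted:', cifar10_labels[pred], '| true:', cifar10_labels[int(y_test[0][0])])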

The console output of a run:

(Screenshot: console output of the training run and model.summary())
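As a sanity check on the numbers reported by model.summary() (assuming the 32×32×3 input and the default 'valid' padding used above), the per-layer trainable parameter counts can be reproduced by hand; this arithmetic is an addition to the original post.

# Per-layer trainable parameters of this LeNet5 on CIFAR-10 (valid padding)
c1 = 5*5*3*6 + 6        # 456
c2 = 5*5*6*16 + 16      # 2,416
f1 = 400*120 + 120      # 48,120  (Flatten of 5x5x16 = 400 inputs)
f2 = 120*84 + 84        # 10,164
f3 = 84*10 + 10         # 850
print(c1 + c2 + f1 + f2 + f3)   # 62,006 trainable parameters in total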
