Learning MXNet: A First Look at LeNet

The relevant theory and background story are all covered in the book, so I won't retell them here; this post is just a record of the code, shared for reference.

The code here is not directly runnable on its own: the same helper functions come up repeatedly throughout this study series, so I collected the common ones into a toolkit package, described in the companion post MxNet学习——自定义工具包.

Combine the two and the code runs. For readers without that post at hand, a rough sketch of the helpers follows.
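Here is a minimal sketch of what the device and data helpers used below might look like. The function names match the d2lzh utilities from the book this series follows, so treat the bodies as assumptions rather than the toolkit's exact contents:

import sys
import mxnet as mx
from mxnet import nd
from mxnet.gluon import data as gdata

def try_gpu():
    """Return mx.gpu() if a GPU is available, otherwise mx.cpu()."""
    try:
        ctx = mx.gpu()
        _ = nd.zeros((1,), ctx=ctx)  # raises MXNetError when no GPU is present
    except mx.base.MXNetError:
        ctx = mx.cpu()
    return ctx

def load_data_fashion_mnist(batch_size, resize=None):
    """Return (train_iter, test_iter) DataLoaders over Fashion-MNIST."""
    transformer = []
    if resize:
        transformer += [gdata.vision.transforms.Resize(resize)]
    transformer += [gdata.vision.transforms.ToTensor()]  # HWC uint8 -> CHW float32 in [0, 1]
    transformer = gdata.vision.transforms.Compose(transformer)
    mnist_train = gdata.vision.FashionMNIST(train=True)
    mnist_test = gdata.vision.FashionMNIST(train=False)
    num_workers = 0 if sys.platform.startswith('win') else 4
    train_iter = gdata.DataLoader(mnist_train.transform_first(transformer),
                                  batch_size, shuffle=True, num_workers=num_workers)
    test_iter = gdata.DataLoader(mnist_test.transform_first(transformer),
                                 batch_size, shuffle=False, num_workers=num_workers)
    return train_iter, test_iter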

# -------------------------------------------------------------------------------
# Description:  Convolutional neural network: LeNet
# Description:  A convolutional neural network is simply a network containing convolutional layers
# Description:  LeNet alternates convolutional and max-pooling layers, followed by fully connected layers, to classify images
# Reference:
# Author: Sophia
# Date:   2021/3/10
# -------------------------------------------------------------------------------
from mxnet import nd, init, gluon
from mxnet.gluon import nn
from plt_so import *  # custom toolkit: provides load_data_fashion_mnist, try_gpu, train_ch5

net = nn.Sequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, activation='sigmoid'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Conv2D(channels=16, kernel_size=5, activation='sigmoid'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Dense(120, activation='sigmoid'),  # Dense flattens the (N, C, H, W) input automatically
        nn.Dense(84, activation='sigmoid'),
        nn.Dense(10))
X = nd.random.uniform(shape=(1, 1, 28, 28))  # dummy batch: one single-channel 28x28 image
net.initialize()
# print(net)

# Output:
# Sequential(
#   (0): Conv2D(None -> 6, kernel_size=(5, 5), stride=(1, 1), Activation(sigmoid))
#   (1): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False, global_pool=False, pool_type=max, layout=NCHW)
#   (2): Conv2D(None -> 16, kernel_size=(5, 5), stride=(1, 1), Activation(sigmoid))
#   (3): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False, global_pool=False, pool_type=max, layout=NCHW)
#   (4): Dense(None -> 120, Activation(sigmoid))
#   (5): Dense(None -> 84, Activation(sigmoid))
#   (6): Dense(None -> 10, linear)
# )

# for layer in net:
#     X = layer(X)
#     print(layer.name, 'output shape:\t', X.shape)

# Output:
# conv0 output shape:	 (1, 6, 24, 24)
# pool0 output shape:	 (1, 6, 12, 12)
# conv1 output shape:	 (1, 16, 8, 8)
# pool1 output shape:	 (1, 16, 4, 4)
# dense0 output shape:	 (1, 120)
# dense1 output shape:	 (1, 84)
# dense2 output shape:	 (1, 10)
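# These shapes follow from the convolution arithmetic: with no padding and
# stride 1, a 5x5 kernel maps an n x n input to (n - 5 + 1) x (n - 5 + 1),
# so 28 -> 24 and 12 -> 8; each 2x2 max pooling with stride 2 halves the
# height and width (24 -> 12, 8 -> 4). The first Dense layer then flattens
# the (1, 16, 4, 4) tensor into a length-256 vector before its affine map.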

# Read the data
batch_size = 256
train_iter, test_iter = load_data_fashion_mnist(batch_size)

lr, num_epochs, ctx = 0.9, 5, try_gpu()
net.initialize(force_reinit=True, ctx=ctx, init=init.Xavier())  # re-initialize on the target device with Xavier initialization
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs)

# Output:
# training on cpu(0)
# epoch 1, loss 2.3200, train acc 0.102, test acc 0.102, time 37.6 sec
# epoch 2, loss 1.6306, train acc 0.371, test acc 0.600, time 36.6 sec
# epoch 3, loss 0.8942, train acc 0.650, test acc 0.718, time 36.9 sec
# epoch 4, loss 0.7137, train acc 0.718, test acc 0.718, time 35.5 sec
# epoch 5, loss 0.6473, train acc 0.744, test acc 0.766, time 35.9 sec
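
train_ch5 likewise lives in the toolkit. For completeness, here is a minimal sketch consistent with the per-epoch log format above, again modeled on the d2lzh version rather than taken from the toolkit post itself:

import time
from mxnet import autograd, nd
from mxnet.gluon import loss as gloss

def evaluate_accuracy(data_iter, net, ctx):
    """Classification accuracy of net over data_iter on device ctx."""
    acc_sum, n = nd.array([0], ctx=ctx), 0
    for X, y in data_iter:
        X = X.as_in_context(ctx)
        y = y.as_in_context(ctx).astype('float32')
        acc_sum += (net(X).argmax(axis=1) == y).sum()
        n += y.size
    return acc_sum.asscalar() / n

def train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs):
    """Train net with softmax cross-entropy loss, printing one line per epoch."""
    print('training on', ctx)
    loss = gloss.SoftmaxCrossEntropyLoss()
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n, start = 0.0, 0.0, 0, time.time()
        for X, y in train_iter:
            X, y = X.as_in_context(ctx), y.as_in_context(ctx)
            with autograd.record():
                y_hat = net(X)
                l = loss(y_hat, y).sum()
            l.backward()
            trainer.step(batch_size)
            y = y.astype('float32')
            train_l_sum += l.asscalar()
            train_acc_sum += (y_hat.argmax(axis=1) == y).sum().asscalar()
            n += y.size
        test_acc = evaluate_accuracy(test_iter, net, ctx)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n,
                 test_acc, time.time() - start))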

 
