A craftsman who wants to do good work must first have the right tools.
For now, let's be honest and start out as plain library users.
1. Install the necessary packages:
using Pkg
Pkg.add("Statistics")
Pkg.add("Flux")
2. On to the code:
using Statistics
using Flux
using Flux: onehotbatch, onecold, crossentropy, throttle
using Base.Iterators: repeated

imgs = Flux.Data.MNIST.images()
labels = Flux.Data.MNIST.labels();
# The code above loads the packages and pulls in the MNIST images (imgs) and labels
imgs[27454]
labels[27454] # 7
X = hcat(float.(reshape.(imgs, :))...) # each MNIST image is 28 × 28 (height × width); flatten each to a 784-vector and stack them into a matrix
Y = onehotbatch(labels, 0:9) # one-hot encode the labels
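To see what `onehotbatch` produces, here is a minimal pure-Julia sketch of one-hot encoding: each label becomes a column with a single 1 in the row for its class. (`my_onehotbatch` is a hypothetical helper for illustration, not Flux's implementation.)

```julia
# Minimal one-hot encoding: one column per label, a single 1 per column.
function my_onehotbatch(labels, classes)
    Y = zeros(Int, length(classes), length(labels))
    for (j, l) in enumerate(labels)
        Y[findfirst(==(l), classes), j] = 1
    end
    return Y
end

Y = my_onehotbatch([7, 0, 2], 0:9)  # 10×3 matrix; label 7 sets row 8 (classes start at 0)
```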
model = Chain(
    Dense(28^2, 32, relu),
    Dense(32, 10),
    softmax) # three stages: first, 28^2 -> 32; second, 32 -> 10; third, softmax
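Under the hood, each `Dense(in, out, σ)` layer computes `σ.(W*x .+ b)`, and `softmax` normalizes the final outputs into a probability distribution. A standalone sketch of that forward pass in plain Julia, using random weights purely for illustration (the names `forward`, `W1`, `b1`, etc. are made up here, not Flux internals):

```julia
using Random

Random.seed!(0)
relu(x) = max(x, 0.0)
softmax(v) = (e = exp.(v .- maximum(v)); e ./ sum(e))  # subtract max for numerical stability

# Hypothetical random parameters matching the 784 -> 32 -> 10 architecture.
W1, b1 = randn(32, 784), zeros(32)
W2, b2 = randn(10, 32), zeros(10)

forward(x) = softmax(W2 * relu.(W1 * x .+ b1) .+ b2)

p = forward(rand(784))
sum(p)  # ≈ 1.0: softmax outputs sum to one
```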
# Next, set up the loss, optimizer, and evaluation
loss(x, y) = crossentropy(model(x), y)
opt = ADAM();
accuracy(x, y) = mean(onecold(model(x)) .== onecold(y))
dataset = repeated((X, Y), 200)
evalcb = () -> @show(loss(X, Y))
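For intuition: with one-hot targets, cross-entropy reduces to minus the average log-probability the model assigns to the true class, and `onecold` is just a column-wise `argmax`. A pure-Julia sketch (the names `my_crossentropy` and `my_onecold` are illustrative, not Flux's implementations):

```julia
using Statistics

# Cross-entropy averaged over the columns (samples) of ŷ against one-hot y.
my_crossentropy(ŷ, y) = -sum(y .* log.(ŷ)) / size(y, 2)

# onecold: index of the largest entry in each column.
my_onecold(A) = [argmax(A[:, j]) for j in 1:size(A, 2)]

ŷ = [0.7 0.2; 0.2 0.5; 0.1 0.3]  # predicted probabilities: 3 classes, 2 samples
y = [1.0 0.0; 0.0 1.0; 0.0 0.0]  # one-hot true classes: 1 and 2
my_crossentropy(ŷ, y)                  # -(log(0.7) + log(0.5)) / 2
mean(my_onecold(ŷ) .== my_onecold(y))  # accuracy = 1.0
```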
# Now we can train
Flux.train!(loss, params(model), dataset, opt, cb = throttle(evalcb, 10));
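`Flux.train!` essentially loops over `dataset`, computes gradients of `loss` with respect to the parameters, and lets the optimizer update them. A hand-rolled sketch of that loop on a toy linear-regression problem, with plain gradient descent standing in for ADAM and the gradients derived by hand (`fit_line` is a made-up name, not part of Flux):

```julia
# Fit y = w*x + b by gradient descent on mean squared error.
function fit_line(xs, ys; η = 0.1, epochs = 2000)
    w, b = 0.0, 0.0
    for _ in 1:epochs
        err = (w .* xs .+ b) .- ys          # residuals
        w -= η * 2 * sum(err .* xs) / length(xs)  # ∂MSE/∂w
        b -= η * 2 * sum(err) / length(xs)        # ∂MSE/∂b
    end
    return w, b
end

xs = collect(0.0:0.1:1.0)
w, b = fit_line(xs, 3 .* xs .+ 1)
(w, b)  # ≈ (3.0, 1.0)
```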
# The training output:
loss(X, Y) = 2.3259583f0 (tracked)
loss(X, Y) = 1.6830894f0 (tracked)
loss(X, Y) = 1.1227762f0 (tracked)
loss(X, Y) = 0.7927527f0 (tracked)
loss(X, Y) = 0.6152953f0 (tracked)
loss(X, Y) = 0.51356655f0 (tracked)
loss(X, Y) = 0.44959342f0 (tracked)
loss(X, Y) = 0.4059622f0 (tracked)
loss(X, Y) = 0.3741082f0 (tracked)
loss(X, Y) = 0.3512681f0 (tracked)
loss(X, Y) = 0.33128205f0 (tracked)
loss(X, Y) = 0.31474704f0 (tracked)
loss(X, Y) = 0.3016968f0 (tracked)
loss(X, Y) = 0.28936785f0 (tracked)
loss(X, Y) = 0.27849576f0 (tracked)
loss(X, Y) = 0.2688136f0 (tracked)
......
accuracy(X, Y) # accuracy on the training set
# 0.92735
test_X = hcat(float.(reshape.(Flux.Data.MNIST.images(:test), :))...)
test_Y = onehotbatch(Flux.Data.MNIST.labels(:test), 0:9);
model(test_X[:, 5287])
accuracy(test_X, test_Y) # accuracy on the test set
# 0.924
The code is straightforward: the model is a plain MLP, the loss is cross-entropy, and the optimizer is ADAM.
In the second post we'll build on and extend all of this; after all, we can't stay mere library users forever.