💥1 Overview
FNN network structure:
(1) An FNN can be understood as a multilayer perceptron, i.e., a neural network with one or more hidden layers.
(2) Adjacent layers are fully connected: every node in one layer is connected to every node in the next.
FNN forward propagation:
(1) Each neuron takes the outputs of all nodes in the previous layer as input, applies a linear transformation (weights plus a bias) followed by a nonlinear activation function, and passes the result on to the nodes of the next layer.
(2) Input-layer nodes do not transform the data; their number equals the dimension of the input variable x.
(3) The number of output-layer nodes equals the dimension of the output variable y.
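The forward pass described above can be sketched in a few lines of Python. The layer sizes and the ReLU/softmax choices here are illustrative assumptions, not taken from the MATLAB code below:

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass of a fully connected network.

    Each layer computes a linear transformation (W @ a + b) followed by a
    nonlinear activation; the input layer passes x through untouched.
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, W @ a + b)   # hidden layers: ReLU activation
    W, b = weights[-1], biases[-1]
    z = W @ a + b                        # output layer: linear scores
    e = np.exp(z - z.max())              # softmax for classification
    return e / e.sum()

# Example: 4-dimensional input x, one hidden layer of 8 units, 3 outputs
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
bs = [np.zeros(8), np.zeros(3)]
y = forward(rng.standard_normal(4), Ws, bs)
```

Note how the output dimension (3) is fixed by the shape of the last weight matrix, matching point (3) above.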
FNN backpropagation:
(1) Modeling data with a neural network means finding the parameters (weights w and biases b) that best approximate the data. A loss function is designed to measure the quality of the approximation, and the optimal parameters are those that minimize it.
(2) A neural network can be viewed as a deeply composed function, so computing the gradient of the loss with respect to the parameters requires the chain rule; applying it layer by layer gives rise to the backward propagation of gradients.
(3) Common loss functions: cross-entropy for classification, mean squared error for regression.
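The chain-rule computation in (2) can be made concrete with a minimal sketch: a one-hidden-layer network with a mean-squared-error loss, where every gradient line corresponds to one application of the chain rule. The tanh activation and the layer sizes are illustrative assumptions; the analytic gradient is checked against a finite difference at the end.

```python
import numpy as np

def loss_and_grads(x, y, W1, b1, W2, b2):
    """MSE loss and its gradients for a 1-hidden-layer network (chain rule)."""
    h = np.tanh(W1 @ x + b1)                 # hidden activation
    y_hat = W2 @ h + b2                      # linear output (regression)
    loss = 0.5 * np.sum((y_hat - y) ** 2)    # squared-error loss
    # Backward pass: propagate dL/dy_hat through each layer in turn
    d_out = y_hat - y                        # dL/dy_hat
    dW2 = np.outer(d_out, h); db2 = d_out
    d_h = W2.T @ d_out                       # dL/dh
    d_pre = d_h * (1.0 - h ** 2)             # through tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(d_pre, x); db1 = d_pre
    return loss, dW1, db1, dW2, db2

# Check one analytic gradient entry against a finite-difference estimate
rng = np.random.default_rng(1)
x, y = rng.standard_normal(3), rng.standard_normal(2)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
loss, dW1, *_ = loss_and_grads(x, y, W1, b1, W2, b2)
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num_grad = (loss_and_grads(x, y, W1p, b1, W2, b2)[0] - loss) / eps
```

The finite-difference check is a standard way to verify that a hand-derived backward pass is correct.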
The "convolution" that gives convolutional neural networks their name is the inner product (element-wise multiplication followed by summation) between an image patch (a sliding data window) and a filter matrix. Because each such neuron's set of weights is fixed, the weight matrix can be viewed as a constant filter that slides across the image.
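This sliding inner product can be sketched directly. Strictly speaking, deep-learning "convolution" is usually implemented as cross-correlation (no kernel flip), which is what this illustrative sketch does:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the filter over the image and take
    the inner product (element-wise multiply, then sum) at each position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = image[i:i + kh, j:j + kw]   # current data window
            out[i, j] = np.sum(window * kernel)  # inner product with the filter
    return out

# A 3x3 averaging filter over a 5x5 image of ones yields 9 * (1/9) = 1 everywhere
img = np.ones((5, 5))
k = np.full((3, 3), 1.0 / 9.0)
result = conv2d_valid(img, k)
```

With a 5x5 input and a 3x3 filter, the "valid" output is 3x3, since the filter fits in 3 positions along each axis.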
📚2 Results
Partial code:
%% Train an example ConvNet to achieve very high classification, fast.
dbstop if error;
addpath(genpath('./dlt_cnn_map_dropout_nobiasnn'));
addpath(genpath('./models'));
addpath(genpath('./util'));
rng('default')
%% Load data and network model
% dataset = 'fashion_mnist';
dataset = 'mnist';
if strcmp(dataset, 'fashion_mnist') == 1
    train_images = loadMNISTImages('train-images-idx3-ubyte');
    train_labels = loadMNISTLabels('train-labels-idx1-ubyte');
    test_images = loadMNISTImages('t10k-images-idx3-ubyte');
    test_labels = loadMNISTLabels('t10k-labels-idx1-ubyte');
    % Convert integer class labels to one-hot rows
    train_y = zeros(length(train_labels), 10);
    for i = 1 : length(train_labels)
        train_y(i, train_labels(i)+1) = 1;
    end
    test_y = zeros(length(test_labels), 10);
    for i = 1 : length(test_labels)
        test_y(i, test_labels(i)+1) = 1;
    end
    train_x = double(reshape(train_images, 28, 28, 60000));
    test_x = double(reshape(test_images, 28, 28, 10000));
    train_y = double(train_y');
    test_y = double(test_y');
    % Load a trained ANN model
    load cnn_fashion_mnist_91.35.mat;
elseif strcmp(dataset, 'mnist') == 1
    load mnist_uint8;
    train_x = double(reshape(train_x', 28, 28, 60000)) / 255;
    test_x = double(reshape(test_x', 28, 28, 10000)) / 255;
    train_y = double(train_y');
    test_y = double(test_y');
    % Load a trained ANN model
    load cnn_mnist_99.14.mat;
end
test_x = test_x(:, :, 1:2000);
test_y = test_y(:, 1:2000);
%% Spike-based Testing of a ConvNet
lifsim_opts = struct;
lifsim_opts.t_ref = 0.000;
lifsim_opts.threshold = 1.0;
lifsim_opts.rest = 0.0;
lifsim_opts.dt = 0.001;
lifsim_opts.duration = 0.100;
lifsim_opts.report_every = 0.001;
lifsim_opts.max_rate = 200;
%% Test the original SCNN
scnn = lifsim_scnn(cnn, test_x, test_y, lifsim_opts);
%% Test the Evolutionary Rule
evol_ops.beta = 0.6;
evol_ops.initial_E = 1;
evol_ops.learning_rate = 0.01;
evol_scnn = lifsim_evol_scnn(cnn, test_x, test_y, lifsim_opts, evol_ops);
adap_evol_scnn = lifsim_adap_evol_scnn(cnn, test_x, test_y, lifsim_opts, evol_ops);
% plot_cnn_spikes(scnn);
% plot_cnn_spikes(evol_scnn);
% plot_cnn_spikes(adap_evol_scnn);
%% Show the difference
figure; clf;
time = lifsim_opts.dt:lifsim_opts.dt:lifsim_opts.duration;
plot(time * 1000, scnn.performance);
hold on; grid on;
plot(time * 1000, evol_scnn.performance);
plot(time * 1000, adap_evol_scnn.performance);
legend('SCNN', 'Evol-SCNN', 'Adap-Evol-SCNN');
ylim([0 100]);
xlabel('Time [ms]');
ylabel('Accuracy [%]');
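The internals of `lifsim_scnn` are not shown here, but the fields of `lifsim_opts` (threshold 1.0, resting potential 0.0, dt = 1 ms, duration = 100 ms, refractory time `t_ref`) match a standard integrate-and-fire neuron update. The following Python sketch is a generic illustration of such a neuron, under those assumptions; it is not the toolbox's actual implementation:

```python
def lif_simulate(input_current, dt=0.001, duration=0.100,
                 threshold=1.0, rest=0.0, t_ref=0.0):
    """Generic integrate-and-fire update matching the lifsim_opts fields:
    integrate the input each dt, emit a spike when the membrane potential
    crosses the threshold, then reset to the resting potential."""
    steps = int(round(duration / dt))
    v = rest
    refrac_until = -1.0
    spikes = []
    for step in range(steps):
        t = step * dt
        if t < refrac_until:          # ignore input during the refractory period
            continue
        v += input_current * dt       # integrate (leak omitted for simplicity)
        if v >= threshold:
            spikes.append(t)          # record spike time
            v = rest                  # reset to resting potential
            refrac_until = t + t_ref
    return spikes

# Constant input of 50 integrates 0.05 per 1-ms step, so the potential crosses
# a threshold of 1.0 every 20 steps: 5 spikes over the 100-ms duration
spike_times = lif_simulate(50.0)
```

In the experiment above, accuracy is then measured as a function of simulation time: the longer the spike trains, the more reliably the output rates encode the class scores, which is why the performance curves rise over the 100 ms window.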
🎉3 References
Some of the theory above is drawn from online sources; in case of any infringement, please contact us for removal.
[1]邹洪建,栾茹.基于FNN的SRM调节器参数自整定的研究[J].微电机,2022,55(11):75-81.DOI:10.15934/j.cnki.micromotors.2022.11.008.
[2]乔楠.基于SSA-FNN的光伏功率超短期预测研究[J].光源与照明,2022(10):98-100.
[3]王韬. 基于卷积神经网络的超短脉冲测量[D].山东大学,2022.DOI:10.27272/d.cnki.gshdu.2022.002930.