👨🎓Author's homepage: 研学社的博客
💥💥💞💞Welcome to this blog❤️❤️💥💥
🏆Blogger's strengths: 🌞🌞🌞The content of this blog strives to be thorough in reasoning and clear in logic, for the reader's convenience.
⛳️Motto: On a hundred-mile journey, ninety miles is only the halfway point.
📋📋📋The outline of this article is as follows: 🎁🎁🎁
Contents
💥1 Overview
This is a register-transfer-level (RTL) hardware simulator implemented in MATLAB, which models a dedicated hardware system for deep belief networks (DBNs). In machine learning, a DBN is a generative graphical model, or equivalently a class of deep neural network, composed of multiple stacked layers of restricted Boltzmann machines (RBMs).
The hardware system is based on what we call the Neuron Machine (NM) hardware architecture, which can be specialized for neural-network systems. Through this simulation, together with other models, we aim to show how our system can achieve a performance-to-resource ratio several orders of magnitude higher than that of CPUs and GPUs, which translates into smaller chip area and lower power consumption, as well as better computing speed with the same resources. Although the source code is written in a sequential language, it is cycle-accurate and simulates the various RTL components, such as registers, arithmetic operators, and memories, in the same way the hardware operates. Readers can run the simulator on the fly and monitor how it works.
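To make concrete what the hardware computes, here is a minimal NumPy sketch of one contrastive-divergence (CD-1) update for a single RBM, the training step that a DBN applies layer by layer. This is an illustration of the algorithm only, not the simulator's RTL code; the layer sizes, the learning rate `lr`, and all variable names here are illustrative choices (only the `0.1 * randn()` weight initialization mirrors the MATLAB listing below).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 4, 0.1          # illustrative sizes, not the post's 784/500/...
W = 0.1 * rng.standard_normal((n_visible, n_hidden))  # weights, cf. 0.1*randn() in the listing
b = np.zeros(n_visible)                       # visible bias
c = np.zeros(n_hidden)                        # hidden bias

v0 = rng.integers(0, 2, size=n_visible).astype(float)  # one binary training vector

# positive phase: sample hidden units from the data
ph0 = sigmoid(v0 @ W + c)
h0 = (rng.random(n_hidden) < ph0).astype(float)

# negative phase: one Gibbs step back to a reconstruction
pv1 = sigmoid(h0 @ W.T + b)
v1 = (rng.random(n_visible) < pv1).astype(float)
ph1 = sigmoid(v1 @ W + c)

# CD-1 parameter updates
W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
b += lr * (v0 - v1)
c += lr * (ph0 - ph1)
```

In a DBN, once one RBM is trained, its hidden activations become the visible data for the next RBM in the stack; the hardware pipeline in the listing below iterates exactly this kind of forward, reverse, and update pass over each layer pair.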
📚2 Results
Code excerpt:
clear all;clf;
% read an image in MATLAB (therefore no train files required)
I = imadjust(imresize(rgb2gray(imread('blason.jpg')),1.15),[0 0.8],[]);
% extract characters from the image
I = 1 - (double(I) / 255); % invert the image
I3 = I(51:78, 71:98+28*7); % character area
for i=0:7 % extract images for 8 characters (AZERGUES)
DB(i+1,:,1) = reshape(I3(:,(1:28)+28*i-i+(i>4)*4),1,[]);
end;
% uncommenting the following line turns this code into MNIST training code.
% it requires additional files from Hinton's site [5].
% converter; makebatches; DB = batchdata; % uncomment this line for MNIST
% set network parameters
Nn = [784 500 500 2000]; % define network size
[Nc Nn(1) Nb] = size(DB); % batch size, pixels, and number of batches
ew = 0.1; eb = 0.1; % learning rates for weight and bias
% define on-chip block memories
MW = zeros(60000,64); % weights
MD = zeros(10000,64); % weight differences
MM = MW; % network topology information (forward)
MR = MW; % network topology information (reverse)
MN = MW + 1; % null connection indicator
MX = zeros(10000,64); % MX
MB = zeros(1,20000); % bias
MA = MB; % bias differences
% initialize SOT and memories
SOT = zeros(11,12);
moff = 0; woff = 0; roff = 0; % memory offset variables
SOT(1,[1,5]) = [1,Nn(1)]; % set for first stage (training data input)
oi = 2; % starting SOT index
for l=1:3 % for each RBM
hb = ceil(Nn(l) / 64); % bpn for hidden layer
hNb = hb * Nn(l+1); % Nb for hidden layer
vb = ceil(Nn(l+1) / 64); % bpn for visible layer
vNb = vb * Nn(l); % Nb for visible layer
for j = 0:Nn(l+1)-1 % for each visible-hidden neuron pair
for i=0:Nn(l)-1
hinx = mod(i+j,hb*64); % position in the connection space (forward)
hbi = floor(hinx/64); % SB index for current neuron (forward)
snu_i = mod(hinx,64); % SNU number
ma = j * hb + hbi + moff + 1; % MM address
wa = j * hb + hbi + woff + 1; % MW address
MW(wa,snu_i+1) = 0.1 * randn(); % set weight
MM(ma,snu_i+1) = i; % set neuron pointer
MN(ma,snu_i+1) = 0; % clear null
vinx = mod(i+j,vb*64); % position in the connection space (reverse)
vbi = floor(vinx/64); % SB index for current neuron (reverse)
ma = i * vb + vbi + hNb + moff + 1; % MM address
ra = i * vb + vbi + roff + 1; % MR address
MR(ra,snu_i+1) = j * hb + hbi; % set MR forward weight pointer
MM(ma,snu_i+1) = j; % set neuron pointer
MN(ma,snu_i+1) = 0; % clear null
end;
end;
k2 = max(Nn);
k4 = max(Nn) * 2;
of1 = mod(l,2) * k2; % MX read offset 2
of2 = mod(l+1,2) * k2; % MX read offset 1
bo = (l - 1) * k4; % MBias offset
% fields in SOT record
% --------------------
% 1: stage type
% 2: N
% 3: bpn
% 4: Nb
% 5: input (0 = no input)
% 6: MM offset
% 7: MX read offset
% 8: MX write offset
% 9: MR offset
%10: MW offset
%11: MBias offset
%12: MA offset
% --------------------
SOT(oi+0,:) = [2,Nn(l+1),hb,hNb,0, moff,of2,of1, 0,woff, bo+k2,l*k2];
SOT(oi+1,:) = [3, Nn(l),vb,vNb,0,moff+hNb,of1, k4,roff,woff,bo,(l-1)*k2];
SOT(oi+2,:) = [4,Nn(l+1),hb,hNb,0, moff, k4, 0, 0,woff, bo+k2,l*k2];
oi = oi + 3;
moff = moff + hNb + vNb; % adjust offsets
woff = woff + hNb;
roff = roff + vNb;
end
SOT(oi-1,5) = Nn(1); % training data input
SOT(oi,1:2) = [9,2]; % set for last stage (go to)
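The least obvious part of the listing is the skewed connection-space indexing, `hinx = mod(i+j, hb*64)`, which spreads each neuron's connections across the 64 SNU lanes of a memory word so that no two connections collide. The sketch below reproduces that forward mapping in Python and checks that every visible-hidden pair (i, j) lands in a unique (word address, lane) slot. The layer sizes are illustrative (smaller than the post's 784/500/500/2000), and the variable names mirror the MATLAB code but are otherwise my own.

```python
import math

Nv, Nh = 100, 80            # illustrative visible/hidden layer sizes
hb = math.ceil(Nv / 64)     # words per hidden neuron ("bpn" in the listing)

slots = set()
for j in range(Nh):         # hidden neuron index
    for i in range(Nv):     # visible neuron index
        hinx = (i + j) % (hb * 64)  # skewed position in the connection space
        hbi = hinx // 64            # word index within this neuron's block
        snu = hinx % 64             # SNU (lane) number within the 64-wide word
        word = j * hb + hbi         # memory word address, cf. ma/wa in the listing
        slots.add((word, snu))

# every one of the Nv*Nh connections occupies a distinct (word, lane) slot
assert len(slots) == Nv * Nh
```

The uniqueness holds because, for a fixed j, the Nv values of `hinx` are consecutive modulo `hb*64` (and `hb*64 >= Nv`), so they never repeat, while the `j*hb` term keeps different hidden neurons in disjoint word ranges. The reverse mapping with `vinx` and `MR` works the same way with the roles of i and j exchanged.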
🎉3 References
Some of the theoretical material is sourced from the Internet; in case of infringement, please contact us for removal.
[1] Byungik Ahn (2023). Hardware simulator for a deep belief network system.