2024 Intelligent Computing Systems Lab 2_2
An error when building the network layers
After building the three-layer network model and running the test, the following error appeared:
Loading parameters from file stu_upload/weight.npy
loading params for layer fc1 ...
loading params for layer fc2 ...
loading params for layer fc3 ...
[2024-3-18 15:10:37] [CNNL] [Warning]:[cnnlBatchMatMul] is deprecated and will be removed in the future release, please use [cnnlBatchMatMulBCast] instead.
Segmentation fault (core dumped)
A Segmentation fault (core dumped) is usually caused by an out-of-bounds memory access.
The network structure at the time was as follows:
# TODO: build the three-layer network structure with pycnnl
self.net.setInputShape(batch_size, input_size, 1, 1)  # set the input shape
# fc1
input_shapem1 = pycnnl.IntVector(4)
input_shapem1[0] = batch_size
input_shapem1[1] = 1
input_shapem1[2] = 1
input_shapem1[3] = input_size
weight_shapem1 = pycnnl.IntVector(4)
weight_shapem1[0] = batch_size
weight_shapem1[1] = 1
weight_shapem1[2] = input_size
weight_shapem1[3] = hidden1
output_shapem1 = pycnnl.IntVector(4)
output_shapem1[0] = batch_size
output_shapem1[1] = 1
output_shapem1[2] = 1
output_shapem1[3] = hidden1
self.net.createMlpLayer('fc1', input_shapem1, weight_shapem1, output_shapem1)
# relu1
self.net.createReLuLayer('relu1')
# fc2
input_shapem2 = pycnnl.IntVector(4)
input_shapem2[0] = batch_size
input_shapem2[1] = 1
input_shapem2[2] = 1
input_shapem2[3] = hidden1
weight_shapem2 = pycnnl.IntVector(4)
weight_shapem2[0] = batch_size
weight_shapem2[1] = 1
weight_shapem2[2] = hidden1
weight_shapem2[3] = hidden2
output_shapem2 = pycnnl.IntVector(4)
output_shapem2[0] = batch_size
output_shapem2[1] = 1
output_shapem2[2] = 1
output_shapem2[3] = hidden2
self.net.createMlpLayer('fc2', input_shapem2, weight_shapem2, output_shapem2)
# relu2
self.net.createReLuLayer('relu2')
# fc3
input_shapem3 = pycnnl.IntVector(4)
input_shapem3[0] = batch_size
input_shapem3[1] = 1
input_shapem3[2] = 1
input_shapem3[3] = hidden2
weight_shapem3 = pycnnl.IntVector(4)
weight_shapem3[0] = batch_size
weight_shapem3[1] = 1
weight_shapem3[2] = hidden2
weight_shapem3[3] = out_classes
output_shapem3 = pycnnl.IntVector(4)
output_shapem3[0] = batch_size
output_shapem3[1] = 1
output_shapem3[2] = 1
output_shapem3[3] = out_classes
self.net.createMlpLayer('fc3', input_shapem3, weight_shapem3, output_shapem3)
# softmax
self.net.createSoftmaxLayer('softmax', input_shapem3, axis=1)
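As a sanity check on the shape chain above, here is a minimal NumPy sketch (independent of pycnnl; the concrete sizes are made-up placeholders) that mirrors the fc1→fc3 batched matrix multiplies and confirms each output shape matches the IntVector shapes passed to createMlpLayer:

```python
import numpy as np

batch_size, input_size = 4, 784          # placeholder sizes
hidden1, hidden2, out_classes = 32, 16, 10

# input as (batch, 1, 1, input_size); weights as (batch, 1, in, out),
# mirroring the IntVector shapes above
x = np.random.rand(batch_size, 1, 1, input_size)
w1 = np.random.rand(batch_size, 1, input_size, hidden1)
w2 = np.random.rand(batch_size, 1, hidden1, hidden2)
w3 = np.random.rand(batch_size, 1, hidden2, out_classes)

h1 = np.maximum(np.matmul(x, w1), 0)   # fc1 + relu1
h2 = np.maximum(np.matmul(h1, w2), 0)  # fc2 + relu2
out = np.matmul(h2, w3)                # fc3

assert h1.shape == (batch_size, 1, 1, hidden1)
assert h2.shape == (batch_size, 1, 1, hidden2)
assert out.shape == (batch_size, 1, 1, out_classes)
```

NumPy's matmul treats the leading (batch, 1) dimensions as a stack of matrices, which is the same role the first two IntVector entries play here.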
Tweaking the hidden-layer sizes, batch_size, and the number of iterations showed that none of these were the cause.
Adding print statements to narrow things down, the error turned out to occur when calling forward.
Inspecting the core file with gdb only showed that the error came from the .py file, without revealing which line.
Back and forth, stumbling along.
A new discovery
After a long while, I suddenly noticed in the lab manual that the input_shape passed to the softmax layer is a list of length 3, whereas the input_shapem3 I had been passing has length 4.
After the fix, the softmax layer is defined as follows:
# softmax
input_shapem4 = pycnnl.IntVector(3)
input_shapem4[0] = batch_size
input_shapem4[1] = 1
input_shapem4[2] = out_classes
self.net.createSoftmaxLayer('softmax', input_shapem4, axis=1)
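For reference, here is what the layer computes, as a numerically stable softmax in plain NumPy over a (batch, 1, out_classes) tensor. The NumPy axis choice below (normalizing over the class axis) is my assumption; pycnnl's own axis semantics are whatever the lab manual specifies:

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the per-row max for numerical stability, then normalize
    shifted = x - x.max(axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

batch_size, out_classes = 4, 10                 # placeholder sizes
logits = np.random.rand(batch_size, 1, out_classes)
probs = softmax(logits, axis=-1)                # normalize over classes

assert probs.shape == (batch_size, 1, out_classes)
assert np.allclose(probs.sum(axis=-1), 1.0)     # each row sums to 1
```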
With that, the program runs. As for reaching a 100% score...
(that might require adding a sleep inside the forward function)
One more note: a zip created directly on macOS contains hidden files, so you can write a zip.py inside stu_upload to do the compression instead.
Compress the files directly, not the folder; otherwise the grader reports a file-not-found error.
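A minimal zip.py sketch with Python's standard zipfile module (the file names in FILES are placeholders; substitute whatever the grader actually expects). Writing each file with arcname set to its basename stores it at the archive root, so there is no wrapping folder and no __MACOSX/.DS_Store entries:

```python
import os
import zipfile

# files to submit; adjust to the actual file names required by the grader
FILES = ['mnist_mlp_cpu.py', 'weight.npy']

def make_zip(out_path='stu_upload.zip', files=FILES):
    with zipfile.ZipFile(out_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            # arcname = basename stores the file at the archive root,
            # avoiding both the wrapping folder and macOS hidden files
            zf.write(f, arcname=os.path.basename(f))

if __name__ == '__main__':
    make_zip()
```

Run it from stu_upload with `python zip.py`; the resulting archive lists only the named files at its top level.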