While debugging a small network I had written, I ran into the following error:
I1221 14:40:57.487751 15003 sgd_solver.cpp:106] Iteration 0, lr = 0.001
F1221 14:40:57.493633 15003 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
@ 0x7f2f64ed0daa (unknown)
@ 0x7f2f64ed0ce4 (unknown)
@ 0x7f2f64ed06e6 (unknown)
@ 0x7f2f64ed3687 (unknown)
@ 0x7f2f655da931 caffe::SyncedMemory::to_gpu()
@ 0x7f2f655d9c99 caffe::SyncedMemory::mutable_gpu_data()
@ 0x7f2f654d4462 caffe::Blob<>::mutable_gpu_data()
@ 0x7f2f6561e26c caffe::SGDSolver<>::ComputeUpdateValue()
@ 0x7f2f6561ec63 caffe::SGDSolver<>::ApplyUpdate()
@ 0x7f2f654cf68c caffe::Solver<>::Step()
@ 0x7f2f654cfe99 caffe::Solver<>::Solve()
@ 0x408b0b train()
@ 0x405e6c main
@ 0x7f2f6372bf45 (unknown)
@ 0x406773 (unknown)
@ (nil) (unknown)
First I checked GPU memory usage with:
nvidia-smi
and found no other processes occupying the GPU.
Then I searched online and found the following answer:
The error you get is indeed out of memory, but it's not the RAM, but rather GPU memory (note that the error comes from CUDA).
Usually, when caffe is out of memory - the first thing to do is reduce the batch size (at the cost of gradient accuracy), but since you are already at batch size = 1...
Are you sure batch size is 1 for both TRAIN and TEST phases?
In short, batch_size is too large: too many images are loaded onto the GPU at once, exceeding its memory. The fix is to reduce the batch_size for both the TRAIN and TEST phases in train.prototxt.
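For reference, the batch_size lives in the data layers of the prototxt. The snippet below is a hypothetical sketch (layer names, LMDB paths, and the original/reduced values are illustrative, not taken from the log above):

```
# Hypothetical excerpt from train.prototxt -- names and paths are assumptions
layer {
  name: "data"
  type: "Data"
  include { phase: TRAIN }
  data_param {
    source: "train_lmdb"   # assumed path
    batch_size: 16         # was 64; reduce until training fits in GPU memory
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  include { phase: TEST }
  data_param {
    source: "test_lmdb"    # assumed path
    batch_size: 8          # reduce the TEST batch size too, as the answer suggests
    backend: LMDB
  }
}
```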
That finally solved it. Nice.
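As a sanity check, you can estimate how much GPU memory a single data blob needs and how it scales with batch_size. This is a rough sketch assuming hypothetical 3-channel 224x224 float32 inputs (actual usage is much higher, since every layer's activations, weights, and the solver's update buffers also live on the GPU):

```python
def blob_bytes(batch_size, channels=3, height=224, width=224, dtype_bytes=4):
    """Bytes needed for one N x C x H x W blob of float32 data.

    Dimensions here are illustrative assumptions, not from the log above.
    """
    return batch_size * channels * height * width * dtype_bytes

# Halving batch_size halves the data blob's footprint (in MiB):
print(blob_bytes(32) / 1024**2)
print(blob_bytes(16) / 1024**2)
```

This is why reducing batch_size is the first thing to try: memory for activations grows linearly with it, at the cost of noisier gradient estimates.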