Thoughts on training an SSD model with tensorflow-gpu vs. CPU (and a fix for insufficient GPU memory)

Copyright notice: This is an original article by the author, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/qq_38898129/article/details/81356047

   The SSD model can be trained on either GPU or CPU, but in my experience the GPU takes under a second per step, while the CPU, even with AVX2 enabled, needs nearly a minute per step. That makes GPU training roughly 60x faster than CPU training, which speeds things up enormously.

  Moreover, when I previously trained the SSD model on the CPU version, 1000 steps took 7-8 hours; on the GPU, about 10 minutes is enough. So for choosing a TensorFlow build, GPU should be the first choice. Of course, if you are not running large jobs, and only working through the simple examples in the TensorFlow tutorials with small networks, either the CPU or GPU version is fine.
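A quick back-of-the-envelope check of the numbers above, using the author's own figures (about 1 s/step on GPU, the log below shows ~0.84 s, and nearly 60 s/step on CPU):

```python
# Figures taken from the text; actual step times vary by model and hardware.
gpu_sec_per_step = 1.0    # "under a second per step" on the GTX 950M
cpu_sec_per_step = 60.0   # "nearly a minute per step" on CPU with AVX2

speedup = cpu_sec_per_step / gpu_sec_per_step
gpu_min_per_1000 = 1000 * gpu_sec_per_step / 60  # time for 1000 steps

print(f"speedup: {speedup:.0f}x")                     # → 60x
print(f"1000 GPU steps: {gpu_min_per_1000:.1f} min")  # → 16.7 min
```

At ~1 s/step, 1000 steps come out to roughly a quarter of an hour, which is in the same ballpark as the "about 10 minutes" quoted above.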

  What if my GPU only has 2 GB of memory?

  My GPU is a GTX 950M with 2 GB of memory. With balancap's SSD model, training may fail with an out-of-memory error: the default batch_size of 32 is more than the card can hold at once. The fix is simply to reduce batch_size until the error goes away. In my case, batch_size=4 let the GPU run smoothly!
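The "shrink until it fits" strategy can be sketched generically. This is a minimal simulation, not the repo's actual code: `find_fitting_batch_size` and `fake_step` are hypothetical names, and `MemoryError` stands in for the GPU out-of-memory error TensorFlow raises (`tf.errors.ResourceExhaustedError`):

```python
def find_fitting_batch_size(train_step, start=32, minimum=1):
    """Halve the batch size until one training step fits in GPU memory."""
    bs = start
    while bs >= minimum:
        try:
            train_step(bs)   # attempt a single step at this batch size
            return bs
        except MemoryError:  # stand-in for an out-of-GPU-memory error
            bs //= 2
    raise RuntimeError("even the minimum batch size does not fit")

# Simulated: pretend a 2 GB card can only hold 4 images per batch.
def fake_step(bs):
    if bs > 4:
        raise MemoryError

print(find_fitting_batch_size(fake_step))  # → 4
```

In practice you would just pass the smaller value on the command line (in balancap's SSD-Tensorflow the training script accepts a `--batch_size` flag; check the flags in your version of the repo).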

Here is the training log:

INFO:tensorflow:global step 700: loss = 75.7585 (0.838 sec/step)
INFO:tensorflow:global step 710: loss = 68.1864 (0.847 sec/step)
INFO:tensorflow:global step 720: loss = 141.8700 (0.833 sec/step)
INFO:tensorflow:global step 730: loss = 70.4635 (0.839 sec/step)
INFO:tensorflow:global step 740: loss = 17.7308 (0.844 sec/step)
INFO:tensorflow:global step 750: loss = 20.5915 (0.852 sec/step)
INFO:tensorflow:global step 760: loss = 270.5725 (0.882 sec/step)
INFO:tensorflow:Recording summary at step 761.
INFO:tensorflow:global step 770: loss = 69.5534 (0.843 sec/step)
INFO:tensorflow:global step 780: loss = 27.5434 (0.820 sec/step)
INFO:tensorflow:global step 790: loss = 78.9974 (0.838 sec/step)
INFO:tensorflow:global step 800: loss = 62.6840 (0.824 sec/step)
INFO:tensorflow:global step 810: loss = 40.8120 (0.820 sec/step)
INFO:tensorflow:global step 820: loss = 38.0882 (0.847 sec/step)
INFO:tensorflow:global step 830: loss = 73.9964 (0.850 sec/step)
INFO:tensorflow:Recording summary at step 831.
INFO:tensorflow:global step 840: loss = 24.8218 (0.856 sec/step)
INFO:tensorflow:global step 850: loss = 101.3560 (0.826 sec/step)
INFO:tensorflow:global step 860: loss = 63.4505 (0.847 sec/step)
INFO:tensorflow:global step 870: loss = 31.3807 (0.844 sec/step)
INFO:tensorflow:global step 880: loss = 20.8094 (0.846 sec/step)
INFO:tensorflow:global step 890: loss = 88.3870 (0.863 sec/step)