Theano test reports the CPU is used instead of the GPU

After CUDA and Theano have been configured, it is time to test Theano.

To check whether your GPU is actually being used, copy the code below into a Python file (I named it test_gpu1.py) and run it.

from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print f.maker.fgraph.toposort()
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print 'Looping %d times took' % iters, t1 - t0, 'seconds'
print 'Result is', r
# If the compiled graph still contains plain (CPU) Elemwise ops, the
# computation ran on the CPU; on the GPU they are replaced by GpuElemwise ops.
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print 'Used the cpu'
else:
    print 'Used the gpu'

Run it directly from the terminal command line: python test_gpu1.py

When the script finished, the output was "Used the cpu"!! (After some digging, it turned out this happens because Theano's default configuration uses the CPU rather than the GPU.)
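
Before changing anything, it can help to confirm what Theano is defaulting to. Below is a minimal sketch (assuming a standard Theano install) that simply prints the active config values; unless overridden, device defaults to 'cpu' and floatX to 'float64':

# Minimal check of what Theano is currently configured to use.
import theano

print theano.config.device   # 'cpu' by default
print theano.config.floatX   # 'float64' by default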

Repeatedly adding the environment variables under the home directory did not fix it. The solution that finally worked was to specify the configuration directly on the command line when running test_gpu1.py:

THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python test_gpu1.py


Likewise, to force the CPU instead, run:

THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 python test_gpu1.py
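
As an aside, THEANO_FLAGS is an ordinary environment variable, so the same flags can also be set from inside a script. The sketch below is one way to do this and assumes theano has not yet been imported in that process, since the flags are only read at import time:

# Sketch: set THEANO_FLAGS programmatically, BEFORE the first import of theano.
import os
os.environ['THEANO_FLAGS'] = 'mode=FAST_RUN,device=gpu,floatX=float32'

import theano  # picks up the flags set above
print theano.config.device, theano.config.floatX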


According to the official Theano documentation, there are two ways to set these configuration options:

The first is the THEANO_FLAGS environment variable (the method used above).

The second is to create a .theanorc file in the home directory (home/hf for user hf) with contents like the following:

[global]
floatX=float32
device=gpu0

[lib]
cnmem=1

However, the second method did not work for me, while the first one definitely did!

**** The reason the second method failed: at the time I was working in the console as root, but the .theanorc file had been created under home/hf (which is user hf's home directory!!). Once I created the .theanorc in root's own home directory (cd $HOME), it worked!!!
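
To avoid this trap, it is worth checking, from the interpreter and user you actually run, which .theanorc file would be picked up. The following is a rough sketch, assuming Theano's usual lookup of $HOME/.theanorc (with the THEANORC environment variable, if set, taking precedence):

# Sanity check: which .theanorc will the current interpreter read?
# Running as root vs. as a normal user can resolve to different files.
import os

rcfile = os.environ.get('THEANORC', os.path.expanduser('~/.theanorc'))
status = 'exists' if os.path.exists(rcfile) else 'MISSING'
print rcfile, status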


