Configuring the GPU Version of Theano on Windows

I'm taking Coursera's Advanced Machine Learning, and the Week 4 assignment uses PyMC3, which is apparently built on a Theano backend. The CPU version is unbearably slow, though: a single Markov chain Monte Carlo run took about 10 hours. So I switched to the GPU version.

To avoid breaking my existing setup, I created a new environment in Anaconda. (For background on Anaconda, see the article I translated earlier.)

conda create -n theano-gpu python=3.4

(The GPU version of Theano apparently doesn't support the newest Python, so to be safe I installed an older one.)

conda install theano pygpu

This pulls in quite a few dependencies; conda should sort them out for you, and anything missing can be installed by following the official documentation.
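As a quick sanity check (my own sketch, not part of the original walkthrough), you can confirm that both packages import cleanly in the new environment:

import theano
import pygpu

# If either import fails, the conda install didn't finish cleanly.
print("theano:", theano.__version__)
print("pygpu:", pygpu.__version__)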

As for installing CUDA and cuDNN, see the tutorial I wrote on installing TensorFlow.

Unlike TensorFlow, Theano doesn't ship separate GPU and CPU packages; which device it uses is decided by a configuration file. I learned this from other people's blogs:

Once the Theano environment is set up, just add a .theanorc.txt file under C:\Users\<your username>.
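If you're unsure where that is, a one-liner (my addition, assuming Theano's standard home-directory lookup) prints the exact path the file should live at:

import os

# Theano looks for .theanorc / .theanorc.txt in the user's home directory.
print(os.path.join(os.path.expanduser('~'), '.theanorc.txt'))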

Contents of .theanorc.txt:

[global]
openmp = False
device = cuda
floatX = float32
base_compiler = C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
allow_input_downcast = True

[lib]
cnmem = 0.75

[blas]
ldflags =

[gcc]
cxxflags = -IC:\Users\lyh\Anaconda2\MinGW

[nvcc]
fastmath = True
# linker path and arch flag share one entry: duplicate keys in a section are not allowed
flags = -LC:\Users\lyh\Anaconda2\libs -arch=sm_30
compiler_bindir = C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin

Note that in newer versions, the flag that enables the GPU changed from device=gpu to device=cuda.
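To confirm Theano actually picked up the file, a minimal check (my own addition) is to print the active settings, which should match what .theanorc.txt declares:

import theano

# Should report 'cuda' and 'float32' if the config file was found.
print(theano.config.device)
print(theano.config.floatX)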

Then test whether the GPU is actually used:

from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, tensor.Elemwise) and
              ('Gpu' not in type(x.op).__name__)
              for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')

Output:

[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, vector)>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.377000 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu

At this point the setup is done; the GpuElemwise and HostFromGpu ops in the printed graph confirm the computation ran on the GPU.

Then, inside the assignment, it shows the Quadro card is active:

[Screenshot: Theano reports the Quadro GPU as the active device]

But there is still one warning:

WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.

I honestly don't know how to get rid of this one. (As far as I can tell, it only means Theano couldn't find an optimized BLAS library to link against and fell back to NumPy's C-API routines, so it costs some speed on CPU-side BLAS calls but doesn't affect correctness. Pointing ldflags in the [blas] section at a real BLAS such as MKL, e.g. ldflags = -lmkl_rt plus the matching -L library path, is a commonly suggested cure, though I haven't verified it.)

Then, further on, running this cell:

with pm.Model() as logistic_model:
    # Since it is unlikely that the dependency between the age and salary is linear, we will include age squared
    # into features so that we can model dependency that favors certain ages.
    # Train Bayesian logistic regression model on the following features: sex, age, age^2, educ, hours
    # Use pm.sample to run MCMC to train this model.
    # To specify the particular sampler method (Metropolis-Hastings) to pm.sample,
    # use `pm.Metropolis`.
    # Train your model for 400 samples.
    # Save the output of pm.sample to a variable: this is the trace of the sampling procedure and will be used
    # to estimate the statistics of the posterior distribution.
    #### YOUR CODE HERE ####
    pm.glm.GLM.from_formula('income_more_50K ~ sex + age + age_square + educ + hours',
                            data, family=pm.glm.families.Binomial())

with logistic_model:
    trace = pm.sample(400, step=[pm.Metropolis()])  # nchains=1 works for gpu model
    ### END OF YOUR CODE ###

produced the following error:

GpuArrayException: cuMemcpyDtoHAsync(dst, src->ptr + srcoff, sz, ctx->mem_s): CUDA_ERROR_INVALID_VALUE: invalid argument

This problem was eventually solved by a helpful soul on GitHub:

So njobs will spawn multiple chains to run in parallel. If the model uses the GPU there will be a conflict. We recently added nchains where you can still run multiple chains. So I think running pm.sample(niter, nchains=4, njobs=1) should give you what you want.

So I took

trace = pm.sample(400, step=[pm.Metropolis()])  # nchains=1 works for gpu model

and added nchains, which fixed it; presumably it was a problem with parallel chains fighting over the single GPU context:

trace = pm.sample(400, step=[pm.Metropolis()], nchains=1, njobs=1)  # nchains=1 works for gpu model

Also,

plot_traces(trace, burnin=200)

raised an error about pm.df_summary; replacing pm.df_summary with pm.summary fixed it (in newer PyMC3 releases df_summary was renamed to summary). That fix also came from searching GitHub.
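For reference, a minimal sketch of the renamed call (assuming the trace variable from the sampling cell above, and a PyMC3 release where df_summary has become summary):

import pymc3 as pm

# pm.summary (formerly pm.df_summary) returns a pandas DataFrame of
# posterior statistics (mean, sd, credible interval) per variable.
print(pm.summary(trace[200:]))  # drop the first 200 samples as burn-in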
