[Machine Learning] Compiling the GPU version of the Keras neural network library on Windows

Copyright notice: This is an original article by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/lpsl1882/article/details/52005400

Theano, Caffe, and TensorFlow are the mainstream machine-learning libraries for Python. Keras is a high-level wrapper around Theano/TensorFlow; since TensorFlow did not support Windows at the time, Keras on Windows can only use Theano as its backend. I recommend installing Anaconda as your primary Python distribution on Windows.
Installing Keras is simple; run the following in order:
conda install mingw libpython
pip install keras
If Theano is not installed yet, pip will download and install it automatically. If you do not install MinGW via Anaconda, then import keras will warn that no g++ compiler was found and that C-level optimization is unavailable, leaving only Python-level computation. In that state GPU acceleration cannot be used.
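A quick way to confirm that the MinGW toolchain is actually visible to Python is to look for g++ on the PATH. This sketch uses only the standard library (it is not a Keras or Theano API):

```python
import shutil

# Theano needs a C++ compiler on PATH for its optimized backend; if this
# prints None, the "no g++ found" warning above is what import keras will show.
gxx = shutil.which("g++")
print("g++ found at:", gxx)
```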
After installation, go to C:/Users/<your username>/, create or edit the file .theanorc.txt, and enter:
[global]
device = gpu
floatX=float32
allow_input_downcast=True # seems to run with or without this line
[nvcc]
flags = -LF:/Miniconda2/Lib # Anaconda install path
compiler_bindir = D:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin # VS2013 path
fastmath = True
Here I installed CUDA and the other libraries to their default locations. Anaconda already ships with a configured MinGW and the MKL acceleration library, so the Theano config file needs no further entries. Note that Theano only supports Visual Studio 2010 and later; I used VS2013.
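The floatX = float32 setting matters because GPUs of that generation are far faster at single precision, and allow_input_downcast lets Theano silently cast float64 inputs down to float32. The cast itself is nothing more than an astype, as this NumPy-only sketch (no Theano required) shows:

```python
import numpy as np

x = np.random.rand(4)            # NumPy arrays default to float64
x32 = x.astype(np.float32)       # the cast allow_input_downcast performs implicitly
print(x.dtype, "->", x32.dtype)  # float64 -> float32
```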
Now open a command line and run this example (source: http://bonerkiller.coding.me/2016/04/09/2016-04-09-Deepin%20CUDA%E5%AE%89%E8%A3%85%E5%8F%8AKeras%E4%BD%BF%E7%94%A8GPU%E6%A8%A1%E5%BC%8F%E8%BF%90%E8%A1%8C/):

from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.embeddings import Embedding
from keras.layers.convolutional import Convolution1D, MaxPooling1D
from keras.datasets import imdb


# set parameters:
max_features = 5000
maxlen = 100
batch_size = 32
embedding_dims = 100
nb_filter = 250
filter_length = 3
hidden_dims = 250
nb_epoch = 2

print('Loading data...')
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features,
                                                      test_split=0.2)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')

print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)

print('Build model...')
model = Sequential()

# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features, embedding_dims, input_length=maxlen))
model.add(Dropout(0.25))

# we add a Convolution1D, which will learn nb_filter
# word group filters of size filter_length:
model.add(Convolution1D(nb_filter=nb_filter,
                        filter_length=filter_length,
                        border_mode='valid',
                        activation='relu',
                        subsample_length=1))
# we use standard max pooling (halving the output of the previous layer):
model.add(MaxPooling1D(pool_length=2))

# We flatten the output of the conv layer,
# so that we can add a vanilla dense layer:
model.add(Flatten())

# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.25))
model.add(Activation('relu'))

# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop')
model.fit(X_train, y_train, batch_size=batch_size,
          nb_epoch=nb_epoch, show_accuracy=True,
          validation_data=(X_test, y_test))
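To make clear what sequence.pad_sequences does in the example above, here is a minimal NumPy re-implementation of its default behavior (left-pad with zeros, truncate from the front). This is an illustrative sketch, not the actual Keras code:

```python
import numpy as np

def pad_sequences_sketch(seqs, maxlen):
    # Left-pad each sequence with zeros (Keras's padding='pre' default)
    # and keep only the last maxlen tokens (truncating='pre' default).
    out = np.zeros((len(seqs), maxlen), dtype=np.int64)
    for i, s in enumerate(seqs):
        trunc = s[-maxlen:]
        out[i, maxlen - len(trunc):] = trunc
    return out

print(pad_sequences_sketch([[1, 2, 3], [4, 5, 6, 7, 8]], maxlen=4))
```

The uniform (n_samples, maxlen) matrix this produces is exactly the shape the Embedding layer expects via input_length=maxlen.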

While it runs, CUDA's nvcc will keep compiling libraries, so expect to wait a while; this is normal. The temporary files are all placed under C:/Users/<your username>/AppData/Local/Theano and C:/Users/<your username>/.keras.
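When a compiled kernel cache gets corrupted, deleting that Theano cache directory forces a clean rebuild on the next run. A small standard-library sketch that just computes the two paths mentioned above (the destructive rmtree call is deliberately left commented out):

```python
import os

home = os.path.expanduser("~")
theano_cache = os.path.join(home, "AppData", "Local", "Theano")
keras_dir = os.path.join(home, ".keras")
print(theano_cache)
print(keras_dir)
# To force a full recompile, one could remove the cache:
# import shutil; shutil.rmtree(theano_cache, ignore_errors=True)
```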
