Keras multi-GPU training

Complete example

'''Trains a simple deep NN on the MNIST dataset.
Gets to 98.40% test accuracy after 20 epochs
(there is *a lot* of margin for parameter tuning).
2 seconds per epoch on a K520 GPU.
'''
 
from __future__ import print_function
 
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
from keras.utils import multi_gpu_model
 
batch_size = 128
num_classes = 10
epochs = 20
 
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
 
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
 
 
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
 
 
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
 
 
parallel_model = multi_gpu_model(model, 2)
parallel_model.summary()
 
 
parallel_model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
 
 
history = parallel_model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = parallel_model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

model.save_weights('/path/to/save/model.h5')

The output of the final run is as follows:

root@deeplearning:/opt/soft/keras/examples# python mnist_mlp_multi_gpu.py 
Using TensorFlow backend.
60000 train samples
10000 test samples
2018-02-17 18:14:41.147652: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-02-17 18:14:41.304376: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-02-17 18:14:41.304757: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
pciBusID: 0000:00:07.0
totalMemory: 7.43GiB freeMemory: 7.32GiB
2018-02-17 18:14:41.399113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-02-17 18:14:41.399487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 1 with properties: 
name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
pciBusID: 0000:00:08.0
totalMemory: 7.43GiB freeMemory: 7.32GiB
2018-02-17 18:14:41.399579: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Device peer to peer matrix
2018-02-17 18:14:41.399613: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1051] DMA: 0 1 
2018-02-17 18:14:41.399627: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1061] 0:   Y N 
2018-02-17 18:14:41.399632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1061] 1:   N Y 
2018-02-17 18:14:41.399650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla P4, pci bus id: 0000:00:07.0, compute capability: 6.1)
2018-02-17 18:14:41.399663: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:1) -> (device: 1, name: Tesla P4, pci bus id: 0000:00:08.0, compute capability: 6.1)
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
dense_1_input (InputLayer)      (None, 784)          0                                            
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 784)          0           dense_1_input[0][0]              
__________________________________________________________________________________________________
lambda_2 (Lambda)               (None, 784)          0           dense_1_input[0][0]              
__________________________________________________________________________________________________
sequential_1 (Sequential)       (None, 10)           669706      lambda_1[0][0]                   
                                                                 lambda_2[0][0]                   
__________________________________________________________________________________________________
dense_3 (Concatenate)           (None, 10)           0           sequential_1[1][0]               
                                                                 sequential_1[2][0]               
==================================================================================================
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
__________________________________________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 4s 61us/step - loss: 0.2460 - acc: 0.9234 - val_loss: 0.1110 - val_acc: 0.9647
Epoch 2/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.1018 - acc: 0.9693 - val_loss: 0.0804 - val_acc: 0.9766
Epoch 3/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0742 - acc: 0.9776 - val_loss: 0.0745 - val_acc: 0.9771
Epoch 4/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0599 - acc: 0.9820 - val_loss: 0.0698 - val_acc: 0.9793
Epoch 5/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0514 - acc: 0.9844 - val_loss: 0.0761 - val_acc: 0.9793
Epoch 6/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0425 - acc: 0.9872 - val_loss: 0.0775 - val_acc: 0.9800
Epoch 7/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0390 - acc: 0.9886 - val_loss: 0.0840 - val_acc: 0.9813
Epoch 8/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0334 - acc: 0.9903 - val_loss: 0.0890 - val_acc: 0.9805
Epoch 9/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0310 - acc: 0.9904 - val_loss: 0.0909 - val_acc: 0.9818
Epoch 10/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0277 - acc: 0.9920 - val_loss: 0.0877 - val_acc: 0.9825
Epoch 11/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0264 - acc: 0.9924 - val_loss: 0.0905 - val_acc: 0.9832
Epoch 12/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0257 - acc: 0.9926 - val_loss: 0.0868 - val_acc: 0.9836
Epoch 13/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0232 - acc: 0.9936 - val_loss: 0.0944 - val_acc: 0.9837
Epoch 14/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0217 - acc: 0.9941 - val_loss: 0.1022 - val_acc: 0.9834
Epoch 15/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0223 - acc: 0.9938 - val_loss: 0.0952 - val_acc: 0.9816
Epoch 16/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0190 - acc: 0.9949 - val_loss: 0.1015 - val_acc: 0.9840
Epoch 17/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0197 - acc: 0.9946 - val_loss: 0.1161 - val_acc: 0.9848
Epoch 18/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0210 - acc: 0.9945 - val_loss: 0.1078 - val_acc: 0.9822
Epoch 19/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0190 - acc: 0.9950 - val_loss: 0.1235 - val_acc: 0.9832
Epoch 20/20
60000/60000 [==============================] - 3s 49us/step - loss: 0.0185 - acc: 0.9954 - val_loss: 0.0980 - val_acc: 0.9843
Test loss: 0.0980480428516
Test accuracy: 0.9843

Function walkthrough and common issues

When training with Keras, a single GPU is used by default. Even if os.environ['CUDA_VISIBLE_DEVICES'] is set to two GPUs, both merely have their memory filled; after additionally setting tf.GPUOptions(allow_growth=True) you can see clearly that only the first GPU is actually used and the second one does no work at all. To train on multiple GPUs, follow the steps below (a setup sketch is given right after the snippet in step (2)):
(1) Import the multi_gpu_model function: from keras.utils import multi_gpu_model
(2) After defining the model, use multi_gpu_model to set how many GPUs the model is trained on, as follows:

model = Model(...)  # define the model architecture
model_parallel = multi_gpu_model(model, gpus=n)  # n = number of GPUs to use (the keyword is gpus, not gpu)
model_parallel.compile(...)  # note: compile model_parallel, not model
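
As referenced in the introduction above, here is a minimal sketch of the GPU-visibility and allow_growth setup (Keras 2.x with the TensorFlow 1.x backend; the GPU indices '0,1' are only an example):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # expose exactly two GPUs; set this before importing TensorFlow/Keras

import tensorflow as tf
from keras import backend as K

# allocate GPU memory on demand instead of grabbing all of it up front
config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
K.set_session(tf.Session(config=config))

# ...then define model and wrap it with multi_gpu_model(model, gpus=2) as in step (2)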

With the code above, model remains the original template model kept on the CPU, while model_parallel is the replica that is copied to each GPU for gradient computation. If the batch size is 128 and n = 2 GPUs are used, each GPU independently processes 128 / 2 = 64 images; the gradients computed by the two GPUs are then merged on the CPU, the weights are updated there, and the updated model is copied back to the GPUs for the next step.
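
To make this concrete, below is a rough, simplified sketch of the idea behind multi_gpu_model (not the library's actual implementation; the helper name make_parallel_sketch is made up): the batch is sliced, each slice is pushed through the shared model on its own GPU, and the per-GPU outputs are concatenated back together on the CPU. This is also why the summary printed above shows two Lambda layers feeding a Concatenate layer.

import tensorflow as tf
from keras.layers import Input, Lambda, concatenate
from keras.models import Model

def make_parallel_sketch(model, n_gpus):
    def get_slice(x, index, parts):
        # take the index-th of `parts` equal slices along the batch dimension
        batch = tf.shape(x)[0]
        start = index * batch // parts
        stop = (index + 1) * batch // parts
        return x[start:stop]

    inp = Input(shape=model.input_shape[1:])
    gpu_outputs = []
    for i in range(n_gpus):
        with tf.device('/gpu:%d' % i):
            sliced = Lambda(get_slice, arguments={'index': i, 'parts': n_gpus})(inp)
            gpu_outputs.append(model(sliced))  # the same weights are reused on every GPU
    with tf.device('/cpu:0'):
        merged = concatenate(gpu_outputs, axis=0)  # stitch the sub-batches back together
    return Model(inputs=inp, outputs=merged)
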
(3) As the above shows, training is still carried out on model_parallel:

model_parallel.fit(...)  # note: fit on model_parallel, not model

(4) When saving the model: model_parallel stores the number of GPUs used during training, so if you save model_parallel directly, it can only be loaded again with the same number of GPUs; otherwise the trained model cannot be used. To make later use convenient, save only the template model kept on the CPU, i.e. model:

model.save(...)  # note: save model, not model_parallel
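
The model saved this way can later be reloaded without any multi-GPU machinery, e.g. for single-GPU or CPU inference. A minimal sketch (the path and the x_test data are placeholders borrowed from the example above):

from keras.models import load_model

restored = load_model('/path/to/save/model.h5')  # plain single-device model, no multi_gpu_model needed
predictions = restored.predict(x_test, batch_size=128)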

If you use callbacks, what they save by default is also model_parallel (because the training call is made on model_parallel), so to save model from a callback you have to define the callback yourself:

class OwnCheckpoint(keras.callbacks.Callback):
    def __init__(self, model):
        self.model_to_save = model

    def on_epoch_end(self, epoch, logs=None):  # the logs argument must be kept in the signature
        self.model_to_save.save('model_advanced/model_%d.h5' % epoch)

checkpoint = OwnCheckpoint(model)
model_parallel.fit_generator(..., callbacks=[checkpoint])

The version above saves a model at every epoch, which takes a lot of disk space; it can be changed to the following, which only saves when the validation loss improves:

import numpy as np

class CustomModelCheckpoint(keras.callbacks.Callback):

    def __init__(self, model, path):
        # keep the template model under its own name: Keras overwrites self.model
        # with the model being trained (i.e. model_parallel) via set_model()
        self.model_to_save = model
        self.path = path
        self.best_loss = np.inf

    def on_epoch_end(self, epoch, logs=None):
        val_loss = logs['val_loss']
        if val_loss < self.best_loss:
            print("\nValidation loss decreased from {} to {}, saving model".format(self.best_loss, val_loss))
            self.model_to_save.save_weights(self.path, overwrite=True)
            self.best_loss = val_loss
 
model_parallel.fit(..., callbacks=[CustomModelCheckpoint(model, '/path/to/save/model.h5')])

Even so, if the model is still too large, you will need the following approach: save the weights in npy format instead of hdf5:

# save the weights (e.g. inside the callback above)
weight = self.model_to_save.get_weights()
np.save(self.path + '.npy', weight)
# load the weights back into a freshly built model
weight = np.load(load_path)
model.set_weights(weight)

(5) Compiling the model. For an ordinary network structure there is no problem; compile it as shown earlier, e.g. model.compile(optimizer=Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy']). However, for a multi-task network such as Faster-RCNN, there are several output branches and hence several losses; the branches are usually given names when the network is defined, and at compile time you simply refer to the names of the different branch layers, like this:

model.compile(optimizer=optimizer, 
              loss={'main_output': jaccard_distance_loss, 'aux_output': 'binary_crossentropy'},
              metrics={'main_output': jaccard_distance_loss, 'aux_output': 'acc'},
              loss_weights={'main_output': 1., 'aux_output': 0.5})
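
The snippets here assume a custom jaccard_distance_loss function, which is not defined in this post. One common definition (a sketch with a smoothing constant of 100, not necessarily the exact version used by the original author) looks like this:

from keras import backend as K

def jaccard_distance_loss(y_true, y_pred, smooth=100):
    # soft Jaccard (IoU) distance: 1 - intersection/union, smoothed to avoid division by zero
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    union = K.sum(K.abs(y_true), axis=-1) + K.sum(K.abs(y_pred), axis=-1) - intersection
    jac = (intersection + smooth) / (union + smooth)
    return (1 - jac) * smooth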

Here main_output and aux_output are the manually defined layer names. However, after wrapping the model with keras.utils.training_utils.multi_gpu_model(), the names are automatically replaced with defaults such as concatenate_1, concatenate_2, and so on. You therefore need to call model.summary() first to print the network structure, figure out which output corresponds to which branch, and then recompile the network, like this:

from keras.optimizers import Adam, RMSprop, SGD
model.compile(optimizer=RMSprop(lr=0.045, rho=0.9, epsilon=1.0), 
              loss={'concatenate_1': jaccard_distance_loss, 'concatenate_2': 'binary_crossentropy'},
              metrics={'concatenate_1': jaccard_distance_loss, 'concatenate_2': 'acc'},
              loss_weights={'concatenate_1': 1., 'concatenate_2': 0.5})
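
If you would rather not read the summary by eye, the output layer names that the compile dictionaries must use can also be printed directly (a small sketch; it assumes the multi-output model and the wrapping shown above, and the exact names depend on your Keras version):

parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.summary()                # full structure, as recommended above
print(parallel_model.output_names)      # e.g. ['concatenate_1', 'concatenate_2']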

References

https://blog.csdn.net/u010122972/article/details/84784245 

https://blog.csdn.net/xiewenbo/article/details/79578196

https://www.jianshu.com/p/db0ba022936f

Official docs (English): https://keras.io/utils/#multi_gpu_model

Official docs (Chinese): https://keras.io/zh/utils/
