System environment:
OS: Ubuntu 18.04
CUDA: 10.1
TensorFlow: 2.1
cuDNN: 7.6.5
TensorRT: 6.0.15 (TF 2.1 supports TensorRT 6.0)
GPU: RTX 2080 (8 GB) × 2
When running a newer version of TensorFlow (2.1 requires CUDA 10.1, while 2.0 requires CUDA 10.0), the following error occurred (code to reproduce the error: https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py):
2020-01-16 21:49:19.892263: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-01-16 21:49:19.897303: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-01-16 21:49:19.897396: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d_1/convolution}}]]
From the discussions in the TensorFlow repository's issues, the problem lies in GPU memory allocation on RTX 2070/2080 cards. Following the workaround suggested in the issues, add the following code at the beginning of the program:
import tensorflow as tf

# gpus = tf.config.experimental.list_physical_devices('GPU')
gpus = tf.config.list_physical_devices('GPU')  # in TF 2.1 this function is no longer experimental
print(gpus)  # earlier we restricted the program to GPU 1 only (indices start at 0; this machine has two RTX 2080 cards)
tf.config.experimental.set_memory_growth(gpus[0], True)  # gpus then actually contains only one element
However, in my own environment a different error appeared:
ValueError: Memory growth cannot differ between GPU devices
The message suggests a conflict between the GPU devices, so I tried using only a single GPU:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # expose only the second GPU (index 1) to TensorFlow; must run before TF initializes CUDA
This resolved the error.
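For reference, another workaround discussed in the same issues keeps both cards usable: enable memory growth uniformly on every visible GPU before any of them is initialized. TensorFlow requires this setting to be identical across devices, which is why setting it on only one of two GPUs raised the ValueError above. This is a minimal sketch, assuming TensorFlow 2.1; the import is guarded so the snippet degrades gracefully when TensorFlow is not installed:

```python
# Sketch: enable memory growth on ALL visible GPUs, not just gpus[0]
try:
    import tensorflow as tf
except ImportError:  # TensorFlow not installed in this environment
    tf = None

if tf is not None:
    gpus = tf.config.list_physical_devices('GPU')
    for gpu in gpus:
        # Must be called before the GPUs are initialized, and on every device,
        # otherwise TF raises "Memory growth cannot differ between GPU devices"
        tf.config.experimental.set_memory_growth(gpu, True)
```

With this in place there is no need to hide a GPU via CUDA_VISIBLE_DEVICES, so a MirroredStrategy across both RTX 2080s remains possible.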