Installing TensorFlow
Installing the Docker version of TensorFlow is simple. Following the official guide, setting up a GPU-enabled Docker install takes only three steps (a rough command sketch follows the list):
- Install Docker on the local host.
- To enable GPU support on Linux, install nvidia-docker.
- Start a TensorFlow Docker container.
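A rough sketch of those three steps on an Ubuntu host, assuming the 2019-era Docker convenience script and the nvidia-docker2 package (check the official Docker and nvidia-docker documentation for the commands that match your distribution):

# 1. Install Docker with the official convenience script
curl -fsSL https://get.docker.com | sudo sh

# 2. Enable GPU support: add the nvidia-docker apt repository and install nvidia-docker2
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker

# 3. Pull a TensorFlow image and start a container (CPU and GPU examples below)
docker pull tensorflow/tensorflow:latest-gpu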
Example using a CPU-only image:
docker run -it --rm tensorflow/tensorflow \
python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
GPU support
Check whether a GPU is available:
lspci | grep -i nvidia
Verify the nvidia-docker installation:
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
Using a GPU-enabled image
Download and run a GPU-enabled TensorFlow image:
docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu \
python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
Pulling the image may take a few minutes, after which the installation is complete.
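Once the image runs, the same container can execute scripts from the host. A minimal sketch, assuming the script sits in the current working directory on the host (the bind mount and the script name my_script.py are illustrative, not from the original post):

# Mount the current directory into the container and run a script with GPU support
docker run --runtime=nvidia -it --rm -v $PWD:/tmp -w /tmp \
    tensorflow/tensorflow:latest-gpu python ./my_script.py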
Problems encountered
Insufficient GPU memory
Error 1
2019-03-12 07:19:15.563496: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-12 07:19:17.012829: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-03-12 07:19:18.953407: W tensorflow/compiler/xla/service/platform_util.cc:256] unable to create StreamExecutor for CUDA:0: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 12788498432
2019-03-12 07:19:18.953905: I tensorflow/compiler/xla/service/service.cc:162] XLA service 0x510ff40 executing computations on platform CUDA. Devices:
2019-03-12 07:19:18.954126: I tensorflow/compiler/xla/service/service.cc:169] StreamExecutor device (0): TITAN Xp, Compute Capability 6.1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7