
Install Tensorflow-gpu 2.4.0 with Cuda 11.0 and CuDnn 8 Using Anaconda

Photo by Christian Wiediger on Unsplash

Have you been frustrated installing TensorFlow GPU with CUDA and all the rest? If so, this blog is for you: here you'll find an easy way to install TensorFlow GPU with the latest versions. CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.

The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. This CUDA Toolkit includes GPU-accelerated libraries and the CUDA runtime for the Conda ecosystem. For the full CUDA Toolkit with a compiler and development tools, visit CUDA Toolkit 11.6 Update 2 Downloads | NVIDIA Developer

License Agreements: The packages are governed by the CUDA Toolkit End User License Agreement (EULA). By downloading and using the packages, you accept the terms and conditions of the CUDA EULA (EULA :: CUDA Toolkit Documentation).

Here is the version list of all the libraries:

tensorflow-gpu==2.4.0

cudatoolkit==11.0

cuDNN==8

python==3.7 (or later)

We'll be following these steps to install tensorflow-gpu version 2.4 successfully.
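Before starting, it helps to confirm that your NVIDIA driver is recent enough for CUDA 11.0 (this assumes the driver itself is already installed). Run the command below; the CUDA version reported in the header should be 11.0 or higher,

nvidia-smi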

1. First of all, download the CUDA 11.0 compatible cuDNN version 8 from NVIDIA's official website here. Then extract it and keep it aside; the files should look like this,

The downloaded cuDNN 8 archive should contain these files.
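For reference, the extracted Windows archive typically contains a cuda folder laid out roughly like this (the exact DLL names vary a little between cuDNN 8.x releases, so treat this as a sketch rather than an exact listing),

cuda\bin\cudnn64_8.dll (plus cudnn_ops_infer64_8.dll, cudnn_cnn_infer64_8.dll and similar DLLs)
cuda\include\cudnn.h (and the other cudnn_*.h headers)
cuda\lib\x64\cudnn.lib (and the matching .lib files)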

2. Create a new Conda environment with python 3.7 or later,

conda create -n myenv python=3.7

Run the above command to create a new environment with Python 3.7.
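Then activate it and double-check the interpreter version before moving on (myenv is just the example name from the command above),

conda activate myenv
python --version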

3. Here comes the main part: now we need to install the CUDA toolkit. You can download it from NVIDIA's official website, or install it directly from the Anaconda prompt in two steps,

conda activate <env>
conda install cudatoolkit

Official Conda website

Just running the above commands will install CUDA 11.0 within the environment and make us happy.
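If conda happens to resolve a different release, you can pin the version explicitly and then check what actually landed in the environment; this is a cautious extra step, not something the install strictly requires,

conda install cudatoolkit=11.0
conda list cudatoolkit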

4. TensorFlow is an open-source software library for high-performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.

Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains. Now it's time to install TensorFlow; the version we want is 2.4, and we don't need to install the plain tensorflow package separately, because tensorflow-gpu includes everything. Python makes it even easier for us,

pip install tensorflow-gpu==2.4.0

The above command installs the TensorFlow GPU build along with its dependencies such as tensorflow-estimator and the TensorFlow base packages. Don't use conda here, because it would pull in CUDA 10.2 and cuDNN 7 as dependencies, which may conflict with the new versions we just installed.

The majority of bugs, particularly in ML, come from version conflicts; they really are the worst thing to debug.
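A quick way to confirm that the expected release was installed is to ask pip for the package metadata; the Version field should read 2.4.0,

pip show tensorflow-gpu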

5. Now copy all the files from the bin folder of the downloaded cuDNN 8 archive and paste them into the bin folder of the conda environment. You can usually find the path under your user folder on the C: drive,

C:\Users\<name>\anaconda3\envs\<env name>\Library\bin

Paste the DLL files here, and that's it, you made it! Now you are ready for the GPU revolution.
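If you prefer to do the copy from the command line, something like the following should work in a Command Prompt; the source path is only a placeholder for wherever you extracted cuDNN, and the destination is the same Library\bin path shown above,

copy "C:\path\to\extracted\cuda\bin\*.dll" "C:\Users\<name>\anaconda3\envs\<env name>\Library\bin"

Finally, a one-line sanity check that TensorFlow imports and actually sees the GPU; it should print 2.4.0 followed by a non-empty list of GPU devices,

python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"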

Though the price of a decent GPU is still high, you can use an online cloud platform such as Google Colaboratory to run training; those GPUs are usually faster than NVIDIA's MX series. If you have a recent GPU, like a GeForce RTX 3060 Ti or a Titan-series card, you can use the steps above to put it to work.

Actually, this is my first blog, and I'm excited to get feedback from you all; follow me on LinkedIn and GitHub to collaborate with me. See you in the blog,

Thank you,

Suriya
