1. Introduction
WSL (Windows Subsystem for Linux) lets you run a Linux environment on a Windows machine without a separate virtual machine or dual boot. It is designed to give developers who want to use both Windows and Linux a seamless, efficient experience. After working through configuration tutorials from various forums and the official docs, I have distilled the simplest and most reliable approach I found, with no need for the tedious and risky "configure from root" route.
Prerequisites:
- Host system: Windows 10/11 Pro
- An NVIDIA GPU that supports CUDA
- The NVIDIA driver already installed on the host (grab it via GeForce Experience)
2. Setup
2.1 Installing WSL
- Run Windows PowerShell as administrator and enter, one at a time:
wsl --list --online #list the WSL distros available for installation
wsl --install -d Ubuntu-20.04 #install <Ubuntu-20.04>; replace with the distro you want
- Wait for the installation to finish. You will then be asked to create a WSL username and password, after which the distro activates automatically. The following steps are optional:
Press Ctrl+D to exit the subsystem, then continue in PowerShell:
wsl -l -v #list the WSL distros currently on the system (you should see the Ubuntu you just installed)
wsl --setdefault Ubuntu-20.04 #set the default distro, so plain wsl drops you straight into Ubuntu-20.04
wsl #enter Ubuntu again
- (optional) Install Windows Terminal from the Microsoft Store to manage multiple consoles.
2.2 Installing Miniconda
If the NVIDIA driver is already installed on Windows, you do not need to install any driver inside Linux; you can verify this with nvidia-smi.
- Go to the Miniconda website, choose the Linux 64-Bit (x86) Installer (1007.9M), and download it to
C:/Users/Administrator/Downloads
- In PowerShell, enter
wsl
(or wsl -d <distro_name>)
to enter Ubuntu. Run cd ./Downloads, then ls to locate the file you just downloaded; mine was Anaconda3-2024.06-1-Linux-x86_64.sh. Continue with:
bash Anaconda3-2024.06-1-Linux-x86_64.sh -b -u -p ~/miniconda
(-b runs the installer in batch/silent mode, -u updates an existing installation, -p sets the install prefix.)
- Installation complete.
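Because the installer runs in batch mode (-b), it does not touch your shell startup files, so a fresh shell may not find the conda command. A minimal sketch of the usual follow-up, assuming the ~/miniconda prefix used above:

```shell
# Make conda available in the current shell (path matches the -p ~/miniconda prefix above)
source ~/miniconda/bin/activate
# Write the conda initialization block into ~/.bashrc so future shells find conda
conda init bash
# Sanity check
conda --version
```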
2.3 Creating a virtual environment
If you are already familiar with Anaconda, the following needs no introduction; I installed the latest version of everything.
conda create -n python3.9 python==3.9
conda activate python3.9
conda install cudatoolkit cudnn tensorflow[and-gpu]
Verification: start python and enter:
import tensorflow as tf
tf.test.is_gpu_available()
#or
tf.config.list_physical_devices('GPU')
My output:
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2024-09-09 23:44:05.265418: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-09 23:44:05.317943: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-09 23:44:05.332891: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-09 23:44:05.431682: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-09 23:44:06.234037: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
>>> tf.test.is_gpu_available()
WARNING:tensorflow:From <stdin>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1725896789.881820 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
I0000 00:00:1725896789.999519 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
I0000 00:00:1725896789.999730 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
I0000 00:00:1725896790.106736 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
I0000 00:00:1725896790.106951 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-09 23:46:30.106977: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2112] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.
I0000 00:00:1725896790.107247 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-09 23:46:30.107619: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /device:GPU:0 with 21458 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:01:00.0, compute capability: 8.9
True
>>> tf.config.list_physical_devices('GPU')
I0000 00:00:1725896796.844958 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
I0000 00:00:1725896796.845343 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
I0000 00:00:1725896796.845537 1035 cuda_executor.cc:1001] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
2.4 Linking PyCharm to the WSL interpreter
- Open PyCharm Professional.
- Add a Python interpreter, choose WSL on the left (PyCharm auto-detects Ubuntu), and set the interpreter path to
/home/<usr>/miniconda/envs/python3.9/bin/python3
replacing <usr>
with your Ubuntu username, as shown in the screenshot below.
Click OK, and PyCharm automatically creates a mapping from the local project to a Linux path, so data you read and write on Windows syncs seamlessly into the WSL virtual storage. You can then edit and run a Linux project locally on Windows.
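The path mapping PyCharm relies on is the standard WSL one: each Windows drive appears inside WSL under /mnt/<drive letter>. A minimal sketch of that mapping, where win_to_wsl is a hypothetical helper written for illustration (not part of any library):

```python
def win_to_wsl(win_path: str) -> str:
    """Map a Windows path like C:\\Users\\me\\proj to its WSL view /mnt/c/Users/me/proj."""
    drive, _, rest = win_path.partition(":\\")
    return "/mnt/" + drive.lower() + "/" + rest.replace("\\", "/")

print(win_to_wsl("C:\\Users\\Administrator\\Downloads"))
# /mnt/c/Users/Administrator/Downloads
```

In the other direction, WSL files are reachable from Windows under the \\wsl$\<distro_name>\ share, which is how PyCharm reads the interpreter at a /home/... path.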
3. Summary
Unlike other guides online, this method does not require configuring CUDA, the compiler, and so on for WSL from root (I tried many version combinations that way and all of them failed); instead, conda provides one-stop configuration. I have only tried the latest versions of all packages, so feel free to share other working cudatoolkit + cudnn + tensorflow version combinations in the comments.