Stable Diffusion fails to start: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Scenario:

Launching Stable Diffusion WebUI via webui-user.bat fails with: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Root cause analysis

After installing the sd-wav2lip-uhq extension, SD no longer started correctly (at first I even suspected a loose connection from reseating the graphics card that day). Digging in, the real cause emerged: SD automatically manages extension dependencies in a virtual environment (the venv folder under the SD directory), and when this extension installed its requirements it pulled in packages that are not compatible with CUDA 12.1 and above.

Record of attempted workarounds (this part can be skipped):

--skip-torch-cuda-test

Added to the command line in webui-user.bat: set COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test
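For reference, the relevant part of webui-user.bat looked roughly like this at that point (a sketch assuming the stock file layout; the skip flag only silences the startup check, it does not restore GPU use):

```shell
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

call webui.bat
```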

But this did not solve the problem: the venv still contained the CPU-only build of PyTorch, so image generation kept running on the CPU.
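To confirm which build the venv is actually using, a quick check can help (a minimal sketch of my own, not part of the WebUI; run it with the venv's python.exe):

```python
def is_cuda_build(version: str) -> bool:
    """True if a torch version string names a CUDA wheel, e.g. '2.0.1+cu118' (not '2.0.1+cpu')."""
    local = version.split("+", 1)[1] if "+" in version else ""
    return local.startswith("cu")

def gpu_status() -> str:
    """Report whether the installed torch can actually reach the GPU."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this environment"
    if not is_cuda_build(torch.__version__):
        return f"CPU-only build ({torch.__version__}); reinstall a +cuXXX wheel"
    if not torch.cuda.is_available():
        return f"CUDA build ({torch.__version__}) but no GPU visible; check the driver"
    return f"OK: {torch.__version__} sees {torch.cuda.get_device_name(0)}"

if __name__ == "__main__":
    print(gpu_status())
```

On the broken setup described above, this reports a CPU-only build even though the machine has a working GPU.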

Uninstall PyTorch and install the build matching your CUDA version

pip3 uninstall -y torch torchvision torchaudio
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118


Delete the venv folder under the StableDiffusion directory and launch again

The run below failed to download a dependency; switching the pip mirror source (as described in another article) fixed that part:
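One way to switch the mirror for the venv (a sketch; the Aliyun index is the one the later, successful log shows in use, after the Tsinghua mirror could not serve tb-nightly):

```shell
venv\Scripts\python.exe -m pip config set global.index-url https://mirrors.aliyun.com/pypi/simple
```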

venv "D:\StableDiffusion\stable-diffusion-webui-master\venv\Scripts\Python.exe"
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: <none>
Installing requirements for CodeFormer
Traceback (most recent call last):
  File "D:\StableDiffusion\stable-diffusion-webui-master\launch.py", line 48, in <module>
    main()
  File "D:\StableDiffusion\stable-diffusion-webui-master\launch.py", line 39, in main
    prepare_environment()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\launch_utils.py", line 417, in prepare_environment
    run_pip(f"install -r \"{os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}\"", "requirements for CodeFormer")
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\launch_utils.py", line 144, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\launch_utils.py", line 116, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements for CodeFormer.
Command: "D:\StableDiffusion\stable-diffusion-webui-master\venv\Scripts\python.exe" -m pip install -r "D:\StableDiffusion\stable-diffusion-webui-master\repositories\CodeFormer\requirements.txt" --prefer-binary
Error code: 1
stdout: Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting addict
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/6a/00/b08f23b7d7e1e14ce01419a467b583edbb93c6cdb8654e54a9cc579cd61f/addict-2.4.0-py3-none-any.whl (3.8 kB)
Collecting future
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/da/71/ae30dadffc90b9006d77af76b393cb9dfbfc9629f339fc1574a1c52e6806/future-1.0.0-py3-none-any.whl (491 kB)
Collecting lmdb
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/66/05/21a93eed7ff800f7c3b0538eb12bde89660a44693624cd0e49141beccb8b/lmdb-1.4.1-cp310-cp310-win_amd64.whl (100 kB)
Requirement already satisfied: numpy in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from -r D:\StableDiffusion\stable-diffusion-webui-master\repositories\CodeFormer\requirements.txt (line 4)) (1.26.4)
Collecting opencv-python
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/c7/ec/9dabb6a9abfdebb3c45b0cc52dec901caafef2b2c7e7d6a839ed86d81e91/opencv_python-4.9.0.80-cp37-abi3-win_amd64.whl (38.6 MB)
Requirement already satisfied: Pillow in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from -r D:\StableDiffusion\stable-diffusion-webui-master\repositories\CodeFormer\requirements.txt (line 6)) (10.2.0)
Requirement already satisfied: pyyaml in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from -r D:\StableDiffusion\stable-diffusion-webui-master\repositories\CodeFormer\requirements.txt (line 7)) (6.0.1)
Requirement already satisfied: requests in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from -r D:\StableDiffusion\stable-diffusion-webui-master\repositories\CodeFormer\requirements.txt (line 8)) (2.31.0)
Collecting scikit-image
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/86/f0/18895318109f9b508f2310f136922e455a453550826a8240b412063c2528/scikit_image-0.22.0-cp310-cp310-win_amd64.whl (24.5 MB)
Collecting scipy
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/fd/a7/5f829b100d208c85163aecba93faf01d088d944fc91585338751d812f1e4/scipy-1.12.0-cp310-cp310-win_amd64.whl (46.2 MB)

stderr: ERROR: Ignored the following versions that require a different python version: 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10
ERROR: Could not find a version that satisfies the requirement tb-nightly (from versions: none)
ERROR: No matching distribution found for tb-nightly

[notice] A new release of pip available: 22.2.1 -> 24.0
[notice] To update, run: D:\StableDiffusion\stable-diffusion-webui-master\venv\Scripts\python.exe -m pip install --upgrade pip

torch still errored at this point:

venv "D:\StableDiffusion\stable-diffusion-webui-master\venv\Scripts\Python.exe"
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: <none>
Installing requirements for CodeFormer
Installing requirements
Looking in indexes: https://mirrors.aliyun.com/pypi/simple
Collecting ultralytics>=8.1.18
  Downloading https://mirrors.aliyun.com/pypi/packages/6b/05/f72e86377a0a412fc0a0a03bc583a03cd3fede2d620e3a849fa826dd4ef5/ultralytics-8.1.33-py3-none-any.whl (723 kB)
     -------------------------------------- 723.1/723.1 kB 7.7 MB/s eta 0:00:00
Requirement already satisfied: pandas>=1.1.4 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (2.2.1)
Requirement already satisfied: psutil in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (5.9.5)
Requirement already satisfied: pyyaml>=5.3.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (6.0.1)
Collecting seaborn>=0.11.0
  Downloading https://mirrors.aliyun.com/pypi/packages/83/11/00d3c3dfc25ad54e731d91449895a79e4bf2384dc3ac01809010ba88f6d5/seaborn-0.13.2-py3-none-any.whl (294 kB)
     ------------------------------------- 294.9/294.9 kB 19.0 MB/s eta 0:00:00
Requirement already satisfied: pillow>=7.1.2 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (9.5.0)
Requirement already satisfied: opencv-python>=4.6.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (4.9.0.80)
Requirement already satisfied: scipy>=1.4.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (1.12.0)
Requirement already satisfied: torch>=1.8.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (2.0.1+cu118)
Requirement already satisfied: torchvision>=0.9.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (0.15.2+cu118)
Collecting py-cpuinfo
  Downloading https://mirrors.aliyun.com/pypi/packages/e0/a9/023730ba63db1e494a271cb018dcd361bd2c917ba7004c3e49d5daf795a2/py_cpuinfo-9.0.0-py3-none-any.whl (22 kB)
Collecting thop>=0.1.1
  Downloading https://mirrors.aliyun.com/pypi/packages/bb/0f/72beeab4ff5221dc47127c80f8834b4bcd0cb36f6ba91c0b1d04a1233403/thop-0.1.1.post2209072238-py3-none-any.whl (15 kB)
Requirement already satisfied: tqdm>=4.64.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (4.66.2)
Requirement already satisfied: requests>=2.23.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (2.31.0)
Requirement already satisfied: matplotlib>=3.3.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from ultralytics>=8.1.18) (3.8.3)
Requirement already satisfied: fonttools>=4.22.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (4.50.0)
Requirement already satisfied: packaging>=20.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (24.0)
Requirement already satisfied: python-dateutil>=2.7 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (2.9.0.post0)
Requirement already satisfied: pyparsing>=2.3.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (3.1.2)
Requirement already satisfied: numpy<2,>=1.21 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (1.23.5)
Requirement already satisfied: contourpy>=1.0.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (1.2.0)
Requirement already satisfied: cycler>=0.10 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (0.12.1)
Requirement already satisfied: kiwisolver>=1.3.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib>=3.3.0->ultralytics>=8.1.18) (1.4.5)
Requirement already satisfied: tzdata>=2022.7 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from pandas>=1.1.4->ultralytics>=8.1.18) (2024.1)
Requirement already satisfied: pytz>=2020.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from pandas>=1.1.4->ultralytics>=8.1.18) (2024.1)
Requirement already satisfied: urllib3<3,>=1.21.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from requests>=2.23.0->ultralytics>=8.1.18) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from requests>=2.23.0->ultralytics>=8.1.18) (2024.2.2)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from requests>=2.23.0->ultralytics>=8.1.18) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from requests>=2.23.0->ultralytics>=8.1.18) (3.6)
Requirement already satisfied: jinja2 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from torch>=1.8.0->ultralytics>=8.1.18) (3.1.3)
Requirement already satisfied: filelock in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from torch>=1.8.0->ultralytics>=8.1.18) (3.13.1)
Requirement already satisfied: typing-extensions in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from torch>=1.8.0->ultralytics>=8.1.18) (4.10.0)
Requirement already satisfied: networkx in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from torch>=1.8.0->ultralytics>=8.1.18) (3.2.1)
Requirement already satisfied: sympy in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from torch>=1.8.0->ultralytics>=8.1.18) (1.12)
Requirement already satisfied: colorama in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from tqdm>=4.64.0->ultralytics>=8.1.18) (0.4.6)
Requirement already satisfied: six>=1.5 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from python-dateutil>=2.7->matplotlib>=3.3.0->ultralytics>=8.1.18) (1.16.0)
Requirement already satisfied: MarkupSafe>=2.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from jinja2->torch>=1.8.0->ultralytics>=8.1.18) (2.1.5)
Requirement already satisfied: mpmath>=0.19 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from sympy->torch>=1.8.0->ultralytics>=8.1.18) (1.3.0)
Installing collected packages: py-cpuinfo, thop, seaborn, ultralytics
Successfully installed py-cpuinfo-9.0.0 seaborn-0.13.2 thop-0.1.1.post2209072238 ultralytics-8.1.33
Looking in indexes: https://mirrors.aliyun.com/pypi/simple
Collecting mediapipe>=0.10.9
  Downloading https://mirrors.aliyun.com/pypi/packages/12/50/9c24e158350d3f93be669db291fb452f21a25d874c94c5758374be82fff1/mediapipe-0.10.11-cp310-cp310-win_amd64.whl (50.8 MB)
     ---------------------------------------- 50.8/50.8 MB 8.6 MB/s eta 0:00:00
Requirement already satisfied: numpy in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from mediapipe>=0.10.9) (1.23.5)
Collecting jax
  Downloading https://mirrors.aliyun.com/pypi/packages/ad/29/37cc2d58775917e6da532ef59cd3a66133d4de73fce1c16852e8475e5411/jax-0.4.25-py3-none-any.whl (1.8 MB)
     ---------------------------------------- 1.8/1.8 MB 9.6 MB/s eta 0:00:00
Collecting opencv-contrib-python
  Downloading https://mirrors.aliyun.com/pypi/packages/aa/2e/576ac47f21d555b459ca837bb3fb937e50339b8fbfd294945ea2f5290416/opencv_contrib_python-4.9.0.80-cp37-abi3-win_amd64.whl (45.3 MB)
     ---------------------------------------- 45.3/45.3 MB 8.3 MB/s eta 0:00:00
Requirement already satisfied: protobuf<4,>=3.11 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from mediapipe>=0.10.9) (3.20.0)
Collecting sounddevice>=0.4.4
  Downloading https://mirrors.aliyun.com/pypi/packages/39/ae/5e84220bfca4256e4ca2a62a174636089ab6ff671b5f9ddd7e8238587acd/sounddevice-0.4.6-py3-none-win_amd64.whl (199 kB)
     ------------------------------------- 199.7/199.7 kB 11.8 MB/s eta 0:00:00
Requirement already satisfied: attrs>=19.1.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from mediapipe>=0.10.9) (23.2.0)
Collecting flatbuffers>=2.0
  Downloading https://mirrors.aliyun.com/pypi/packages/bf/45/c961e3cb6ddad76b325c163d730562bb6deb1ace5acbed0306f5fbefb90e/flatbuffers-24.3.7-py2.py3-none-any.whl (26 kB)
Requirement already satisfied: matplotlib in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from mediapipe>=0.10.9) (3.8.3)
Requirement already satisfied: absl-py in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from mediapipe>=0.10.9) (2.1.0)
Collecting CFFI>=1.0
  Downloading https://mirrors.aliyun.com/pypi/packages/be/3e/0b197d1bfbf386a90786b251dbf2634a15f2ea3d4e4070e99c7d1c7689cf/cffi-1.16.0-cp310-cp310-win_amd64.whl (181 kB)
     ------------------------------------- 181.6/181.6 kB 10.7 MB/s eta 0:00:00
Collecting ml-dtypes>=0.2.0
  Downloading https://mirrors.aliyun.com/pypi/packages/30/a5/0480b23b2213c746cd874894bc485eb49310d7045159a36c7c03cab729ce/ml_dtypes-0.3.2-cp310-cp310-win_amd64.whl (127 kB)
     -------------------------------------- 127.8/127.8 kB 7.3 MB/s eta 0:00:00
Collecting opt-einsum
  Downloading https://mirrors.aliyun.com/pypi/packages/bc/19/404708a7e54ad2798907210462fd950c3442ea51acc8790f3da48d2bee8b/opt_einsum-3.3.0-py3-none-any.whl (65 kB)
     ---------------------------------------- 65.5/65.5 kB 3.7 MB/s eta 0:00:00
Requirement already satisfied: scipy>=1.9 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from jax->mediapipe>=0.10.9) (1.12.0)
Requirement already satisfied: python-dateutil>=2.7 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (2.9.0.post0)
Requirement already satisfied: contourpy>=1.0.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (1.2.0)
Requirement already satisfied: cycler>=0.10 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (4.50.0)
Requirement already satisfied: pillow>=8 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (9.5.0)
Requirement already satisfied: pyparsing>=2.3.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (3.1.2)
Requirement already satisfied: kiwisolver>=1.3.1 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (1.4.5)
Requirement already satisfied: packaging>=20.0 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from matplotlib->mediapipe>=0.10.9) (24.0)
Collecting pycparser
  Downloading https://mirrors.aliyun.com/pypi/packages/62/d5/5f610ebe421e85889f2e55e33b7f9a6795bd982198517d912eb1c76e1a53/pycparser-2.21-py2.py3-none-any.whl (118 kB)
     -------------------------------------- 118.7/118.7 kB 6.8 MB/s eta 0:00:00
Requirement already satisfied: six>=1.5 in d:\stablediffusion\stable-diffusion-webui-master\venv\lib\site-packages (from python-dateutil>=2.7->matplotlib->mediapipe>=0.10.9) (1.16.0)
Installing collected packages: flatbuffers, pycparser, opt-einsum, opencv-contrib-python, ml-dtypes, jax, CFFI, sounddevice, mediapipe
Successfully installed CFFI-1.16.0 flatbuffers-24.3.7 jax-0.4.25 mediapipe-0.10.11 ml-dtypes-0.3.2 opencv-contrib-python-4.9.0.80 opt-einsum-3.3.0 pycparser-2.21 sounddevice-0.4.6
Looking in indexes: https://mirrors.aliyun.com/pypi/simple
Collecting rich>=13.0.0
  Downloading https://mirrors.aliyun.com/pypi/packages/87/67/a37f6214d0e9fe57f6ae54b2956d550ca8365857f42a1ce0392bb21d9410/rich-13.7.1-py3-none-any.whl (240 kB)
     -------------------------------------- 240.7/240.7 kB 3.7 MB/s eta 0:00:00
Collecting markdown-it-py>=2.2.0
  Downloading https://mirrors.aliyun.com/pypi/packages/42/d7/1ec15b46af6af88f19b8e5ffea08fa375d433c998b8a7639e76935c14f1f/markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
     ---------------------------------------- 87.5/87.5 kB 4.8 MB/s eta 0:00:00
Collecting pygments<3.0.0,>=2.13.0
  Downloading https://mirrors.aliyun.com/pypi/packages/97/9c/372fef8377a6e340b1704768d20daaded98bf13282b5327beb2e2fe2c7ef/pygments-2.17.2-py3-none-any.whl (1.2 MB)
     ---------------------------------------- 1.2/1.2 MB 8.3 MB/s eta 0:00:00
Collecting mdurl~=0.1
  Downloading https://mirrors.aliyun.com/pypi/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Installing collected packages: pygments, mdurl, markdown-it-py, rich
Successfully installed markdown-it-py-3.0.0 mdurl-0.1.2 pygments-2.17.2 rich-13.7.1
Installing Deforum requirement: numexpr
Installing Deforum requirement: av
Installing Deforum requirement: pims
Installing Deforum requirement: imageio_ffmpeg
Installing requirements for Ebsynth Utility
current transparent-background 1.2.12
Installing requirements for Ebsynth Utility
Installing requirements for Ebsynth Utility
Installing wav2lip_uhq requirement: imutils
Installing wav2lip_uhq requirement: dlib-bin
Installing wav2lip_uhq requirement: librosa==0.10.0.post2
Installing wav2lip_uhq requirement: git+https://github.com/suno-ai/bark.git
Installing wav2lip_uhq requirement: insightface==0.7.3
Installing wav2lip_uhq requirement: onnx==1.14.0
Installing wav2lip_uhq requirement: onnxruntime==1.15.0
Installing wav2lip_uhq requirement: onnxruntime-gpu==1.15.0
Installing wav2lip_uhq requirement: opencv-python>=4.8.0
Installing wav2lip_uhq requirement: ifnude
Installing sd-webui-controlnet requirement: fvcore
Installing sd-webui-controlnet requirement: svglib
Installing sd-webui-controlnet requirement: handrefinerportable
Installing sd-webui-controlnet requirement: depth_anything
Installing sd-webui-infinite-image-browsing requirement: python-dotenv
Installing sd-webui-infinite-image-browsing requirement: pyfunctional
Installing requirements for Mov2mov
Installing requirements for ffmpeg
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
*** Error running preload() for D:\StableDiffusion\stable-diffusion-webui-master\extensions\stable-diffusion-webui-wd14-tagger-master\preload.py
    Traceback (most recent call last):
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\script_loading.py", line 26, in preload_extensions
        module = load_module(preload_script)
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\stable-diffusion-webui-wd14-tagger-master\preload.py", line 4, in <module>
        from modules.shared import models_path
    ImportError: cannot import name 'models_path' from partially initialized module 'modules.shared' (most likely due to a circular import) (D:\StableDiffusion\stable-diffusion-webui-master\modules\shared.py)

---
No module 'xformers'. Proceeding without it.
Style database not found: D:\StableDiffusion\stable-diffusion-webui-master\styles.csv
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
[-] ADetailer initialized. version: 24.3.0, num models: 13
*** Error loading script: ui.py
    Traceback (most recent call last):
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-wav2lip-uhq\scripts\ui.py", line 7, in <module>
        from scripts.bark.tts import TTS
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-wav2lip-uhq\scripts\bark\tts.py", line 5, in <module>
        from bark.generation import (
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\bark\__init__.py", line 1, in <module>
        from .api import generate_audio, text_to_semantic, semantic_to_waveform, save_as_prompt
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\bark\api.py", line 5, in <module>
        from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\bark\generation.py", line 6, in <module>
        from encodec import EncodecModel
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\encodec\__init__.py", line 12, in <module>
        from .model import EncodecModel
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\encodec\model.py", line 19, in <module>
        from .utils import _check_checksum, _linear_overlap_add, _get_checkpoint_url
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\encodec\utils.py", line 14, in <module>
        import torchaudio
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torchaudio\__init__.py", line 2, in <module>
        from . import _extension  # noqa  # usort: skip
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torchaudio\_extension\__init__.py", line 38, in <module>
        _load_lib("libtorchaudio")
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torchaudio\_extension\utils.py", line 60, in _load_lib
        torch.ops.load_library(path)
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\_ops.py", line 643, in load_library
        ctypes.CDLL(path)
      File "D:\Python\lib\ctypes\__init__.py", line 374, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: [WinError 127] 找不到指定的程序。 (The specified procedure could not be found.)

---
*** Error loading script: wav2lip_uhq.py
    Traceback (most recent call last):
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-wav2lip-uhq\scripts\wav2lip_uhq.py", line 11, in <module>
        init_wav2lip_uhq()
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-wav2lip-uhq\scripts\wav2lip_uhq.py", line 7, in init_wav2lip_uhq
        from ui import on_ui_tabs
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-wav2lip-uhq\scripts\ui.py", line 7, in <module>
        from scripts.bark.tts import TTS
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-wav2lip-uhq\scripts\bark\tts.py", line 5, in <module>
        from bark.generation import (
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\bark\__init__.py", line 1, in <module>
        from .api import generate_audio, text_to_semantic, semantic_to_waveform, save_as_prompt
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\bark\api.py", line 5, in <module>
        from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\bark\generation.py", line 6, in <module>
        from encodec import EncodecModel
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\encodec\__init__.py", line 12, in <module>
        from .model import EncodecModel
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\encodec\model.py", line 19, in <module>
        from .utils import _check_checksum, _linear_overlap_add, _get_checkpoint_url
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\encodec\utils.py", line 14, in <module>
        import torchaudio
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torchaudio\__init__.py", line 2, in <module>
        from . import _extension  # noqa  # usort: skip
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torchaudio\_extension\__init__.py", line 38, in <module>
        _load_lib("libtorchaudio")
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torchaudio\_extension\utils.py", line 60, in _load_lib
        torch.ops.load_library(path)
      File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\_ops.py", line 643, in load_library
        ctypes.CDLL(path)
      File "D:\Python\lib\ctypes\__init__.py", line 374, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: [WinError 127] 找不到指定的程序。 (The specified procedure could not be found.)

---
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 28/28 [00:00<00:00, 14004.35it/s]
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 28/28 [00:00<00:00, 13962.73it/s]
dirname:  D:\StableDiffusion\stable-diffusion-webui-master\localizations
localizations:  {'zh_CN': 'D:\\StableDiffusion\\stable-diffusion-webui-master\\extensions\\stable-diffusion-webui-localization-zh_CN-main\\localizations\\zh_CN.json'}
ControlNet preprocessor location: D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-webui-controlnet\annotator\downloads
2024-03-25 13:50:12,282 - ControlNet - INFO - ControlNet v1.1.441
2024-03-25 13:50:12,448 - ControlNet - INFO - ControlNet v1.1.441
Secret key loaded successfully.
*** Error loading script: m2m_ui.py
    Traceback (most recent call last):
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\sd-webui-mov2mov-master\scripts\m2m_ui.py", line 12, in <module>
        from modules.ui import paste_symbol, clear_prompt_symbol, extra_networks_symbol, apply_style_symbol, save_style_symbol, \
    ImportError: cannot import name 'create_seed_inputs' from 'modules.ui' (D:\StableDiffusion\stable-diffusion-webui-master\modules\ui.py)

---
*** Error loading script: tagger.py
    Traceback (most recent call last):
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "D:\StableDiffusion\stable-diffusion-webui-master\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\stable-diffusion-webui-wd14-tagger-master\scripts\tagger.py", line 5, in <module>
        from tagger.ui import on_ui_tabs
      File "D:\StableDiffusion\stable-diffusion-webui-master\extensions\stable-diffusion-webui-wd14-tagger-master\tagger\ui.py", line 10, in <module>
        from webui import wrap_gradio_gpu_call
    ImportError: cannot import name 'wrap_gradio_gpu_call' from 'webui' (D:\StableDiffusion\stable-diffusion-webui-master\webui.py)

---
Loading weights [876b4c7ba5] from D:\StableDiffusion\stable-diffusion-webui-master\models\Stable-diffusion\cetusMix_Whalefall2.safetensors
2024-03-25 13:50:12,760 - AnimateDiff - INFO - Injecting LCM to UI.
2024-03-25 13:50:13,672 - AnimateDiff - INFO - Hacking i2i-batch.
2024-03-25 13:50:13,731 - ControlNet - INFO - ControlNet UI callback registered.
*Deforum ControlNet support: enabled*
Creating model from config: D:\StableDiffusion\stable-diffusion-webui-master\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860
Loading VAE weights specified in settings: D:\StableDiffusion\stable-diffusion-webui-master\models\VAE\abyssorangemix2SFW_abyssorangemix2Sfw.pt
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "D:\Python\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "D:\Python\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\Python\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\shared_items.py", line 128, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\sd_models.py", line 531, in get_sd_model
    load_model()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\sd_models.py", line 681, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\sd_models.py", line 569, in get_empty_cond
    return sd_model.cond_stage_model([""])
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\sd_hijack_clip.py", line 273, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
    encoder_outputs = self.encoder(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
    layer_outputs = encoder_layer(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\extensions-builtin\Lora\networks.py", line 531, in network_LayerNorm_forward
    return originals.LayerNorm_forward(self, input)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
    return F.layer_norm(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'


Stable diffusion model failed to load
Exception in thread Thread-45 (load_model):
Traceback (most recent call last):
  File "D:\Python\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\Python\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\initialize.py", line 153, in load_model
    devices.first_time_calculation()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\devices.py", line 162, in first_time_calculation
    linear(x)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\extensions-builtin\Lora\networks.py", line 486, in network_Linear_forward
    return originals.Linear_forward(self, input)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

To create a public link, set `share=True` in `launch()`.
Startup time: 555.8s (prepare environment: 480.6s, import torch: 6.4s, import gradio: 2.2s, setup paths: 1.6s, initialize shared: 0.3s, other imports: 1.2s, setup codeformer: 0.2s, load scripts: 43.6s, create ui: 2.1s, gradio launch: 17.3s).
Traceback (most recent call last):
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\textual_inversion\textual_inversion.py", line 222, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\textual_inversion\textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
Traceback (most recent call last):
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\StableDiffusion\stable-diffusion-webui-master\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\textual_inversion\textual_inversion.py", line 222, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "D:\StableDiffusion\stable-diffusion-webui-master\modules\textual_inversion\textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'

Further errors came up during this process: the `"LayerNormKernelImpl" not implemented for 'Half'` and `"addmm_impl_cpu_" not implemented for 'Half'` tracebacks above are typical of a half-precision model being run on the CPU build of torch, which has no Half kernels for these ops.

Solution

1. Activate SD's virtual environment and reinstall torch, torchaudio, and torchvision

Make sure you have actually activated the virtual environment (venv) and that SD is not running, otherwise the uninstall will fail.

pip3 uninstall torch torchaudio torchvision


pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
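After the reinstall, a quick way to confirm which wheel is active is the local version suffix on `torch.__version__`: CUDA wheels carry `+cu118`/`+cu121`, CPU-only wheels carry `+cpu`. A minimal sketch of that check (hypothetical helper name, pure string parsing, no torch required):

```python
import re

def is_cuda_build(torch_version: str) -> bool:
    # CUDA wheels end in "+cu" followed by digits (e.g. "2.2.1+cu121");
    # CPU wheels end in "+cpu"; bare versions carry no local suffix.
    return re.search(r"\+cu\d+$", torch_version) is not None

print(is_cuda_build("2.0.0+cpu"))    # False: the CPU wheel that caused the problem
print(is_cuda_build("2.2.1+cu121"))  # True: a proper GPU build
```

If `torch.__version__` still ends in `+cpu` after this step, the venv you reinstalled into is not the one the WebUI is using.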

2. Edit launch_utils.py to pin the GPU build of torch

"D:\StableDiffusion\stable-diffusion-webui-master\modules\launch_utils.py"

    torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.2.1+cu121 torchvision==0.17.1+cu121 --extra-index-url https://download.pytorch.org/whl/cu121")
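A less invasive alternative: since launch_utils.py reads `os.environ.get('TORCH_COMMAND', ...)`, setting the `TORCH_COMMAND` environment variable (e.g. `set TORCH_COMMAND=...` in webui-user.bat) overrides the hard-coded default without editing the file. A small sketch of that precedence (the command strings below are illustrative):

```python
import os

def resolve_torch_command(default_cmd: str) -> str:
    # Same lookup launch_utils.py performs: the TORCH_COMMAND environment
    # variable, when set, wins over the hard-coded default.
    return os.environ.get("TORCH_COMMAND", default_cmd)

os.environ.pop("TORCH_COMMAND", None)
print(resolve_torch_command("pip install torch"))  # falls back to the default

os.environ["TORCH_COMMAND"] = ("pip install torch==2.2.1+cu121 "
                               "--extra-index-url https://download.pytorch.org/whl/cu121")
print(resolve_torch_command("pip install torch"))  # the override wins
```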

Commands for checking the CUDA and torch versions

Check the CUDA toolkit version with nvcc --version:

D:\StableDiffusion\stable-diffusion-webui-master>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
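Note that the `CUDA Version: 12.4` reported by nvidia-smi below is the highest CUDA runtime the driver supports, not an installed toolkit; a `+cuXXX` wheel should work as long as its CUDA version does not exceed that number. A rough sketch of the comparison (hypothetical helper, simple numeric version compare):

```python
def wheel_is_supported(wheel_cuda: str, driver_cuda: str) -> bool:
    """True if a +cuXXX wheel (e.g. "11.8" for cu118) is within the
    driver's reported CUDA capability (e.g. "12.4" from nvidia-smi)."""
    def as_tuple(version: str):
        return tuple(int(part) for part in version.split("."))
    return as_tuple(wheel_cuda) <= as_tuple(driver_cuda)

print(wheel_is_supported("11.8", "12.4"))  # True: a cu118 wheel runs on this driver
print(wheel_is_supported("12.1", "12.4"))  # True: cu121 also fits
```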

Check the NVIDIA driver with nvidia-smi to confirm the GPU is detected:

D:\StableDiffusion\stable-diffusion-webui-master>nvidia-smi
Mon Mar 25 12:31:55 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 551.86                 Driver Version: 551.86         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4070 Ti   WDDM  |   00000000:01:00.0  On |                  N/A |
|  0%   27C    P5             17W /  285W |    1356MiB /  12282MiB |     20%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      4392    C+G   ...ience\NVIDIA GeForce Experience.exe      N/A      |
|    0   N/A  N/A      6156    C+G   ...on\122.0.2365.92\msedgewebview2.exe      N/A      |
|    0   N/A  N/A      6844    C+G   ...2txyewy\StartMenuExperienceHost.exe      N/A      |
|    0   N/A  N/A     10048    C+G   C:\Program Files\tidalab\91办公.exe         N/A      |
|    0   N/A  N/A     10124    C+G   C:\Windows\explorer.exe                     N/A      |
|    0   N/A  N/A     10432    C+G   ...crosoft\Edge\Application\msedge.exe      N/A      |
|    0   N/A  N/A     11440    C+G   ...oration\NvContainer\nvcontainer.exe      N/A      |
|    0   N/A  N/A     12508    C+G   ...nt.CBS_cw5n1h2txyewy\SearchHost.exe      N/A      |
|    0   N/A  N/A     13864    C+G   ...CBS_cw5n1h2txyewy\TextInputHost.exe      N/A      |
|    0   N/A  N/A     15840    C+G   ...5n1h2txyewy\ShellExperienceHost.exe      N/A      |
|    0   N/A  N/A     16468    C+G   ...oogle\Chrome\Application\chrome.exe      N/A      |
|    0   N/A  N/A     17060    C+G   ...wekyb3d8bbwe\XboxGameBarWidgets.exe      N/A      |
|    0   N/A  N/A     17176    C+G   ...t.LockApp_cw5n1h2txyewy\LockApp.exe      N/A      |
|    0   N/A  N/A     17416    C+G   ... Synapse 3 Host\Razer Synapse 3.exe      N/A      |
|    0   N/A  N/A     18088    C+G   ...GeForce Experience\NVIDIA Share.exe      N/A      |
|    0   N/A  N/A     18236    C+G   ...__8wekyb3d8bbwe\Notepad\Notepad.exe      N/A      |
|    0   N/A  N/A     18404    C+G   ...__8wekyb3d8bbwe\WindowsTerminal.exe      N/A      |
|    0   N/A  N/A     19236    C+G   ...ekyb3d8bbwe\PhoneExperienceHost.exe      N/A      |
|    0   N/A  N/A     19544    C+G   ...on\wallpaper_engine\wallpaper64.exe      N/A      |
|    0   N/A  N/A     19976    C+G   ...B\system_tray\lghub_system_tray.exe      N/A      |
|    0   N/A  N/A     20532    C+G   ...ces\Razer Central\Razer Central.exe      N/A      |
|    0   N/A  N/A     23832    C+G   ...nipaste-2.8.5-Beta-x64\Snipaste.exe      N/A      |
|    0   N/A  N/A     24748    C+G   ...9\extracted\runtime\WeChatAppEx.exe      N/A      |

Check the PyTorch version and whether it can see the GPU:

>>> import torch
>>> print(torch.cuda.is_available())
True
>>> print(torch.__version__)
2.0.0+cu118

Check PyTorch/CUDA compatibility at:

https://pytorch.org/get-started/locally/
